docker pull envoyproxy/envoy:v1.19.0
docker pull curlimages/curl
docker pull mccutchen/go-httpbin
docker images
✅ Output
REPOSITORY             TAG       IMAGE ID       CREATED       SIZE
curlimages/curl        latest    e507f3e43db3   13 days ago   21.9MB
mccutchen/go-httpbin   latest    18fc7a0469d6   2 weeks ago   38.1MB
envoyproxy/envoy       v1.19.0   f48f130ac643   3 years ago   134MB

docker run -d -e PORT=8000 --name httpbin mccutchen/go-httpbin
# Result
8db4f5f5a1971082e620c4ec023b4799f4b6d1d73800c273df5ce683d49c78d9
docker ps
✅ Output
CONTAINER ID   IMAGE                  COMMAND             CREATED              STATUS              PORTS      NAMES
8db4f5f5a197   mccutchen/go-httpbin   "/bin/go-httpbin"   About a minute ago   Up About a minute   8080/tcp   httpbin
docker run -it --rm --link httpbin curlimages/curl curl -X GET http://httpbin:8000/headers
✅ Output
{
  "headers": {
    "Accept": [
      "*/*"
    ],
    "Host": [
      "httpbin:8000"
    ],
    "User-Agent": [
      "curl/8.13.0"
    ]
  }
}
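Since go-httpbin returns plain JSON, the response can also be processed programmatically instead of read by eye. A minimal sketch (assuming python3 is available; the response body shown above is saved to a local file here purely for illustration):

```shell
# Save a copy of the /headers response shown above, then extract the Host header.
cat > /tmp/headers.json <<'EOF'
{"headers":{"Accept":["*/*"],"Host":["httpbin:8000"],"User-Agent":["curl/8.13.0"]}}
EOF
python3 -c 'import json; print(json.load(open("/tmp/headers.json"))["headers"]["Host"][0])'
```

In practice the same one-liner can be appended to the curl invocation with a pipe.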
The /headers endpoint echoes back the headers that were sent with the request.

docker run -it --rm envoyproxy/envoy:v1.19.0 envoy --help
✅ Output
USAGE:
envoy [--enable-core-dump] [--socket-mode <string>] [--socket-path
<string>] [--disable-extensions <string>] [--cpuset-threads]
[--enable-mutex-tracing] [--disable-hot-restart] [--mode
<string>] [--parent-shutdown-time-s <uint32_t>] [--drain-strategy
<string>] [--drain-time-s <uint32_t>] [--file-flush-interval-msec
<uint32_t>] [--service-zone <string>] [--service-node <string>]
[--service-cluster <string>] [--hot-restart-version]
[--restart-epoch <uint32_t>] [--log-path <string>]
[--enable-fine-grain-logging] [--log-format-escaped]
[--log-format <string>] [--component-log-level <string>] [-l
<string>] [--local-address-ip-version <string>]
[--admin-address-path <string>] [--ignore-unknown-dynamic-fields]
[--reject-unknown-dynamic-fields] [--allow-unknown-static-fields]
[--allow-unknown-fields] [--bootstrap-version <string>]
[--config-yaml <string>] [-c <string>] [--concurrency <uint32_t>]
[--base-id-path <string>] [--use-dynamic-base-id] [--base-id
<uint32_t>] [--] [--version] [-h]
Where:
--enable-core-dump
Enable core dumps
--socket-mode <string>
Socket file permission
--socket-path <string>
Path to hot restart socket file
--disable-extensions <string>
Comma-separated list of extensions to disable
--cpuset-threads
Get the default # of worker threads from cpuset size
--enable-mutex-tracing
Enable mutex contention tracing functionality
--disable-hot-restart
Disable hot restart functionality
--mode <string>
One of 'serve' (default; validate configs and then serve traffic
normally) or 'validate' (validate configs and exit).
--parent-shutdown-time-s <uint32_t>
Hot restart parent shutdown time in seconds
--drain-strategy <string>
Hot restart drain sequence behaviour, one of 'gradual' (default) or
'immediate'.
--drain-time-s <uint32_t>
Hot restart and LDS removal drain time in seconds
--file-flush-interval-msec <uint32_t>
Interval for log flushing in msec
--service-zone <string> # Specifies the availability zone the proxy is deployed in
Zone name
--service-node <string> # Gives the proxy a unique name
Node name
--service-cluster <string>
Cluster name
--hot-restart-version
hot restart compatibility version
--restart-epoch <uint32_t>
hot restart epoch #
--log-path <string>
Path to logfile
--enable-fine-grain-logging
Logger mode: enable file level log control(Fancy Logger)or not
--log-format-escaped
Escape c-style escape sequences in the application logs
--log-format <string>
Log message format in spdlog syntax (see
https://github.com/gabime/spdlog/wiki/3.-Custom-formatting)
Default is "[%Y-%m-%d %T.%e][%t][%l][%n] [%g:%#] %v"
--component-log-level <string>
Comma separated list of component log levels. For example
upstream:debug,config:trace
-l <string>, --log-level <string>
Log levels: [trace][debug][info][warning|warn][error][critical][off]
Default is [info]
--local-address-ip-version <string>
The local IP address version (v4 or v6).
--admin-address-path <string>
Admin address path
--ignore-unknown-dynamic-fields
ignore unknown fields in dynamic configuration
--reject-unknown-dynamic-fields
reject unknown fields in dynamic configuration
--allow-unknown-static-fields
allow unknown fields in static configuration
--allow-unknown-fields
allow unknown fields in static configuration (DEPRECATED)
--bootstrap-version <string>
API version to parse the bootstrap config as (e.g. 3). If unset, all
known versions will be attempted
--config-yaml <string>
Inline YAML configuration, merges with the contents of --config-path
-c <string>, --config-path <string> # Passes the configuration file
Path to configuration file
--concurrency <uint32_t>
# of worker threads to run
--base-id-path <string>
path to which the base ID is written
--use-dynamic-base-id
the server chooses a base ID dynamically. Supersedes a static base ID.
May not be used when the restart epoch is non-zero.
--base-id <uint32_t>
base ID so that multiple envoys can run on the same host if needed
--, --ignore_rest
Ignores the rest of the labeled arguments following this flag.
--version
Displays version information and exits.
-h, --help
Displays usage information and exits.
Check the error that occurs when Envoy is run without a configuration file:
docker run -it --rm envoyproxy/envoy:v1.19.0 envoy
✅ Output
[2025-04-19 11:09:34.147][1][info][main] [source/server/server.cc:338] initializing epoch 0 (base id=0, hot restart version=11.104)
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:340] statically linked extensions:
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.filters.listener: envoy.filters.listener.http_inspector, envoy.filters.listener.original_dst, envoy.filters.listener.original_src, envoy.filters.listener.proxy_protocol, envoy.filters.listener.tls_inspector, envoy.listener.http_inspector, envoy.listener.original_dst, envoy.listener.original_src, envoy.listener.proxy_protocol, envoy.listener.tls_inspector
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.filters.udp_listener: envoy.filters.udp.dns_filter, envoy.filters.udp_listener.udp_proxy
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.clusters: envoy.cluster.eds, envoy.cluster.logical_dns, envoy.cluster.original_dst, envoy.cluster.static, envoy.cluster.strict_dns, envoy.clusters.aggregate, envoy.clusters.dynamic_forward_proxy, envoy.clusters.redis
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.filters.http: envoy.bandwidth_limit, envoy.buffer, envoy.cors, envoy.csrf, envoy.ext_authz, envoy.ext_proc, envoy.fault, envoy.filters.http.adaptive_concurrency, envoy.filters.http.admission_control, envoy.filters.http.alternate_protocols_cache, envoy.filters.http.aws_lambda, envoy.filters.http.aws_request_signing, envoy.filters.http.bandwidth_limit, envoy.filters.http.buffer, envoy.filters.http.cache, envoy.filters.http.cdn_loop, envoy.filters.http.composite, envoy.filters.http.compressor, envoy.filters.http.cors, envoy.filters.http.csrf, envoy.filters.http.decompressor, envoy.filters.http.dynamic_forward_proxy, envoy.filters.http.dynamo, envoy.filters.http.ext_authz, envoy.filters.http.ext_proc, envoy.filters.http.fault, envoy.filters.http.grpc_http1_bridge, envoy.filters.http.grpc_http1_reverse_bridge, envoy.filters.http.grpc_json_transcoder, envoy.filters.http.grpc_stats, envoy.filters.http.grpc_web, envoy.filters.http.header_to_metadata, envoy.filters.http.health_check, envoy.filters.http.ip_tagging, envoy.filters.http.jwt_authn, envoy.filters.http.local_ratelimit, envoy.filters.http.lua, envoy.filters.http.oauth2, envoy.filters.http.on_demand, envoy.filters.http.original_src, envoy.filters.http.ratelimit, envoy.filters.http.rbac, envoy.filters.http.router, envoy.filters.http.set_metadata, envoy.filters.http.squash, envoy.filters.http.tap, envoy.filters.http.wasm, envoy.grpc_http1_bridge, envoy.grpc_json_transcoder, envoy.grpc_web, envoy.health_check, envoy.http_dynamo_filter, envoy.ip_tagging, envoy.local_rate_limit, envoy.lua, envoy.rate_limit, envoy.router, envoy.squash, match-wrapper
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.tracers: envoy.dynamic.ot, envoy.lightstep, envoy.tracers.datadog, envoy.tracers.dynamic_ot, envoy.tracers.lightstep, envoy.tracers.opencensus, envoy.tracers.skywalking, envoy.tracers.xray, envoy.tracers.zipkin, envoy.zipkin
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.dubbo_proxy.route_matchers: default
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.retry_priorities: envoy.retry_priorities.previous_priorities
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.http.cache: envoy.extensions.http.cache.simple
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.matching.action: composite-action, skip
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.internal_redirect_predicates: envoy.internal_redirect_predicates.allow_listed_routes, envoy.internal_redirect_predicates.previous_routes, envoy.internal_redirect_predicates.safe_cross_scheme
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.guarddog_actions: envoy.watchdog.abort_action, envoy.watchdog.profile_action
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.formatter: envoy.formatter.req_without_query
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.stats_sinks: envoy.dog_statsd, envoy.graphite_statsd, envoy.metrics_service, envoy.stat_sinks.dog_statsd, envoy.stat_sinks.graphite_statsd, envoy.stat_sinks.hystrix, envoy.stat_sinks.metrics_service, envoy.stat_sinks.statsd, envoy.stat_sinks.wasm, envoy.statsd
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.thrift_proxy.filters: envoy.filters.thrift.rate_limit, envoy.filters.thrift.router
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.matching.http.input: request-headers, request-trailers, response-headers, response-trailers
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.health_checkers: envoy.health_checkers.redis
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.resolvers: envoy.ip
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.dubbo_proxy.protocols: dubbo
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.bootstrap: envoy.bootstrap.wasm, envoy.extensions.network.socket_interface.default_socket_interface
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.retry_host_predicates: envoy.retry_host_predicates.omit_canary_hosts, envoy.retry_host_predicates.omit_host_metadata, envoy.retry_host_predicates.previous_hosts
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.thrift_proxy.transports: auto, framed, header, unframed
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.transport_sockets.downstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.starttls, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, starttls, tls
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.quic.server.crypto_stream: envoy.quic.crypto_stream.server.quiche
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.wasm.runtime: envoy.wasm.runtime.null, envoy.wasm.runtime.v8
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.rate_limit_descriptors: envoy.rate_limit_descriptors.expr
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.access_loggers: envoy.access_loggers.file, envoy.access_loggers.http_grpc, envoy.access_loggers.open_telemetry, envoy.access_loggers.stderr, envoy.access_loggers.stdout, envoy.access_loggers.tcp_grpc, envoy.access_loggers.wasm, envoy.file_access_log, envoy.http_grpc_access_log, envoy.open_telemetry_access_log, envoy.stderr_access_log, envoy.stdout_access_log, envoy.tcp_grpc_access_log, envoy.wasm_access_log
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.compression.compressor: envoy.compression.brotli.compressor, envoy.compression.gzip.compressor
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.request_id: envoy.request_id.uuid
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.dubbo_proxy.filters: envoy.filters.dubbo.router
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.quic.proof_source: envoy.quic.proof_source.filter_chain
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.http.original_ip_detection: envoy.http.original_ip_detection.custom_header, envoy.http.original_ip_detection.xff
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.matching.common_inputs: envoy.matching.common_inputs.environment_variable
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.http.stateful_header_formatters: preserve_case
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.compression.decompressor: envoy.compression.brotli.decompressor, envoy.compression.gzip.decompressor
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.matching.input_matchers: envoy.matching.matchers.consistent_hashing, envoy.matching.matchers.ip
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.resource_monitors: envoy.resource_monitors.fixed_heap, envoy.resource_monitors.injected_resource
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.dubbo_proxy.serializers: dubbo.hessian2
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.upstream_options: envoy.extensions.upstreams.http.v3.HttpProtocolOptions, envoy.upstreams.http.http_protocol_options
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.tls.cert_validator: envoy.tls.cert_validator.default, envoy.tls.cert_validator.spiffe
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.grpc_credentials: envoy.grpc_credentials.aws_iam, envoy.grpc_credentials.default, envoy.grpc_credentials.file_based_metadata
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.filters.network: envoy.client_ssl_auth, envoy.echo, envoy.ext_authz, envoy.filters.network.client_ssl_auth, envoy.filters.network.connection_limit, envoy.filters.network.direct_response, envoy.filters.network.dubbo_proxy, envoy.filters.network.echo, envoy.filters.network.ext_authz, envoy.filters.network.http_connection_manager, envoy.filters.network.kafka_broker, envoy.filters.network.local_ratelimit, envoy.filters.network.mongo_proxy, envoy.filters.network.mysql_proxy, envoy.filters.network.postgres_proxy, envoy.filters.network.ratelimit, envoy.filters.network.rbac, envoy.filters.network.redis_proxy, envoy.filters.network.rocketmq_proxy, envoy.filters.network.sni_cluster, envoy.filters.network.sni_dynamic_forward_proxy, envoy.filters.network.tcp_proxy, envoy.filters.network.thrift_proxy, envoy.filters.network.wasm, envoy.filters.network.zookeeper_proxy, envoy.http_connection_manager, envoy.mongo_proxy, envoy.ratelimit, envoy.redis_proxy, envoy.tcp_proxy
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.transport_sockets.upstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.starttls, envoy.transport_sockets.tap, envoy.transport_sockets.tls, envoy.transport_sockets.upstream_proxy_protocol, raw_buffer, starttls, tls
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.upstreams: envoy.filters.connection_pools.tcp.generic
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:342] envoy.thrift_proxy.protocols: auto, binary, binary/non-strict, compact, twitter
[2025-04-19 11:09:34.148][1][critical][main] [source/server/server.cc:112] error initializing configuration '': At least one of --config-path or --config-yaml or Options::configProto() should be non-empty
[2025-04-19 11:09:34.148][1][info][main] [source/server/server.cc:855] exiting
At least one of --config-path or --config-yaml or Options::configProto() should be non-empty
cat ch3/simple.yaml
✅ Output
admin:
  address:
    socket_address: { address: 0.0.0.0, port_value: 15000 }
static_resources:
  listeners:
  - name: httpbin-demo
    address:
      socket_address: { address: 0.0.0.0, port_value: 15001 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          http_filters:
          - name: envoy.filters.http.router
          route_config:
            name: httpbin_local_route
            virtual_hosts:
            - name: httpbin_local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  auto_host_rewrite: true
                  cluster: httpbin_service
  clusters:
  - name: httpbin_service
    connect_timeout: 5s
    type: LOGICAL_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: httpbin
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: httpbin
                port_value: 8000
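The config file is handed to Envoy inline via --config-yaml "$(cat ch3/simple.yaml)": shell command substitution expands to the file's entire contents, so no volume mount is needed. A minimal sketch of that mechanism, with a throwaway file standing in for simple.yaml:

```shell
# Command substitution inlines a file's contents, preserving interior newlines.
printf 'admin:\n  address: {}\n' > /tmp/demo.yaml
CONFIG="$(cat /tmp/demo.yaml)"   # the trailing newline is stripped by $( )
echo "$CONFIG"
```

Quoting the substitution ("$(cat …)") matters: without the quotes, the shell would word-split the YAML and Envoy would receive only its first token.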
docker run --name proxy --link httpbin envoyproxy/envoy:v1.19.0 --config-yaml "$(cat ch3/simple.yaml)"
✅ Output
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:338] initializing epoch 0 (base id=0, hot restart version=11.104)
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:340] statically linked extensions:
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.thrift_proxy.transports: auto, framed, header, unframed
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.http.stateful_header_formatters: preserve_case
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.thrift_proxy.filters: envoy.filters.thrift.rate_limit, envoy.filters.thrift.router
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.filters.network: envoy.client_ssl_auth, envoy.echo, envoy.ext_authz, envoy.filters.network.client_ssl_auth, envoy.filters.network.connection_limit, envoy.filters.network.direct_response, envoy.filters.network.dubbo_proxy, envoy.filters.network.echo, envoy.filters.network.ext_authz, envoy.filters.network.http_connection_manager, envoy.filters.network.kafka_broker, envoy.filters.network.local_ratelimit, envoy.filters.network.mongo_proxy, envoy.filters.network.mysql_proxy, envoy.filters.network.postgres_proxy, envoy.filters.network.ratelimit, envoy.filters.network.rbac, envoy.filters.network.redis_proxy, envoy.filters.network.rocketmq_proxy, envoy.filters.network.sni_cluster, envoy.filters.network.sni_dynamic_forward_proxy, envoy.filters.network.tcp_proxy, envoy.filters.network.thrift_proxy, envoy.filters.network.wasm, envoy.filters.network.zookeeper_proxy, envoy.http_connection_manager, envoy.mongo_proxy, envoy.ratelimit, envoy.redis_proxy, envoy.tcp_proxy
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.health_checkers: envoy.health_checkers.redis
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.transport_sockets.upstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.starttls, envoy.transport_sockets.tap, envoy.transport_sockets.tls, envoy.transport_sockets.upstream_proxy_protocol, raw_buffer, starttls, tls
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.filters.http: envoy.bandwidth_limit, envoy.buffer, envoy.cors, envoy.csrf, envoy.ext_authz, envoy.ext_proc, envoy.fault, envoy.filters.http.adaptive_concurrency, envoy.filters.http.admission_control, envoy.filters.http.alternate_protocols_cache, envoy.filters.http.aws_lambda, envoy.filters.http.aws_request_signing, envoy.filters.http.bandwidth_limit, envoy.filters.http.buffer, envoy.filters.http.cache, envoy.filters.http.cdn_loop, envoy.filters.http.composite, envoy.filters.http.compressor, envoy.filters.http.cors, envoy.filters.http.csrf, envoy.filters.http.decompressor, envoy.filters.http.dynamic_forward_proxy, envoy.filters.http.dynamo, envoy.filters.http.ext_authz, envoy.filters.http.ext_proc, envoy.filters.http.fault, envoy.filters.http.grpc_http1_bridge, envoy.filters.http.grpc_http1_reverse_bridge, envoy.filters.http.grpc_json_transcoder, envoy.filters.http.grpc_stats, envoy.filters.http.grpc_web, envoy.filters.http.header_to_metadata, envoy.filters.http.health_check, envoy.filters.http.ip_tagging, envoy.filters.http.jwt_authn, envoy.filters.http.local_ratelimit, envoy.filters.http.lua, envoy.filters.http.oauth2, envoy.filters.http.on_demand, envoy.filters.http.original_src, envoy.filters.http.ratelimit, envoy.filters.http.rbac, envoy.filters.http.router, envoy.filters.http.set_metadata, envoy.filters.http.squash, envoy.filters.http.tap, envoy.filters.http.wasm, envoy.grpc_http1_bridge, envoy.grpc_json_transcoder, envoy.grpc_web, envoy.health_check, envoy.http_dynamo_filter, envoy.ip_tagging, envoy.local_rate_limit, envoy.lua, envoy.rate_limit, envoy.router, envoy.squash, match-wrapper
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.matching.http.input: request-headers, request-trailers, response-headers, response-trailers
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.thrift_proxy.protocols: auto, binary, binary/non-strict, compact, twitter
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.quic.server.crypto_stream: envoy.quic.crypto_stream.server.quiche
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.upstream_options: envoy.extensions.upstreams.http.v3.HttpProtocolOptions, envoy.upstreams.http.http_protocol_options
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.request_id: envoy.request_id.uuid
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.retry_host_predicates: envoy.retry_host_predicates.omit_canary_hosts, envoy.retry_host_predicates.omit_host_metadata, envoy.retry_host_predicates.previous_hosts
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.compression.compressor: envoy.compression.brotli.compressor, envoy.compression.gzip.compressor
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.resource_monitors: envoy.resource_monitors.fixed_heap, envoy.resource_monitors.injected_resource
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.dubbo_proxy.route_matchers: default
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.rate_limit_descriptors: envoy.rate_limit_descriptors.expr
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.filters.listener: envoy.filters.listener.http_inspector, envoy.filters.listener.original_dst, envoy.filters.listener.original_src, envoy.filters.listener.proxy_protocol, envoy.filters.listener.tls_inspector, envoy.listener.http_inspector, envoy.listener.original_dst, envoy.listener.original_src, envoy.listener.proxy_protocol, envoy.listener.tls_inspector
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.filters.udp_listener: envoy.filters.udp.dns_filter, envoy.filters.udp_listener.udp_proxy
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.tls.cert_validator: envoy.tls.cert_validator.default, envoy.tls.cert_validator.spiffe
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.http.original_ip_detection: envoy.http.original_ip_detection.custom_header, envoy.http.original_ip_detection.xff
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.dubbo_proxy.filters: envoy.filters.dubbo.router
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.matching.action: composite-action, skip
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.stats_sinks: envoy.dog_statsd, envoy.graphite_statsd, envoy.metrics_service, envoy.stat_sinks.dog_statsd, envoy.stat_sinks.graphite_statsd, envoy.stat_sinks.hystrix, envoy.stat_sinks.metrics_service, envoy.stat_sinks.statsd, envoy.stat_sinks.wasm, envoy.statsd
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.formatter: envoy.formatter.req_without_query
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.upstreams: envoy.filters.connection_pools.tcp.generic
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.dubbo_proxy.serializers: dubbo.hessian2
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.transport_sockets.downstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.starttls, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, starttls, tls
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.tracers: envoy.dynamic.ot, envoy.lightstep, envoy.tracers.datadog, envoy.tracers.dynamic_ot, envoy.tracers.lightstep, envoy.tracers.opencensus, envoy.tracers.skywalking, envoy.tracers.xray, envoy.tracers.zipkin, envoy.zipkin
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.guarddog_actions: envoy.watchdog.abort_action, envoy.watchdog.profile_action
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.resolvers: envoy.ip
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.bootstrap: envoy.bootstrap.wasm, envoy.extensions.network.socket_interface.default_socket_interface
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.access_loggers: envoy.access_loggers.file, envoy.access_loggers.http_grpc, envoy.access_loggers.open_telemetry, envoy.access_loggers.stderr, envoy.access_loggers.stdout, envoy.access_loggers.tcp_grpc, envoy.access_loggers.wasm, envoy.file_access_log, envoy.http_grpc_access_log, envoy.open_telemetry_access_log, envoy.stderr_access_log, envoy.stdout_access_log, envoy.tcp_grpc_access_log, envoy.wasm_access_log
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.wasm.runtime: envoy.wasm.runtime.null, envoy.wasm.runtime.v8
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.clusters: envoy.cluster.eds, envoy.cluster.logical_dns, envoy.cluster.original_dst, envoy.cluster.static, envoy.cluster.strict_dns, envoy.clusters.aggregate, envoy.clusters.dynamic_forward_proxy, envoy.clusters.redis
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.dubbo_proxy.protocols: dubbo
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.internal_redirect_predicates: envoy.internal_redirect_predicates.allow_listed_routes, envoy.internal_redirect_predicates.previous_routes, envoy.internal_redirect_predicates.safe_cross_scheme
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.matching.input_matchers: envoy.matching.matchers.consistent_hashing, envoy.matching.matchers.ip
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.retry_priorities: envoy.retry_priorities.previous_priorities
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.http.cache: envoy.extensions.http.cache.simple
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.compression.decompressor: envoy.compression.brotli.decompressor, envoy.compression.gzip.decompressor
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.grpc_credentials: envoy.grpc_credentials.aws_iam, envoy.grpc_credentials.default, envoy.grpc_credentials.file_based_metadata
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.matching.common_inputs: envoy.matching.common_inputs.environment_variable
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.quic.proof_source: envoy.quic.proof_source.filter_chain
[2025-04-19 11:17:53.161][1][info][main] [source/server/server.cc:358] HTTP header map info:
[2025-04-19 11:17:53.162][1][info][main] [source/server/server.cc:361] request header map: 632 bytes: :authority,:method,:path,:protocol,:scheme,accept,accept-encoding,access-control-request-method,authentication,authorization,cache-control,cdn-loop,connection,content-encoding,content-length,content-type,expect,grpc-accept-encoding,grpc-timeout,if-match,if-modified-since,if-none-match,if-range,if-unmodified-since,keep-alive,origin,pragma,proxy-connection,referer,te,transfer-encoding,upgrade,user-agent,via,x-client-trace-id,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-downstream-service-cluster,x-envoy-downstream-service-node,x-envoy-expected-rq-timeout-ms,x-envoy-external-address,x-envoy-force-trace,x-envoy-hedge-on-per-try-timeout,x-envoy-internal,x-envoy-ip-tags,x-envoy-max-retries,x-envoy-original-path,x-envoy-original-url,x-envoy-retriable-header-names,x-envoy-retriable-status-codes,x-envoy-retry-grpc-on,x-envoy-retry-on,x-envoy-upstream-alt-stat-name,x-envoy-upstream-rq-per-try-timeout-ms,x-envoy-upstream-rq-timeout-alt-response,x-envoy-upstream-rq-timeout-ms,x-forwarded-client-cert,x-forwarded-for,x-forwarded-proto,x-ot-span-context,x-request-id
[2025-04-19 11:17:53.162][1][info][main] [source/server/server.cc:361] request trailer map: 136 bytes:
[2025-04-19 11:17:53.162][1][info][main] [source/server/server.cc:361] response header map: 432 bytes: :status,access-control-allow-credentials,access-control-allow-headers,access-control-allow-methods,access-control-allow-origin,access-control-expose-headers,access-control-max-age,age,cache-control,connection,content-encoding,content-length,content-type,date,etag,expires,grpc-message,grpc-status,keep-alive,last-modified,location,proxy-connection,server,transfer-encoding,upgrade,vary,via,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-degraded,x-envoy-immediate-health-check-fail,x-envoy-ratelimited,x-envoy-upstream-canary,x-envoy-upstream-healthchecked-cluster,x-envoy-upstream-service-time,x-request-id
[2025-04-19 11:17:53.162][1][info][main] [source/server/server.cc:361] response trailer map: 160 bytes: grpc-message,grpc-status
[2025-04-19 11:17:53.179][1][info][admin] [source/server/admin/admin.cc:132] admin address: 0.0.0.0:15000
[2025-04-19 11:17:53.179][1][info][main] [source/server/server.cc:707] runtime: {}
[2025-04-19 11:17:53.179][1][info][config] [source/server/configuration_impl.cc:127] loading tracing configuration
[2025-04-19 11:17:53.179][1][info][config] [source/server/configuration_impl.cc:87] loading 0 static secret(s)
[2025-04-19 11:17:53.179][1][info][config] [source/server/configuration_impl.cc:93] loading 1 cluster(s)
[2025-04-19 11:17:53.180][1][info][config] [source/server/configuration_impl.cc:97] loading 1 listener(s)
[2025-04-19 11:17:53.181][1][info][config] [source/server/configuration_impl.cc:109] loading stats configuration
[2025-04-19 11:17:53.181][1][info][runtime] [source/common/runtime/runtime_impl.cc:449] RTDS has finished initialization
[2025-04-19 11:17:53.181][1][info][upstream] [source/common/upstream/cluster_manager_impl.cc:206] cm init: all clusters initialized
[2025-04-19 11:17:53.181][1][warning][main] [source/server/server.cc:682] there is no configured limit to the number of allowed active connections. Set a limit via the runtime key overload.global_downstream_max_connections
[2025-04-19 11:17:53.182][1][info][main] [source/server/server.cc:785] all clusters initialized. initializing init manager
[2025-04-19 11:17:53.182][1][info][config] [source/server/listener_manager_impl.cc:834] all dependencies initialized. starting workers
[2025-04-19 11:17:53.183][1][info][main] [source/server/server.cc:804] starting main dispatch loop
docker logs proxy
✅ Output
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:338] initializing epoch 0 (base id=0, hot restart version=11.104)
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:340] statically linked extensions:
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.thrift_proxy.transports: auto, framed, header, unframed
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.http.stateful_header_formatters: preserve_case
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.thrift_proxy.filters: envoy.filters.thrift.rate_limit, envoy.filters.thrift.router
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.filters.network: envoy.client_ssl_auth, envoy.echo, envoy.ext_authz, envoy.filters.network.client_ssl_auth, envoy.filters.network.connection_limit, envoy.filters.network.direct_response, envoy.filters.network.dubbo_proxy, envoy.filters.network.echo, envoy.filters.network.ext_authz, envoy.filters.network.http_connection_manager, envoy.filters.network.kafka_broker, envoy.filters.network.local_ratelimit, envoy.filters.network.mongo_proxy, envoy.filters.network.mysql_proxy, envoy.filters.network.postgres_proxy, envoy.filters.network.ratelimit, envoy.filters.network.rbac, envoy.filters.network.redis_proxy, envoy.filters.network.rocketmq_proxy, envoy.filters.network.sni_cluster, envoy.filters.network.sni_dynamic_forward_proxy, envoy.filters.network.tcp_proxy, envoy.filters.network.thrift_proxy, envoy.filters.network.wasm, envoy.filters.network.zookeeper_proxy, envoy.http_connection_manager, envoy.mongo_proxy, envoy.ratelimit, envoy.redis_proxy, envoy.tcp_proxy
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.health_checkers: envoy.health_checkers.redis
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.transport_sockets.upstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.starttls, envoy.transport_sockets.tap, envoy.transport_sockets.tls, envoy.transport_sockets.upstream_proxy_protocol, raw_buffer, starttls, tls
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.filters.http: envoy.bandwidth_limit, envoy.buffer, envoy.cors, envoy.csrf, envoy.ext_authz, envoy.ext_proc, envoy.fault, envoy.filters.http.adaptive_concurrency, envoy.filters.http.admission_control, envoy.filters.http.alternate_protocols_cache, envoy.filters.http.aws_lambda, envoy.filters.http.aws_request_signing, envoy.filters.http.bandwidth_limit, envoy.filters.http.buffer, envoy.filters.http.cache, envoy.filters.http.cdn_loop, envoy.filters.http.composite, envoy.filters.http.compressor, envoy.filters.http.cors, envoy.filters.http.csrf, envoy.filters.http.decompressor, envoy.filters.http.dynamic_forward_proxy, envoy.filters.http.dynamo, envoy.filters.http.ext_authz, envoy.filters.http.ext_proc, envoy.filters.http.fault, envoy.filters.http.grpc_http1_bridge, envoy.filters.http.grpc_http1_reverse_bridge, envoy.filters.http.grpc_json_transcoder, envoy.filters.http.grpc_stats, envoy.filters.http.grpc_web, envoy.filters.http.header_to_metadata, envoy.filters.http.health_check, envoy.filters.http.ip_tagging, envoy.filters.http.jwt_authn, envoy.filters.http.local_ratelimit, envoy.filters.http.lua, envoy.filters.http.oauth2, envoy.filters.http.on_demand, envoy.filters.http.original_src, envoy.filters.http.ratelimit, envoy.filters.http.rbac, envoy.filters.http.router, envoy.filters.http.set_metadata, envoy.filters.http.squash, envoy.filters.http.tap, envoy.filters.http.wasm, envoy.grpc_http1_bridge, envoy.grpc_json_transcoder, envoy.grpc_web, envoy.health_check, envoy.http_dynamo_filter, envoy.ip_tagging, envoy.local_rate_limit, envoy.lua, envoy.rate_limit, envoy.router, envoy.squash, match-wrapper
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.matching.http.input: request-headers, request-trailers, response-headers, response-trailers
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.thrift_proxy.protocols: auto, binary, binary/non-strict, compact, twitter
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.quic.server.crypto_stream: envoy.quic.crypto_stream.server.quiche
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.upstream_options: envoy.extensions.upstreams.http.v3.HttpProtocolOptions, envoy.upstreams.http.http_protocol_options
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.request_id: envoy.request_id.uuid
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.retry_host_predicates: envoy.retry_host_predicates.omit_canary_hosts, envoy.retry_host_predicates.omit_host_metadata, envoy.retry_host_predicates.previous_hosts
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.compression.compressor: envoy.compression.brotli.compressor, envoy.compression.gzip.compressor
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.resource_monitors: envoy.resource_monitors.fixed_heap, envoy.resource_monitors.injected_resource
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.dubbo_proxy.route_matchers: default
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.rate_limit_descriptors: envoy.rate_limit_descriptors.expr
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.filters.listener: envoy.filters.listener.http_inspector, envoy.filters.listener.original_dst, envoy.filters.listener.original_src, envoy.filters.listener.proxy_protocol, envoy.filters.listener.tls_inspector, envoy.listener.http_inspector, envoy.listener.original_dst, envoy.listener.original_src, envoy.listener.proxy_protocol, envoy.listener.tls_inspector
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.filters.udp_listener: envoy.filters.udp.dns_filter, envoy.filters.udp_listener.udp_proxy
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.tls.cert_validator: envoy.tls.cert_validator.default, envoy.tls.cert_validator.spiffe
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.http.original_ip_detection: envoy.http.original_ip_detection.custom_header, envoy.http.original_ip_detection.xff
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.dubbo_proxy.filters: envoy.filters.dubbo.router
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.matching.action: composite-action, skip
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.stats_sinks: envoy.dog_statsd, envoy.graphite_statsd, envoy.metrics_service, envoy.stat_sinks.dog_statsd, envoy.stat_sinks.graphite_statsd, envoy.stat_sinks.hystrix, envoy.stat_sinks.metrics_service, envoy.stat_sinks.statsd, envoy.stat_sinks.wasm, envoy.statsd
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.formatter: envoy.formatter.req_without_query
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.upstreams: envoy.filters.connection_pools.tcp.generic
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.dubbo_proxy.serializers: dubbo.hessian2
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.transport_sockets.downstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.starttls, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, starttls, tls
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.tracers: envoy.dynamic.ot, envoy.lightstep, envoy.tracers.datadog, envoy.tracers.dynamic_ot, envoy.tracers.lightstep, envoy.tracers.opencensus, envoy.tracers.skywalking, envoy.tracers.xray, envoy.tracers.zipkin, envoy.zipkin
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.guarddog_actions: envoy.watchdog.abort_action, envoy.watchdog.profile_action
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.resolvers: envoy.ip
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.bootstrap: envoy.bootstrap.wasm, envoy.extensions.network.socket_interface.default_socket_interface
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.access_loggers: envoy.access_loggers.file, envoy.access_loggers.http_grpc, envoy.access_loggers.open_telemetry, envoy.access_loggers.stderr, envoy.access_loggers.stdout, envoy.access_loggers.tcp_grpc, envoy.access_loggers.wasm, envoy.file_access_log, envoy.http_grpc_access_log, envoy.open_telemetry_access_log, envoy.stderr_access_log, envoy.stdout_access_log, envoy.tcp_grpc_access_log, envoy.wasm_access_log
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.wasm.runtime: envoy.wasm.runtime.null, envoy.wasm.runtime.v8
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.clusters: envoy.cluster.eds, envoy.cluster.logical_dns, envoy.cluster.original_dst, envoy.cluster.static, envoy.cluster.strict_dns, envoy.clusters.aggregate, envoy.clusters.dynamic_forward_proxy, envoy.clusters.redis
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.dubbo_proxy.protocols: dubbo
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.internal_redirect_predicates: envoy.internal_redirect_predicates.allow_listed_routes, envoy.internal_redirect_predicates.previous_routes, envoy.internal_redirect_predicates.safe_cross_scheme
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.matching.input_matchers: envoy.matching.matchers.consistent_hashing, envoy.matching.matchers.ip
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.retry_priorities: envoy.retry_priorities.previous_priorities
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.http.cache: envoy.extensions.http.cache.simple
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.compression.decompressor: envoy.compression.brotli.decompressor, envoy.compression.gzip.decompressor
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.grpc_credentials: envoy.grpc_credentials.aws_iam, envoy.grpc_credentials.default, envoy.grpc_credentials.file_based_metadata
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.matching.common_inputs: envoy.matching.common_inputs.environment_variable
[2025-04-19 11:17:53.156][1][info][main] [source/server/server.cc:342] envoy.quic.proof_source: envoy.quic.proof_source.filter_chain
[2025-04-19 11:17:53.161][1][info][main] [source/server/server.cc:358] HTTP header map info:
[2025-04-19 11:17:53.162][1][info][main] [source/server/server.cc:361] request header map: 632 bytes: :authority,:method,:path,:protocol,:scheme,accept,accept-encoding,access-control-request-method,authentication,authorization,cache-control,cdn-loop,connection,content-encoding,content-length,content-type,expect,grpc-accept-encoding,grpc-timeout,if-match,if-modified-since,if-none-match,if-range,if-unmodified-since,keep-alive,origin,pragma,proxy-connection,referer,te,transfer-encoding,upgrade,user-agent,via,x-client-trace-id,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-downstream-service-cluster,x-envoy-downstream-service-node,x-envoy-expected-rq-timeout-ms,x-envoy-external-address,x-envoy-force-trace,x-envoy-hedge-on-per-try-timeout,x-envoy-internal,x-envoy-ip-tags,x-envoy-max-retries,x-envoy-original-path,x-envoy-original-url,x-envoy-retriable-header-names,x-envoy-retriable-status-codes,x-envoy-retry-grpc-on,x-envoy-retry-on,x-envoy-upstream-alt-stat-name,x-envoy-upstream-rq-per-try-timeout-ms,x-envoy-upstream-rq-timeout-alt-response,x-envoy-upstream-rq-timeout-ms,x-forwarded-client-cert,x-forwarded-for,x-forwarded-proto,x-ot-span-context,x-request-id
[2025-04-19 11:17:53.162][1][info][main] [source/server/server.cc:361] request trailer map: 136 bytes:
[2025-04-19 11:17:53.162][1][info][main] [source/server/server.cc:361] response header map: 432 bytes: :status,access-control-allow-credentials,access-control-allow-headers,access-control-allow-methods,access-control-allow-origin,access-control-expose-headers,access-control-max-age,age,cache-control,connection,content-encoding,content-length,content-type,date,etag,expires,grpc-message,grpc-status,keep-alive,last-modified,location,proxy-connection,server,transfer-encoding,upgrade,vary,via,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-degraded,x-envoy-immediate-health-check-fail,x-envoy-ratelimited,x-envoy-upstream-canary,x-envoy-upstream-healthchecked-cluster,x-envoy-upstream-service-time,x-request-id
[2025-04-19 11:17:53.162][1][info][main] [source/server/server.cc:361] response trailer map: 160 bytes: grpc-message,grpc-status
[2025-04-19 11:17:53.179][1][info][admin] [source/server/admin/admin.cc:132] admin address: 0.0.0.0:15000
[2025-04-19 11:17:53.179][1][info][main] [source/server/server.cc:707] runtime: {}
[2025-04-19 11:17:53.179][1][info][config] [source/server/configuration_impl.cc:127] loading tracing configuration
[2025-04-19 11:17:53.179][1][info][config] [source/server/configuration_impl.cc:87] loading 0 static secret(s)
[2025-04-19 11:17:53.179][1][info][config] [source/server/configuration_impl.cc:93] loading 1 cluster(s)
[2025-04-19 11:17:53.180][1][info][config] [source/server/configuration_impl.cc:97] loading 1 listener(s)
[2025-04-19 11:17:53.181][1][info][config] [source/server/configuration_impl.cc:109] loading stats configuration
[2025-04-19 11:17:53.181][1][info][runtime] [source/common/runtime/runtime_impl.cc:449] RTDS has finished initialization
[2025-04-19 11:17:53.181][1][info][upstream] [source/common/upstream/cluster_manager_impl.cc:206] cm init: all clusters initialized
[2025-04-19 11:17:53.181][1][warning][main] [source/server/server.cc:682] there is no configured limit to the number of allowed active connections. Set a limit via the runtime key overload.global_downstream_max_connections
[2025-04-19 11:17:53.182][1][info][main] [source/server/server.cc:785] all clusters initialized. initializing init manager
[2025-04-19 11:17:53.182][1][info][config] [source/server/listener_manager_impl.cc:834] all dependencies initialized. starting workers
[2025-04-19 11:17:53.183][1][info][main] [source/server/server.cc:804] starting main dispatch loop
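Before the routing test, it can help to confirm the proxy is actually serving. A minimal sketch, assuming the `proxy` container from above is still running: Envoy's admin interface exposes a `/server_info` endpoint that reports the build version and the server state (`LIVE` once the workers have started).

```shell
# Query Envoy's admin interface (port 15000) to confirm the proxy is live;
# /server_info reports the version and server state before we test routing.
admin_url="http://proxy:15000/server_info"
docker run -it --rm --link proxy curlimages/curl curl -s "$admin_url"
```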
Use the curl container to send a request to Envoy (port 15001) and confirm that it is routed to the httpbin service
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15001/headers
✅ Output
{
  "headers": {
    "Accept": [
      "*/*"
    ],
    "Host": [
      "httpbin"
    ],
    "User-Agent": [
      "curl/8.13.0"
    ],
    "X-Envoy-Expected-Rq-Timeout-Ms": [
      "15000"
    ],
    "X-Forwarded-Proto": [
      "http"
    ],
    "X-Request-Id": [
      "9829b5f3-038c-40d0-ac6b-0f856581a5a5"
    ]
  }
}
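The routed request should also show up in Envoy's statistics. A sketch, assuming the `proxy` container is still running: the admin `/stats` endpoint accepts a `filter` query parameter, and upstream request counters are prefixed with `cluster.<cluster-name>.`, so the `httpbin_service` cluster from the config can be inspected directly.

```shell
# List the upstream request counters for the httpbin_service cluster;
# the filter parameter narrows /stats output to matching stat names.
stats_url="http://proxy:15000/stats?filter=cluster.httpbin_service.upstream_rq"
docker run -it --rm --link proxy curlimages/curl curl -s "$stats_url"
```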
docker rm -f proxy
# Result
proxy
Add timeout: 1s to the routing rule
cat ch3/simple_change_timeout.yaml
✅ Output
admin:
  address:
    socket_address: { address: 0.0.0.0, port_value: 15000 }
static_resources:
  listeners:
  - name: httpbin-demo
    address:
      socket_address: { address: 0.0.0.0, port_value: 15001 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          http_filters:
          - name: envoy.filters.http.router
          route_config:
            name: httpbin_local_route
            virtual_hosts:
            - name: httpbin_local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  auto_host_rewrite: true
                  cluster: httpbin_service
                  timeout: 1s  # 1-second route timeout
  clusters:
  - name: httpbin_service
    connect_timeout: 5s
    type: LOGICAL_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: httpbin
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: httpbin
                port_value: 8000
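The edited config can be sanity-checked before (re)starting the proxy. A sketch, assuming the file path from the listing above: `envoy --help` lists a `--mode` flag, and Envoy's validate mode parses and type-checks the bootstrap, then exits instead of serving traffic.

```shell
# Parse and validate the bootstrap without starting the proxy;
# a config error is reported here instead of at startup.
cfg="ch3/simple_change_timeout.yaml"
docker run -it --rm envoyproxy/envoy:v1.19.0 \
  envoy --mode validate --config-yaml "$(cat "$cfg" 2>/dev/null)"
```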
docker run --name proxy --link httpbin envoyproxy/envoy:v1.19.0 --config-yaml "$(cat ch3/simple_change_timeout.yaml)"
✅ Output
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:338] initializing epoch 0 (base id=0, hot restart version=11.104)
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:340] statically linked extensions:
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.resolvers: envoy.ip
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.thrift_proxy.protocols: auto, binary, binary/non-strict, compact, twitter
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.upstreams: envoy.filters.connection_pools.tcp.generic
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.compression.decompressor: envoy.compression.brotli.decompressor, envoy.compression.gzip.decompressor
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.health_checkers: envoy.health_checkers.redis
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.matching.input_matchers: envoy.matching.matchers.consistent_hashing, envoy.matching.matchers.ip
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.filters.udp_listener: envoy.filters.udp.dns_filter, envoy.filters.udp_listener.udp_proxy
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.guarddog_actions: envoy.watchdog.abort_action, envoy.watchdog.profile_action
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.request_id: envoy.request_id.uuid
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.matching.common_inputs: envoy.matching.common_inputs.environment_variable
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.grpc_credentials: envoy.grpc_credentials.aws_iam, envoy.grpc_credentials.default, envoy.grpc_credentials.file_based_metadata
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.rate_limit_descriptors: envoy.rate_limit_descriptors.expr
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.upstream_options: envoy.extensions.upstreams.http.v3.HttpProtocolOptions, envoy.upstreams.http.http_protocol_options
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.matching.action: composite-action, skip
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.filters.network: envoy.client_ssl_auth, envoy.echo, envoy.ext_authz, envoy.filters.network.client_ssl_auth, envoy.filters.network.connection_limit, envoy.filters.network.direct_response, envoy.filters.network.dubbo_proxy, envoy.filters.network.echo, envoy.filters.network.ext_authz, envoy.filters.network.http_connection_manager, envoy.filters.network.kafka_broker, envoy.filters.network.local_ratelimit, envoy.filters.network.mongo_proxy, envoy.filters.network.mysql_proxy, envoy.filters.network.postgres_proxy, envoy.filters.network.ratelimit, envoy.filters.network.rbac, envoy.filters.network.redis_proxy, envoy.filters.network.rocketmq_proxy, envoy.filters.network.sni_cluster, envoy.filters.network.sni_dynamic_forward_proxy, envoy.filters.network.tcp_proxy, envoy.filters.network.thrift_proxy, envoy.filters.network.wasm, envoy.filters.network.zookeeper_proxy, envoy.http_connection_manager, envoy.mongo_proxy, envoy.ratelimit, envoy.redis_proxy, envoy.tcp_proxy
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.dubbo_proxy.protocols: dubbo
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.thrift_proxy.transports: auto, framed, header, unframed
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.tracers: envoy.dynamic.ot, envoy.lightstep, envoy.tracers.datadog, envoy.tracers.dynamic_ot, envoy.tracers.lightstep, envoy.tracers.opencensus, envoy.tracers.skywalking, envoy.tracers.xray, envoy.tracers.zipkin, envoy.zipkin
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.wasm.runtime: envoy.wasm.runtime.null, envoy.wasm.runtime.v8
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.http.original_ip_detection: envoy.http.original_ip_detection.custom_header, envoy.http.original_ip_detection.xff
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.resource_monitors: envoy.resource_monitors.fixed_heap, envoy.resource_monitors.injected_resource
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.thrift_proxy.filters: envoy.filters.thrift.rate_limit, envoy.filters.thrift.router
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.transport_sockets.upstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.starttls, envoy.transport_sockets.tap, envoy.transport_sockets.tls, envoy.transport_sockets.upstream_proxy_protocol, raw_buffer, starttls, tls
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.stats_sinks: envoy.dog_statsd, envoy.graphite_statsd, envoy.metrics_service, envoy.stat_sinks.dog_statsd, envoy.stat_sinks.graphite_statsd, envoy.stat_sinks.hystrix, envoy.stat_sinks.metrics_service, envoy.stat_sinks.statsd, envoy.stat_sinks.wasm, envoy.statsd
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.clusters: envoy.cluster.eds, envoy.cluster.logical_dns, envoy.cluster.original_dst, envoy.cluster.static, envoy.cluster.strict_dns, envoy.clusters.aggregate, envoy.clusters.dynamic_forward_proxy, envoy.clusters.redis
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.retry_priorities: envoy.retry_priorities.previous_priorities
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.http.stateful_header_formatters: preserve_case
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.matching.http.input: request-headers, request-trailers, response-headers, response-trailers
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.quic.server.crypto_stream: envoy.quic.crypto_stream.server.quiche
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.bootstrap: envoy.bootstrap.wasm, envoy.extensions.network.socket_interface.default_socket_interface
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.filters.http: envoy.bandwidth_limit, envoy.buffer, envoy.cors, envoy.csrf, envoy.ext_authz, envoy.ext_proc, envoy.fault, envoy.filters.http.adaptive_concurrency, envoy.filters.http.admission_control, envoy.filters.http.alternate_protocols_cache, envoy.filters.http.aws_lambda, envoy.filters.http.aws_request_signing, envoy.filters.http.bandwidth_limit, envoy.filters.http.buffer, envoy.filters.http.cache, envoy.filters.http.cdn_loop, envoy.filters.http.composite, envoy.filters.http.compressor, envoy.filters.http.cors, envoy.filters.http.csrf, envoy.filters.http.decompressor, envoy.filters.http.dynamic_forward_proxy, envoy.filters.http.dynamo, envoy.filters.http.ext_authz, envoy.filters.http.ext_proc, envoy.filters.http.fault, envoy.filters.http.grpc_http1_bridge, envoy.filters.http.grpc_http1_reverse_bridge, envoy.filters.http.grpc_json_transcoder, envoy.filters.http.grpc_stats, envoy.filters.http.grpc_web, envoy.filters.http.header_to_metadata, envoy.filters.http.health_check, envoy.filters.http.ip_tagging, envoy.filters.http.jwt_authn, envoy.filters.http.local_ratelimit, envoy.filters.http.lua, envoy.filters.http.oauth2, envoy.filters.http.on_demand, envoy.filters.http.original_src, envoy.filters.http.ratelimit, envoy.filters.http.rbac, envoy.filters.http.router, envoy.filters.http.set_metadata, envoy.filters.http.squash, envoy.filters.http.tap, envoy.filters.http.wasm, envoy.grpc_http1_bridge, envoy.grpc_json_transcoder, envoy.grpc_web, envoy.health_check, envoy.http_dynamo_filter, envoy.ip_tagging, envoy.local_rate_limit, envoy.lua, envoy.rate_limit, envoy.router, envoy.squash, match-wrapper
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.compression.compressor: envoy.compression.brotli.compressor, envoy.compression.gzip.compressor
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.dubbo_proxy.filters: envoy.filters.dubbo.router
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.quic.proof_source: envoy.quic.proof_source.filter_chain
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.retry_host_predicates: envoy.retry_host_predicates.omit_canary_hosts, envoy.retry_host_predicates.omit_host_metadata, envoy.retry_host_predicates.previous_hosts
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.filters.listener: envoy.filters.listener.http_inspector, envoy.filters.listener.original_dst, envoy.filters.listener.original_src, envoy.filters.listener.proxy_protocol, envoy.filters.listener.tls_inspector, envoy.listener.http_inspector, envoy.listener.original_dst, envoy.listener.original_src, envoy.listener.proxy_protocol, envoy.listener.tls_inspector
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.internal_redirect_predicates: envoy.internal_redirect_predicates.allow_listed_routes, envoy.internal_redirect_predicates.previous_routes, envoy.internal_redirect_predicates.safe_cross_scheme
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.access_loggers: envoy.access_loggers.file, envoy.access_loggers.http_grpc, envoy.access_loggers.open_telemetry, envoy.access_loggers.stderr, envoy.access_loggers.stdout, envoy.access_loggers.tcp_grpc, envoy.access_loggers.wasm, envoy.file_access_log, envoy.http_grpc_access_log, envoy.open_telemetry_access_log, envoy.stderr_access_log, envoy.stdout_access_log, envoy.tcp_grpc_access_log, envoy.wasm_access_log
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.transport_sockets.downstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.starttls, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, starttls, tls
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.http.cache: envoy.extensions.http.cache.simple
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.dubbo_proxy.serializers: dubbo.hessian2
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.tls.cert_validator: envoy.tls.cert_validator.default, envoy.tls.cert_validator.spiffe
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.dubbo_proxy.route_matchers: default
[2025-04-19 11:37:55.506][1][info][main] [source/server/server.cc:342] envoy.formatter: envoy.formatter.req_without_query
[2025-04-19 11:37:55.510][1][info][main] [source/server/server.cc:358] HTTP header map info:
[2025-04-19 11:37:55.510][1][info][main] [source/server/server.cc:361] request header map: 632 bytes: :authority,:method,:path,:protocol,:scheme,accept,accept-encoding,access-control-request-method,authentication,authorization,cache-control,cdn-loop,connection,content-encoding,content-length,content-type,expect,grpc-accept-encoding,grpc-timeout,if-match,if-modified-since,if-none-match,if-range,if-unmodified-since,keep-alive,origin,pragma,proxy-connection,referer,te,transfer-encoding,upgrade,user-agent,via,x-client-trace-id,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-downstream-service-cluster,x-envoy-downstream-service-node,x-envoy-expected-rq-timeout-ms,x-envoy-external-address,x-envoy-force-trace,x-envoy-hedge-on-per-try-timeout,x-envoy-internal,x-envoy-ip-tags,x-envoy-max-retries,x-envoy-original-path,x-envoy-original-url,x-envoy-retriable-header-names,x-envoy-retriable-status-codes,x-envoy-retry-grpc-on,x-envoy-retry-on,x-envoy-upstream-alt-stat-name,x-envoy-upstream-rq-per-try-timeout-ms,x-envoy-upstream-rq-timeout-alt-response,x-envoy-upstream-rq-timeout-ms,x-forwarded-client-cert,x-forwarded-for,x-forwarded-proto,x-ot-span-context,x-request-id
[2025-04-19 11:37:55.510][1][info][main] [source/server/server.cc:361] request trailer map: 136 bytes:
[2025-04-19 11:37:55.510][1][info][main] [source/server/server.cc:361] response header map: 432 bytes: :status,access-control-allow-credentials,access-control-allow-headers,access-control-allow-methods,access-control-allow-origin,access-control-expose-headers,access-control-max-age,age,cache-control,connection,content-encoding,content-length,content-type,date,etag,expires,grpc-message,grpc-status,keep-alive,last-modified,location,proxy-connection,server,transfer-encoding,upgrade,vary,via,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-degraded,x-envoy-immediate-health-check-fail,x-envoy-ratelimited,x-envoy-upstream-canary,x-envoy-upstream-healthchecked-cluster,x-envoy-upstream-service-time,x-request-id
[2025-04-19 11:37:55.510][1][info][main] [source/server/server.cc:361] response trailer map: 160 bytes: grpc-message,grpc-status
[2025-04-19 11:37:55.527][1][info][admin] [source/server/admin/admin.cc:132] admin address: 0.0.0.0:15000
[2025-04-19 11:37:55.527][1][info][main] [source/server/server.cc:707] runtime: {}
[2025-04-19 11:37:55.527][1][info][config] [source/server/configuration_impl.cc:127] loading tracing configuration
[2025-04-19 11:37:55.527][1][info][config] [source/server/configuration_impl.cc:87] loading 0 static secret(s)
[2025-04-19 11:37:55.527][1][info][config] [source/server/configuration_impl.cc:93] loading 1 cluster(s)
[2025-04-19 11:37:55.528][1][info][config] [source/server/configuration_impl.cc:97] loading 1 listener(s)
[2025-04-19 11:37:55.529][1][info][config] [source/server/configuration_impl.cc:109] loading stats configuration
[2025-04-19 11:37:55.529][1][info][runtime] [source/common/runtime/runtime_impl.cc:449] RTDS has finished initialization
[2025-04-19 11:37:55.529][1][info][upstream] [source/common/upstream/cluster_manager_impl.cc:206] cm init: all clusters initialized
[2025-04-19 11:37:55.529][1][warning][main] [source/server/server.cc:682] there is no configured limit to the number of allowed active connections. Set a limit via the runtime key overload.global_downstream_max_connections
[2025-04-19 11:37:55.529][1][info][main] [source/server/server.cc:785] all clusters initialized. initializing init manager
[2025-04-19 11:37:55.529][1][info][config] [source/server/listener_manager_impl.cc:834] all dependencies initialized. starting workers
[2025-04-19 11:37:55.531][1][info][main] [source/server/server.cc:804] starting main dispatch loop
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15001/headers
✅ Output
{
  "headers": {
    "Accept": [
      "*/*"
    ],
    "Host": [
      "httpbin"
    ],
    "User-Agent": [
      "curl/8.13.0"
    ],
    "X-Envoy-Expected-Rq-Timeout-Ms": [
      "1000"  # 1000 ms = 1 s
    ],
    "X-Forwarded-Proto": [
      "http"
    ],
    "X-Request-Id": [
      "22e6eaf3-fb3e-42c0-81b1-41462c19c8c8"
    ]
  }
}
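The route timeout surfaces to the upstream as the X-Envoy-Expected-Rq-Timeout-Ms header. A self-contained sketch of checking it from a saved response (the `resp.json` here is a stand-in for the output above, not a file the walkthrough creates):

```shell
# Extract the timeout Envoy injected into the request headers.
cat > resp.json <<'EOF'
{"headers":{"X-Envoy-Expected-Rq-Timeout-Ms":["1000"]}}
EOF
timeout_ms=$(sed -n 's/.*"X-Envoy-Expected-Rq-Timeout-Ms":\["\([0-9]*\)"\].*/\1/p' resp.json)
echo "route timeout: ${timeout_ms} ms"  # matches the timeout: 1s in the route
```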
(1) Query the default logging levels
Every logger is shown at the info level
docker run -it --rm --link proxy curlimages/curl curl -X POST http://proxy:15000/logging
✅ Output
active loggers:
admin: info
aws: info
assert: info
backtrace: info
cache_filter: info
client: info
config: info
connection: info
conn_handler: info
decompression: info
dubbo: info
envoy_bug: info
ext_authz: info
rocketmq: info
file: info
filter: info
forward_proxy: info
grpc: info
hc: info
health_checker: info
http: info
http2: info
hystrix: info
init: info
io: info
jwt: info
kafka: info
lua: info
main: info
matcher: info
misc: info
mongo: info
quic: info
quic_stream: info
pool: info
rbac: info
redis: info
router: info
runtime: info
stats: info
secret: info
tap: info
testing: info
thrift: info
tracing: info
upstream: info
udp: info
wasm: info
(2) Change only the http logger to the debug level
docker run -it --rm --link proxy curlimages/curl curl -X POST http://proxy:15000/logging\?http\=debug
✅ Output
active loggers:
admin: info
aws: info
assert: info
backtrace: info
cache_filter: info
client: info
config: info
connection: info
conn_handler: info
decompression: info
dubbo: info
envoy_bug: info
ext_authz: info
rocketmq: info
file: info
filter: info
forward_proxy: info
grpc: info
hc: info
health_checker: info
http: debug
http2: info
hystrix: info
init: info
io: info
jwt: info
kafka: info
lua: info
main: info
matcher: info
misc: info
mongo: info
quic: info
quic_stream: info
pool: info
rbac: info
redis: info
router: info
runtime: info
stats: info
secret: info
tap: info
testing: info
thrift: info
tracing: info
upstream: info
udp: info
wasm: info
[2025-04-19 11:41:51.447][1][debug][http] [source/common/http/conn_manager_impl.cc:1456] [C3][S11192091099210294253] encoding headers via codec (end_stream=false):
':status', '200'
'content-type', 'text/plain; charset=UTF-8'
'cache-control', 'no-cache, max-age=0'
'x-content-type-options', 'nosniff'
'date', 'Sat, 19 Apr 2025 11:41:51 GMT'
'server', 'envoy'
(1) Test a request with a 0.5-second delay
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15001/delay/0.5
✅ Output
{
"args": {},
"headers": {
"Accept": [
"*/*"
],
"Host": [
"httpbin"
],
"User-Agent": [
"curl/8.13.0"
],
"X-Envoy-Expected-Rq-Timeout-Ms": [
"1000"
],
"X-Forwarded-Proto": [
"http"
],
"X-Request-Id": [
"38a3bb6d-a4a3-48ff-893e-de6fe1272312"
]
},
"method": "GET",
"origin": "172.17.0.3:51818",
"url": "http://httpbin/delay/0.5",
"data": "",
"files": {},
"form": {},
"json": null
}
[2025-04-19 11:43:24.074][45][debug][http] [source/common/http/filter_manager.cc:808] [C4][S1849126035353602963] request end stream
[2025-04-19 11:43:24.576][45][debug][http] [source/common/http/conn_manager_impl.cc:1456] [C4][S1849126035353602963] encoding headers via codec (end_stream=false):
':status', '200'
'access-control-allow-credentials', 'true'
'access-control-allow-origin', '*'
'content-type', 'application/json; charset=utf-8'
'server-timing', 'initial_delay;dur=500.00;desc="initial delay"'
'date', 'Sat, 19 Apr 2025 11:43:24 GMT'
'content-length', '483'
'x-envoy-upstream-service-time', '501'
'server', 'envoy'
(2) Test a request with a 1-second delay (times out)
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15001/delay/1
✅ Output
upstream request timeout
[2025-04-19 11:44:22.733][48][debug][http] [source/common/http/filter_manager.cc:808] [C6][S16463014453874968413] request end stream
[2025-04-19 11:44:23.733][48][debug][http] [source/common/http/filter_manager.cc:909] [C6][S16463014453874968413] Sending local reply with details upstream_response_timeout
[2025-04-19 11:44:23.733][48][debug][http] [source/common/http/conn_manager_impl.cc:1456] [C6][S16463014453874968413] encoding headers via codec (end_stream=false):
':status', '504'
'content-length', '24'
'content-type', 'text/plain'
'date', 'Sat, 19 Apr 2025 11:44:23 GMT'
'server', 'envoy'
(3) Test a request with a 2-second delay (times out)
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15001/delay/2
✅ Output
upstream request timeout
[2025-04-19 11:45:27.485][37][debug][http] [source/common/http/filter_manager.cc:808] [C8][S7626896203224584865] request end stream
[2025-04-19 11:45:28.485][37][debug][http] [source/common/http/filter_manager.cc:909] [C8][S7626896203224584865] Sending local reply with details upstream_response_timeout
[2025-04-19 11:45:28.485][37][debug][http] [source/common/http/conn_manager_impl.cc:1456] [C8][S7626896203224584865] encoding headers via codec (end_stream=false):
':status', '504'
'content-length', '24'
'content-type', 'text/plain'
'date', 'Sat, 19 Apr 2025 11:45:28 GMT'
'server', 'envoy'
Envoy's Admin API lets you inspect proxy behavior, metrics, and configuration in real time.
The response contains statistics and metrics for the listeners, clusters, and the server.
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15000/stats
✅ Output
cluster.httpbin_service.assignment_stale: 0
cluster.httpbin_service.assignment_timeout_received: 0
cluster.httpbin_service.bind_errors: 0
cluster.httpbin_service.circuit_breakers.default.cx_open: 0
cluster.httpbin_service.circuit_breakers.default.cx_pool_open: 0
cluster.httpbin_service.circuit_breakers.default.rq_open: 0
cluster.httpbin_service.circuit_breakers.default.rq_pending_open: 0
cluster.httpbin_service.circuit_breakers.default.rq_retry_open: 0
cluster.httpbin_service.circuit_breakers.high.cx_open: 0
cluster.httpbin_service.circuit_breakers.high.cx_pool_open: 0
cluster.httpbin_service.circuit_breakers.high.rq_open: 0
cluster.httpbin_service.circuit_breakers.high.rq_pending_open: 0
cluster.httpbin_service.circuit_breakers.high.rq_retry_open: 0
cluster.httpbin_service.default.total_match_count: 1
cluster.httpbin_service.external.upstream_rq_200: 2
cluster.httpbin_service.external.upstream_rq_2xx: 2
cluster.httpbin_service.external.upstream_rq_504: 2
cluster.httpbin_service.external.upstream_rq_5xx: 2
cluster.httpbin_service.external.upstream_rq_completed: 4
cluster.httpbin_service.http1.dropped_headers_with_underscores: 0
cluster.httpbin_service.http1.metadata_not_supported_error: 0
cluster.httpbin_service.http1.requests_rejected_with_underscores_in_headers: 0
cluster.httpbin_service.http1.response_flood: 0
cluster.httpbin_service.lb_healthy_panic: 0
...
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15000/stats | grep retry
✅ Output
cluster.httpbin_service.circuit_breakers.default.rq_retry_open: 0
cluster.httpbin_service.circuit_breakers.high.rq_retry_open: 0
cluster.httpbin_service.retry_or_shadow_abandoned: 0
cluster.httpbin_service.upstream_rq_retry: 0
cluster.httpbin_service.upstream_rq_retry_backoff_exponential: 0
cluster.httpbin_service.upstream_rq_retry_backoff_ratelimited: 0
cluster.httpbin_service.upstream_rq_retry_limit_exceeded: 0
cluster.httpbin_service.upstream_rq_retry_overflow: 0
cluster.httpbin_service.upstream_rq_retry_success: 0
vhost.httpbin_local_service.vcluster.other.upstream_rq_retry: 0
vhost.httpbin_local_service.vcluster.other.upstream_rq_retry_limit_exceeded: 0
vhost.httpbin_local_service.vcluster.other.upstream_rq_retry_overflow: 0
vhost.httpbin_local_service.vcluster.other.upstream_rq_retry_success: 0
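All retry counters are zero at this point. When scanning a large /stats dump for changes, it can help to filter out the zero-valued counters; a small awk sketch (run here against inlined sample lines so it is self-contained):

```shell
# Keep only stats whose value is nonzero (sample /stats lines inlined for illustration).
awk -F': ' '$2 != 0' <<'EOF'
cluster.httpbin_service.upstream_rq_retry: 0
cluster.httpbin_service.upstream_rq_retry_success: 0
cluster.httpbin_service.default.total_match_count: 1
EOF
```

Against a live proxy, pipe the output of the /stats endpoint into the same awk filter instead of the heredoc.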
(1) List the certificates
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15000/certs
✅ Output
{
"certificates": []
}
(2) Query the cluster configuration
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15000/clusters
✅ Output
httpbin_service::observability_name::httpbin_service
httpbin_service::default_priority::max_connections::1024
httpbin_service::default_priority::max_pending_requests::1024
httpbin_service::default_priority::max_requests::1024
httpbin_service::default_priority::max_retries::3
httpbin_service::high_priority::max_connections::1024
httpbin_service::high_priority::max_pending_requests::1024
httpbin_service::high_priority::max_requests::1024
httpbin_service::high_priority::max_retries::3
httpbin_service::added_via_api::false
httpbin_service::172.17.0.2:8000::cx_active::0
httpbin_service::172.17.0.2:8000::cx_connect_fail::0
httpbin_service::172.17.0.2:8000::cx_total::4
httpbin_service::172.17.0.2:8000::rq_active::0
httpbin_service::172.17.0.2:8000::rq_error::2
httpbin_service::172.17.0.2:8000::rq_success::2
httpbin_service::172.17.0.2:8000::rq_timeout::2
httpbin_service::172.17.0.2:8000::rq_total::4
httpbin_service::172.17.0.2:8000::hostname::httpbin
httpbin_service::172.17.0.2:8000::health_flags::healthy
httpbin_service::172.17.0.2:8000::weight::1
httpbin_service::172.17.0.2:8000::region::
httpbin_service::172.17.0.2:8000::zone::
httpbin_service::172.17.0.2:8000::sub_zone::
httpbin_service::172.17.0.2:8000::canary::false
httpbin_service::172.17.0.2:8000::priority::0
httpbin_service::172.17.0.2:8000::success_rate::-1.0
httpbin_service::172.17.0.2:8000::local_origin_success_rate::-1.0
(3) Query the listener information
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15000/listeners
✅ Output
httpbin-demo::0.0.0.0:15001
(4) View and change logging levels
View the current levels
docker run -it --rm --link proxy curlimages/curl curl -X POST http://proxy:15000/logging
✅ Output
active loggers:
admin: info
aws: info
assert: info
backtrace: info
cache_filter: info
client: info
config: info
connection: info
conn_handler: info
decompression: info
dubbo: info
envoy_bug: info
ext_authz: info
rocketmq: info
file: info
filter: info
forward_proxy: info
grpc: info
hc: info
health_checker: info
http: debug
http2: info
hystrix: info
init: info
io: info
jwt: info
kafka: info
lua: info
main: info
matcher: info
misc: info
mongo: info
quic: info
quic_stream: info
pool: info
rbac: info
redis: info
router: info
runtime: info
stats: info
secret: info
tap: info
testing: info
thrift: info
tracing: info
upstream: info
udp: info
wasm: info
Change the http logger to debug
docker run -it --rm --link proxy curlimages/curl curl -X POST http://proxy:15000/logging\?http\=debug
✅ Output
active loggers:
admin: info
aws: info
assert: info
backtrace: info
cache_filter: info
client: info
config: info
connection: info
conn_handler: info
decompression: info
dubbo: info
envoy_bug: info
ext_authz: info
rocketmq: info
file: info
filter: info
forward_proxy: info
grpc: info
hc: info
health_checker: info
http: debug
http2: info
hystrix: info
init: info
io: info
jwt: info
kafka: info
lua: info
main: info
matcher: info
misc: info
mongo: info
quic: info
quic_stream: info
pool: info
rbac: info
redis: info
router: info
runtime: info
stats: info
secret: info
tap: info
testing: info
thrift: info
tracing: info
upstream: info
udp: info
wasm: info
(5) Check the statistics in Prometheus format
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15000/stats/prometheus
✅ Output
# TYPE envoy_cluster_assignment_stale counter
envoy_cluster_assignment_stale{envoy_cluster_name="httpbin_service"} 0
# TYPE envoy_cluster_assignment_timeout_received counter
envoy_cluster_assignment_timeout_received{envoy_cluster_name="httpbin_service"} 0
# TYPE envoy_cluster_bind_errors counter
envoy_cluster_bind_errors{envoy_cluster_name="httpbin_service"} 0
# TYPE envoy_cluster_default_total_match_count counter
envoy_cluster_default_total_match_count{envoy_cluster_name="httpbin_service"} 1
# TYPE envoy_cluster_external_upstream_rq counter
envoy_cluster_external_upstream_rq{envoy_response_code="200",envoy_cluster_name="httpbin_service"} 2
envoy_cluster_external_upstream_rq{envoy_response_code="504",envoy_cluster_name="httpbin_service"} 2
# TYPE envoy_cluster_external_upstream_rq_completed counter
envoy_cluster_external_upstream_rq_completed{envoy_cluster_name="httpbin_service"} 4
# TYPE envoy_cluster_external_upstream_rq_xx counter
envoy_cluster_external_upstream_rq_xx{envoy_response_code_class="2",envoy_cluster_name="httpbin_service"} 2
envoy_cluster_external_upstream_rq_xx{envoy_response_code_class="5",envoy_cluster_name="httpbin_service"} 2
# TYPE envoy_cluster_http1_dropped_headers_with_underscores counter
envoy_cluster_http1_dropped_headers_with_underscores{envoy_cluster_name="httpbin_service"} 0
...
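Each Prometheus-format line is one labeled sample, so standard text tools can aggregate them. A sketch that sums the external upstream request counter across response codes, using two sample lines from the output above:

```shell
# Sum envoy_cluster_external_upstream_rq across all response-code labels
# (sample lines inlined; pipe the /stats/prometheus output in instead for a live proxy).
grep -F 'envoy_cluster_external_upstream_rq{' <<'EOF' | awk '{sum += $NF} END {print sum}'
envoy_cluster_external_upstream_rq{envoy_response_code="200",envoy_cluster_name="httpbin_service"} 2
envoy_cluster_external_upstream_rq{envoy_response_code="504",envoy_cluster_name="httpbin_service"} 2
EOF
```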
Verify that Envoy automatically retries when an httpbin request returns a 5xx error
cat ch3/simple_retry.yaml
✅ Output
admin:
  address:
    socket_address: { address: 0.0.0.0, port_value: 15000 }
static_resources:
  listeners:
  - name: httpbin-demo
    address:
      socket_address: { address: 0.0.0.0, port_value: 15001 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          http_filters:
          - name: envoy.filters.http.router
          route_config:
            name: httpbin_local_route
            virtual_hosts:
            - name: httpbin_local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  auto_host_rewrite: true
                  cluster: httpbin_service
                  retry_policy:
                    retry_on: 5xx   # retry on 5xx responses
                    num_retries: 3  # number of retry attempts
  clusters:
  - name: httpbin_service
    connect_timeout: 5s
    type: LOGICAL_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: httpbin
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: httpbin
                port_value: 8000
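The retry_policy above retries only on 5xx responses. For reference, a hedged sketch of commonly added fields — these are standard Envoy v3 RetryPolicy fields, not part of the book's example:

```yaml
retry_policy:
  retry_on: 5xx,reset,connect-failure  # also retry on connection-level failures
  num_retries: 3
  per_try_timeout: 0.5s                # give each retry attempt its own deadline
```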
# Remove the existing proxy container
docker rm -f proxy
# Run Envoy with the new configuration
docker run -p 15000:15000 --name proxy --link httpbin envoyproxy/envoy:v1.19.0 --config-yaml "$(cat ch3/simple_retry.yaml)"
✅ Output
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:338] initializing epoch 0 (base id=0, hot restart version=11.104)
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:340] statically linked extensions:
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.thrift_proxy.filters: envoy.filters.thrift.rate_limit, envoy.filters.thrift.router
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.thrift_proxy.transports: auto, framed, header, unframed
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.request_id: envoy.request_id.uuid
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.tracers: envoy.dynamic.ot, envoy.lightstep, envoy.tracers.datadog, envoy.tracers.dynamic_ot, envoy.tracers.lightstep, envoy.tracers.opencensus, envoy.tracers.skywalking, envoy.tracers.xray, envoy.tracers.zipkin, envoy.zipkin
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.dubbo_proxy.route_matchers: default
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.thrift_proxy.protocols: auto, binary, binary/non-strict, compact, twitter
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.compression.decompressor: envoy.compression.brotli.decompressor, envoy.compression.gzip.decompressor
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.filters.http: envoy.bandwidth_limit, envoy.buffer, envoy.cors, envoy.csrf, envoy.ext_authz, envoy.ext_proc, envoy.fault, envoy.filters.http.adaptive_concurrency, envoy.filters.http.admission_control, envoy.filters.http.alternate_protocols_cache, envoy.filters.http.aws_lambda, envoy.filters.http.aws_request_signing, envoy.filters.http.bandwidth_limit, envoy.filters.http.buffer, envoy.filters.http.cache, envoy.filters.http.cdn_loop, envoy.filters.http.composite, envoy.filters.http.compressor, envoy.filters.http.cors, envoy.filters.http.csrf, envoy.filters.http.decompressor, envoy.filters.http.dynamic_forward_proxy, envoy.filters.http.dynamo, envoy.filters.http.ext_authz, envoy.filters.http.ext_proc, envoy.filters.http.fault, envoy.filters.http.grpc_http1_bridge, envoy.filters.http.grpc_http1_reverse_bridge, envoy.filters.http.grpc_json_transcoder, envoy.filters.http.grpc_stats, envoy.filters.http.grpc_web, envoy.filters.http.header_to_metadata, envoy.filters.http.health_check, envoy.filters.http.ip_tagging, envoy.filters.http.jwt_authn, envoy.filters.http.local_ratelimit, envoy.filters.http.lua, envoy.filters.http.oauth2, envoy.filters.http.on_demand, envoy.filters.http.original_src, envoy.filters.http.ratelimit, envoy.filters.http.rbac, envoy.filters.http.router, envoy.filters.http.set_metadata, envoy.filters.http.squash, envoy.filters.http.tap, envoy.filters.http.wasm, envoy.grpc_http1_bridge, envoy.grpc_json_transcoder, envoy.grpc_web, envoy.health_check, envoy.http_dynamo_filter, envoy.ip_tagging, envoy.local_rate_limit, envoy.lua, envoy.rate_limit, envoy.router, envoy.squash, match-wrapper
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.filters.listener: envoy.filters.listener.http_inspector, envoy.filters.listener.original_dst, envoy.filters.listener.original_src, envoy.filters.listener.proxy_protocol, envoy.filters.listener.tls_inspector, envoy.listener.http_inspector, envoy.listener.original_dst, envoy.listener.original_src, envoy.listener.proxy_protocol, envoy.listener.tls_inspector
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.stats_sinks: envoy.dog_statsd, envoy.graphite_statsd, envoy.metrics_service, envoy.stat_sinks.dog_statsd, envoy.stat_sinks.graphite_statsd, envoy.stat_sinks.hystrix, envoy.stat_sinks.metrics_service, envoy.stat_sinks.statsd, envoy.stat_sinks.wasm, envoy.statsd
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.http.stateful_header_formatters: preserve_case
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.http.cache: envoy.extensions.http.cache.simple
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.filters.network: envoy.client_ssl_auth, envoy.echo, envoy.ext_authz, envoy.filters.network.client_ssl_auth, envoy.filters.network.connection_limit, envoy.filters.network.direct_response, envoy.filters.network.dubbo_proxy, envoy.filters.network.echo, envoy.filters.network.ext_authz, envoy.filters.network.http_connection_manager, envoy.filters.network.kafka_broker, envoy.filters.network.local_ratelimit, envoy.filters.network.mongo_proxy, envoy.filters.network.mysql_proxy, envoy.filters.network.postgres_proxy, envoy.filters.network.ratelimit, envoy.filters.network.rbac, envoy.filters.network.redis_proxy, envoy.filters.network.rocketmq_proxy, envoy.filters.network.sni_cluster, envoy.filters.network.sni_dynamic_forward_proxy, envoy.filters.network.tcp_proxy, envoy.filters.network.thrift_proxy, envoy.filters.network.wasm, envoy.filters.network.zookeeper_proxy, envoy.http_connection_manager, envoy.mongo_proxy, envoy.ratelimit, envoy.redis_proxy, envoy.tcp_proxy
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.dubbo_proxy.protocols: dubbo
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.resource_monitors: envoy.resource_monitors.fixed_heap, envoy.resource_monitors.injected_resource
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.grpc_credentials: envoy.grpc_credentials.aws_iam, envoy.grpc_credentials.default, envoy.grpc_credentials.file_based_metadata
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.clusters: envoy.cluster.eds, envoy.cluster.logical_dns, envoy.cluster.original_dst, envoy.cluster.static, envoy.cluster.strict_dns, envoy.clusters.aggregate, envoy.clusters.dynamic_forward_proxy, envoy.clusters.redis
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.tls.cert_validator: envoy.tls.cert_validator.default, envoy.tls.cert_validator.spiffe
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.matching.action: composite-action, skip
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.transport_sockets.upstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.starttls, envoy.transport_sockets.tap, envoy.transport_sockets.tls, envoy.transport_sockets.upstream_proxy_protocol, raw_buffer, starttls, tls
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.wasm.runtime: envoy.wasm.runtime.null, envoy.wasm.runtime.v8
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.upstreams: envoy.filters.connection_pools.tcp.generic
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.rate_limit_descriptors: envoy.rate_limit_descriptors.expr
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.compression.compressor: envoy.compression.brotli.compressor, envoy.compression.gzip.compressor
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.bootstrap: envoy.bootstrap.wasm, envoy.extensions.network.socket_interface.default_socket_interface
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.transport_sockets.downstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.starttls, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, starttls, tls
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.filters.udp_listener: envoy.filters.udp.dns_filter, envoy.filters.udp_listener.udp_proxy
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.http.original_ip_detection: envoy.http.original_ip_detection.custom_header, envoy.http.original_ip_detection.xff
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.health_checkers: envoy.health_checkers.redis
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.formatter: envoy.formatter.req_without_query
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.resolvers: envoy.ip
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.matching.http.input: request-headers, request-trailers, response-headers, response-trailers
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.dubbo_proxy.filters: envoy.filters.dubbo.router
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.quic.proof_source: envoy.quic.proof_source.filter_chain
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.quic.server.crypto_stream: envoy.quic.crypto_stream.server.quiche
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.retry_priorities: envoy.retry_priorities.previous_priorities
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.upstream_options: envoy.extensions.upstreams.http.v3.HttpProtocolOptions, envoy.upstreams.http.http_protocol_options
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.dubbo_proxy.serializers: dubbo.hessian2
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.access_loggers: envoy.access_loggers.file, envoy.access_loggers.http_grpc, envoy.access_loggers.open_telemetry, envoy.access_loggers.stderr, envoy.access_loggers.stdout, envoy.access_loggers.tcp_grpc, envoy.access_loggers.wasm, envoy.file_access_log, envoy.http_grpc_access_log, envoy.open_telemetry_access_log, envoy.stderr_access_log, envoy.stdout_access_log, envoy.tcp_grpc_access_log, envoy.wasm_access_log
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.internal_redirect_predicates: envoy.internal_redirect_predicates.allow_listed_routes, envoy.internal_redirect_predicates.previous_routes, envoy.internal_redirect_predicates.safe_cross_scheme
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.matching.common_inputs: envoy.matching.common_inputs.environment_variable
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.retry_host_predicates: envoy.retry_host_predicates.omit_canary_hosts, envoy.retry_host_predicates.omit_host_metadata, envoy.retry_host_predicates.previous_hosts
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.matching.input_matchers: envoy.matching.matchers.consistent_hashing, envoy.matching.matchers.ip
[2025-04-19 11:59:09.898][1][info][main] [source/server/server.cc:342] envoy.guarddog_actions: envoy.watchdog.abort_action, envoy.watchdog.profile_action
[2025-04-19 11:59:09.901][1][info][main] [source/server/server.cc:358] HTTP header map info:
[2025-04-19 11:59:09.902][1][info][main] [source/server/server.cc:361] request header map: 632 bytes: :authority,:method,:path,:protocol,:scheme,accept,accept-encoding,access-control-request-method,authentication,authorization,cache-control,cdn-loop,connection,content-encoding,content-length,content-type,expect,grpc-accept-encoding,grpc-timeout,if-match,if-modified-since,if-none-match,if-range,if-unmodified-since,keep-alive,origin,pragma,proxy-connection,referer,te,transfer-encoding,upgrade,user-agent,via,x-client-trace-id,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-downstream-service-cluster,x-envoy-downstream-service-node,x-envoy-expected-rq-timeout-ms,x-envoy-external-address,x-envoy-force-trace,x-envoy-hedge-on-per-try-timeout,x-envoy-internal,x-envoy-ip-tags,x-envoy-max-retries,x-envoy-original-path,x-envoy-original-url,x-envoy-retriable-header-names,x-envoy-retriable-status-codes,x-envoy-retry-grpc-on,x-envoy-retry-on,x-envoy-upstream-alt-stat-name,x-envoy-upstream-rq-per-try-timeout-ms,x-envoy-upstream-rq-timeout-alt-response,x-envoy-upstream-rq-timeout-ms,x-forwarded-client-cert,x-forwarded-for,x-forwarded-proto,x-ot-span-context,x-request-id
[2025-04-19 11:59:09.902][1][info][main] [source/server/server.cc:361] request trailer map: 136 bytes:
[2025-04-19 11:59:09.902][1][info][main] [source/server/server.cc:361] response header map: 432 bytes: :status,access-control-allow-credentials,access-control-allow-headers,access-control-allow-methods,access-control-allow-origin,access-control-expose-headers,access-control-max-age,age,cache-control,connection,content-encoding,content-length,content-type,date,etag,expires,grpc-message,grpc-status,keep-alive,last-modified,location,proxy-connection,server,transfer-encoding,upgrade,vary,via,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-degraded,x-envoy-immediate-health-check-fail,x-envoy-ratelimited,x-envoy-upstream-canary,x-envoy-upstream-healthchecked-cluster,x-envoy-upstream-service-time,x-request-id
[2025-04-19 11:59:09.902][1][info][main] [source/server/server.cc:361] response trailer map: 160 bytes: grpc-message,grpc-status
[2025-04-19 11:59:09.917][1][info][admin] [source/server/admin/admin.cc:132] admin address: 0.0.0.0:15000
[2025-04-19 11:59:09.917][1][info][main] [source/server/server.cc:707] runtime: {}
[2025-04-19 11:59:09.917][1][info][config] [source/server/configuration_impl.cc:127] loading tracing configuration
[2025-04-19 11:59:09.917][1][info][config] [source/server/configuration_impl.cc:87] loading 0 static secret(s)
[2025-04-19 11:59:09.917][1][info][config] [source/server/configuration_impl.cc:93] loading 1 cluster(s)
[2025-04-19 11:59:09.918][1][info][config] [source/server/configuration_impl.cc:97] loading 1 listener(s)
[2025-04-19 11:59:09.919][1][info][config] [source/server/configuration_impl.cc:109] loading stats configuration
[2025-04-19 11:59:09.919][1][info][runtime] [source/common/runtime/runtime_impl.cc:449] RTDS has finished initialization
[2025-04-19 11:59:09.919][1][info][upstream] [source/common/upstream/cluster_manager_impl.cc:206] cm init: all clusters initialized
[2025-04-19 11:59:09.919][1][warning][main] [source/server/server.cc:682] there is no configured limit to the number of allowed active connections. Set a limit via the runtime key overload.global_downstream_max_connections
[2025-04-19 11:59:09.919][1][info][main] [source/server/server.cc:785] all clusters initialized. initializing init manager
[2025-04-19 11:59:09.919][1][info][config] [source/server/listener_manager_impl.cc:834] all dependencies initialized. starting workers
[2025-04-19 11:59:09.921][1][info][main] [source/server/server.cc:804] starting main dispatch loop
(1) Call the /status/500 endpoint
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15001/status/500
✅ Output
[2025-04-19 12:01:14.982][47][debug][http] [source/common/http/filter_manager.cc:808] [C1][S10884054125092168206] request end stream
[2025-04-19 12:01:15.079][47][debug][http] [source/common/http/conn_manager_impl.cc:1456] [C1][S10884054125092168206] encoding headers via codec (end_stream=true):
':status', '500'
'access-control-allow-credentials', 'true'
'access-control-allow-origin', '*'
'content-type', 'text/plain; charset=utf-8'
'date', 'Sat, 19 Apr 2025 12:01:15 GMT'
'content-length', '0'
'x-envoy-upstream-service-time', '96'
'server', 'envoy'
(2) Query the retry statistics from the Admin API
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15000/stats | grep retry
✅ Output
cluster.httpbin_service.circuit_breakers.default.rq_retry_open: 0
cluster.httpbin_service.circuit_breakers.high.rq_retry_open: 0
cluster.httpbin_service.retry.upstream_rq_500: 3
cluster.httpbin_service.retry.upstream_rq_5xx: 3
cluster.httpbin_service.retry.upstream_rq_completed: 3
cluster.httpbin_service.retry_or_shadow_abandoned: 0
cluster.httpbin_service.upstream_rq_retry: 3
cluster.httpbin_service.upstream_rq_retry_backoff_exponential: 3
cluster.httpbin_service.upstream_rq_retry_backoff_ratelimited: 0
cluster.httpbin_service.upstream_rq_retry_limit_exceeded: 1
cluster.httpbin_service.upstream_rq_retry_overflow: 0
cluster.httpbin_service.upstream_rq_retry_success: 0
vhost.httpbin_local_service.vcluster.other.upstream_rq_retry: 0
vhost.httpbin_local_service.vcluster.other.upstream_rq_retry_limit_exceeded: 0
vhost.httpbin_local_service.vcluster.other.upstream_rq_retry_overflow: 0
vhost.httpbin_local_service.vcluster.other.upstream_rq_retry_success: 0
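These numbers line up with simple_retry.yaml: the single /status/500 request was retried num_retries: 3 times, every attempt returned 500 (retry.upstream_rq_500: 3), the retry budget was then exhausted (upstream_rq_retry_limit_exceeded: 1), and no retry succeeded. A tiny shell sanity check over sample lines from the output above:

```shell
# Extract the number of retries attempted from a /stats dump (sample lines inlined).
attempts=$(awk -F': ' '/upstream_rq_retry:/ {print $2}' <<'EOF'
cluster.httpbin_service.upstream_rq_retry: 3
cluster.httpbin_service.upstream_rq_retry_limit_exceeded: 1
cluster.httpbin_service.upstream_rq_retry_success: 0
EOF
)
echo "retries attempted: $attempts"  # -> retries attempted: 3, matching num_retries
```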
docker rm -f proxy && docker rm -f httpbin
# Result
proxy
httpbin
pwd
✅ Output
/home/devshin/workspace/istio/istio-in-action/book-source-code-master
kind create cluster --name myk8s --image kindest/node:v1.23.17 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000 # Sample Application (istio-ingressgateway) HTTP
    hostPort: 30000
  - containerPort: 30001 # Prometheus
    hostPort: 30001
  - containerPort: 30002 # Grafana
    hostPort: 30002
  - containerPort: 30003 # Kiali
    hostPort: 30003
  - containerPort: 30004 # Tracing
    hostPort: 30004
  - containerPort: 30005 # Sample Application (istio-ingressgateway) HTTPS
    hostPort: 30005
  - containerPort: 30006 # TCP Route
    hostPort: 30006
  - containerPort: 30007 # New Gateway
    hostPort: 30007
  extraMounts: # this section can be omitted
  - hostPath: /home/devshin/workspace/istio/istio-in-action/book-source-code-master # set this to your own pwd path
    containerPath: /istiobook
networking:
  podSubnet: 10.10.0.0/16
  serviceSubnet: 10.200.1.0/24
EOF
# Result
Creating cluster "myk8s" ...
✓ Ensuring node image (kindest/node:v1.23.17) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-myk8s"
You can now use your cluster with:
kubectl cluster-info --context kind-myk8s
Have a nice day! 👋
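The eight extraPortMappings entries in the kind config differ only in the port number, so the block can also be generated with a loop if you prefer (a sketch; adjust the range as needed):

```shell
# Print kind extraPortMappings entries for ports 30000-30007.
for port in $(seq 30000 30007); do
  printf -- '- containerPort: %s\n  hostPort: %s\n' "$port" "$port"
done
```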
docker ps
✅ Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3846d71ad92d kindest/node:v1.23.17 "/usr/local/bin/entr…" 55 seconds ago Up 53 seconds 0.0.0.0:30000-30007->30000-30007/tcp, 127.0.0.1:45753->6443/tcp myk8s-control-plane
docker exec -it myk8s-control-plane sh -c 'apt update && apt install tree psmisc lsof wget bridge-utils net-tools dnsutils tcpdump ngrep iputils-ping git vim -y'
✅ Output
...
Setting up libgdbm-compat4:amd64 (1.23-3) ...
Setting up xauth (1:1.1.2-1) ...
Setting up bind9-host (1:9.18.33-1~deb12u2) ...
Setting up libperl5.36:amd64 (5.36.0-7+deb12u2) ...
Setting up tcpdump (4.99.3-1) ...
Setting up ngrep (1.47+ds1-5+b1) ...
Setting up perl (5.36.0-7+deb12u2) ...
Setting up bind9-dnsutils (1:9.18.33-1~deb12u2) ...
Setting up dnsutils (1:9.18.33-1~deb12u2) ...
Setting up liberror-perl (0.17029-2) ...
Setting up git (1:2.39.5-0+deb12u2) ...
Processing triggers for libc-bin (2.36-9+deb12u4) ...
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
# Result
"metrics-server" already exists with the same configuration, skipping
helm install metrics-server metrics-server/metrics-server --set 'args[0]=--kubelet-insecure-tls' -n kube-system
✅ Output
NAME: metrics-server
LAST DEPLOYED: Sat Apr 19 21:18:21 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
***********************************************************************
* Metrics Server *
***********************************************************************
Chart version: 3.12.2
App version: 0.7.2
Image tag: registry.k8s.io/metrics-server/metrics-server:v0.7.2
***********************************************************************
Check the Metrics Server resources
kubectl get all -n kube-system -l app.kubernetes.io/instance=metrics-server
✅ Output
NAME READY STATUS RESTARTS AGE
pod/metrics-server-65bb6f47b6-t5bh5 0/1 Running 0 30s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/metrics-server ClusterIP 10.200.1.7 <none> 443/TCP 30s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/metrics-server 0/1 1 0 30s
NAME DESIRED CURRENT READY AGE
replicaset.apps/metrics-server-65bb6f47b6 1 1 0 30s
docker exec -it myk8s-control-plane bash
root@myk8s-control-plane:/#
root@myk8s-control-plane:/# tree /istiobook/ -L 1
✅ Output
/istiobook/
|-- README.md
|-- appendices
|-- bin
|-- ch10
|-- ch11
|-- ch12
|-- ch13
|-- ch14
|-- ch2
|-- ch3
|-- ch4
|-- ch5
|-- ch6
|-- ch7
|-- ch8
|-- ch9
`-- services
17 directories, 1 file
root@myk8s-control-plane:/# export ISTIOV=1.17.8
echo 'export ISTIOV=1.17.8' >> /root/.bashrc
curl -s -L https://istio.io/downloadIstio | ISTIO_VERSION=$ISTIOV sh -
cp istio-$ISTIOV/bin/istioctl /usr/local/bin/istioctl
istioctl version --remote=false
✅ Output
Downloading istio-1.17.8 from https://github.com/istio/istio/releases/download/1.17.8/istio-1.17.8-linux-amd64.tar.gz ...
Istio 1.17.8 download complete!
The Istio release archive has been downloaded to the istio-1.17.8 directory.
To configure the istioctl client tool for your workstation,
add the /istio-1.17.8/bin directory to your environment path variable with:
export PATH="$PATH:/istio-1.17.8/bin"
Begin the Istio pre-installation check by running:
istioctl x precheck
Try Istio in ambient mode
https://istio.io/latest/docs/ambient/getting-started/
Try Istio in sidecar mode
https://istio.io/latest/docs/setup/getting-started/
Install guides for ambient mode
https://istio.io/latest/docs/ambient/install/
Install guides for sidecar mode
https://istio.io/latest/docs/setup/install/
Need more information? Visit https://istio.io/latest/docs/
1.17.8
root@myk8s-control-plane:/# istioctl install --set profile=default -y
✅ Output
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Installation complete Making this installation the default for injection and validation.
Thank you for installing Istio 1.17. Please take a few minutes to tell us about your install/upgrade experience! https://forms.gle/hMHGiwZHPU7UQRWe9
root@myk8s-control-plane:/# kubectl apply -f istio-$ISTIOV/samples/addons
✅ Output
serviceaccount/grafana created
configmap/grafana created
service/grafana created
deployment.apps/grafana created
configmap/istio-grafana-dashboards created
configmap/istio-services-grafana-dashboards created
deployment.apps/jaeger created
service/tracing created
service/zipkin created
service/jaeger-collector created
serviceaccount/kiali created
configmap/kiali created
clusterrole.rbac.authorization.k8s.io/kiali-viewer created
clusterrole.rbac.authorization.k8s.io/kiali created
clusterrolebinding.rbac.authorization.k8s.io/kiali created
role.rbac.authorization.k8s.io/kiali-controlplane created
rolebinding.rbac.authorization.k8s.io/kiali-controlplane created
service/kiali created
deployment.apps/kiali created
serviceaccount/prometheus created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
deployment.apps/prometheus created
root@myk8s-control-plane:/# kubectl get pod -n istio-system
✅ Output
NAME READY STATUS RESTARTS AGE
grafana-b854c6c8-bwdt9 1/1 Running 0 48s
istio-ingressgateway-996bc6bb6-ll8hl 1/1 Running 0 108s
istiod-7df6ffc78d-jh6x2 1/1 Running 0 2m
jaeger-5556cd8fcf-k9pb9 1/1 Running 0 48s
kiali-648847c8c4-jmwt8 0/1 Running 0 47s
prometheus-7b8b9dd44c-9zgqk 2/2 Running 0 47s
root@myk8s-control-plane:/# exit
exit
kubectl create ns istioinaction
kubectl label namespace istioinaction istio-injection=enabled
kubectl get ns --show-labels
✅ Output
namespace/istioinaction created
namespace/istioinaction labeled
NAME STATUS AGE LABELS
default Active 14m kubernetes.io/metadata.name=default
istio-system Active 5m12s kubernetes.io/metadata.name=istio-system
istioinaction Active 0s istio-injection=enabled,kubernetes.io/metadata.name=istioinaction
kube-node-lease Active 14m kubernetes.io/metadata.name=kube-node-lease
kube-public Active 14m kubernetes.io/metadata.name=kube-public
kube-system Active 14m kubernetes.io/metadata.name=kube-system
local-path-storage Active 14m kubernetes.io/metadata.name=local-path-storage
kubectl patch svc -n istio-system istio-ingressgateway -p '{"spec": {"type": "NodePort", "ports": [{"port": 80, "targetPort": 8080, "nodePort": 30000}]}}'
kubectl patch svc -n istio-system istio-ingressgateway -p '{"spec": {"type": "NodePort", "ports": [{"port": 443, "targetPort": 8443, "nodePort": 30005}]}}'
kubectl patch svc -n istio-system istio-ingressgateway -p '{"spec":{"externalTrafficPolicy": "Local"}}'
kubectl describe svc -n istio-system istio-ingressgateway
✅ Output
service/istio-ingressgateway patched
service/istio-ingressgateway patched
service/istio-ingressgateway patched
Name: istio-ingressgateway
Namespace: istio-system
Labels: app=istio-ingressgateway
install.operator.istio.io/owning-resource=unknown
install.operator.istio.io/owning-resource-namespace=istio-system
istio=ingressgateway
istio.io/rev=default
operator.istio.io/component=IngressGateways
operator.istio.io/managed=Reconcile
operator.istio.io/version=1.17.8
release=istio
Annotations: <none>
Selector: app=istio-ingressgateway,istio=ingressgateway
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.200.1.243
IPs: 10.200.1.243
Port: status-port 15021/TCP
TargetPort: 15021/TCP
NodePort: status-port 32563/TCP
Endpoints: 10.10.0.7:15021
Port: http2 80/TCP
TargetPort: 8080/TCP
NodePort: http2 30000/TCP
Endpoints: 10.10.0.7:8080
Port: https 443/TCP
TargetPort: 8443/TCP
NodePort: https 30005/TCP
Endpoints: 10.10.0.7:8443
Session Affinity: None
External Traffic Policy: Local
Internal Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 0s service-controller LoadBalancer -> NodePort
Expose Prometheus, Grafana, Kiali, and Tracing on NodePorts 30001–30004
kubectl patch svc -n istio-system prometheus -p '{"spec": {"type": "NodePort", "ports": [{"port": 9090, "targetPort": 9090, "nodePort": 30001}]}}'
kubectl patch svc -n istio-system grafana -p '{"spec": {"type": "NodePort", "ports": [{"port": 3000, "targetPort": 3000, "nodePort": 30002}]}}'
kubectl patch svc -n istio-system kiali -p '{"spec": {"type": "NodePort", "ports": [{"port": 20001, "targetPort": 20001, "nodePort": 30003}]}}'
kubectl patch svc -n istio-system tracing -p '{"spec": {"type": "NodePort", "ports": [{"port": 80, "targetPort": 16686, "nodePort": 30004}]}}'
✅ Output
service/prometheus patched
service/grafana patched
service/kiali patched
service/tracing patched
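The four patches above all share one shape: switch the Service type to NodePort and pin a nodePort for a single named port. As a minimal sketch (service/port tuples copied from the commands above), the same JSON merge patches can be generated programmatically:

```python
import json

# (service, port, targetPort, nodePort) tuples taken from the patch commands above
EXPOSED = [
    ("prometheus", 9090, 9090, 30001),
    ("grafana", 3000, 3000, 30002),
    ("kiali", 20001, 20001, 30003),
    ("tracing", 80, 16686, 30004),
]

def nodeport_patch(port, target_port, node_port):
    """Build the JSON merge patch string passed to `kubectl patch svc ... -p`."""
    patch = {
        "spec": {
            "type": "NodePort",
            "ports": [
                {"port": port, "targetPort": target_port, "nodePort": node_port}
            ],
        }
    }
    return json.dumps(patch)

for svc, port, tgt, npt in EXPOSED:
    print(f"kubectl patch svc -n istio-system {svc} -p '{nodeport_patch(port, tgt, npt)}'")
```

Note that `tracing` is the one case where `port` and `targetPort` differ: the Service listens on 80 but forwards to the Jaeger UI on 16686.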
Prometheus → http://127.0.0.1:30001

Grafana → http://127.0.0.1:30002

Kiali(NodePort) → http://127.0.0.1:30003

Kiali(Port forward) → http://127.0.0.1:20001
kubectl port-forward deployment/kiali -n istio-system 20001:20001 &

Tracing(Jaeger) → http://127.0.0.1:30004

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: netshoot
spec:
containers:
- name: netshoot
image: nicolaka/netshoot
command: ["tail"]
args: ["-f", "/dev/null"]
terminationGracePeriodSeconds: 0
EOF
# Result
pod/netshoot created
kubectl get pod -n istio-system -l app=istio-ingressgateway
✅ Output
NAME READY STATUS RESTARTS AGE
istio-ingressgateway-996bc6bb6-ll8hl 1/1 Running 0 23m
docker exec -it myk8s-control-plane istioctl proxy-status
✅ Output
NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION
istio-ingressgateway-996bc6bb6-ll8hl.istio-system Kubernetes SYNCED SYNCED SYNCED NOT SENT NOT SENT istiod-7df6ffc78d-jh6x2 1.17.8
docker exec -it myk8s-control-plane istioctl proxy-config all deploy/istio-ingressgateway.istio-system
✅ Output
SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE
BlackHoleCluster - - - STATIC
agent - - - STATIC
grafana.istio-system.svc.cluster.local 3000 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 80 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 443 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 15021 - outbound EDS
istiod.istio-system.svc.cluster.local 443 - outbound EDS
istiod.istio-system.svc.cluster.local 15010 - outbound EDS
istiod.istio-system.svc.cluster.local 15012 - outbound EDS
istiod.istio-system.svc.cluster.local 15014 - outbound EDS
jaeger-collector.istio-system.svc.cluster.local 9411 - outbound EDS
jaeger-collector.istio-system.svc.cluster.local 14250 - outbound EDS
jaeger-collector.istio-system.svc.cluster.local 14268 - outbound EDS
kiali.istio-system.svc.cluster.local 9090 - outbound EDS
kiali.istio-system.svc.cluster.local 20001 - outbound EDS
kube-dns.kube-system.svc.cluster.local 53 - outbound EDS
kube-dns.kube-system.svc.cluster.local 9153 - outbound EDS
kubernetes.default.svc.cluster.local 443 - outbound EDS
metrics-server.kube-system.svc.cluster.local 443 - outbound EDS
prometheus.istio-system.svc.cluster.local 9090 - outbound EDS
prometheus_stats - - - STATIC
sds-grpc - - - STATIC
tracing.istio-system.svc.cluster.local 80 - outbound EDS
tracing.istio-system.svc.cluster.local 16685 - outbound EDS
xds-grpc - - - STATIC
zipkin - - - STRICT_DNS
zipkin.istio-system.svc.cluster.local 9411 - outbound EDS
ADDRESS PORT MATCH DESTINATION
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
NAME DOMAINS MATCH VIRTUAL SERVICE
* /healthz/ready*
* /stats/prometheus*
RESOURCE NAME TYPE STATUS VALID CERT SERIAL NUMBER NOT AFTER NOT BEFORE
default Cert Chain ACTIVE true 123240359314213262205135588074267083810 2025-04-20T12:24:34Z 2025-04-19T12:22:34Z
ROOTCA CA ACTIVE true 148534163974034505686054778049611152978 2035-04-17T12:24:23Z 2025-04-19T12:24:23Z
docker exec -it myk8s-control-plane istioctl proxy-config listener deploy/istio-ingressgateway.istio-system
✅ Output
ADDRESS PORT MATCH DESTINATION
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
docker exec -it myk8s-control-plane istioctl proxy-config routes deploy/istio-ingressgateway.istio-system
✅ Output
NAME DOMAINS MATCH VIRTUAL SERVICE
* /healthz/ready*
* /stats/prometheus*
docker exec -it myk8s-control-plane istioctl proxy-config secret deploy/istio-ingressgateway.istio-system
✅ Output
RESOURCE NAME TYPE STATUS VALID CERT SERIAL NUMBER NOT AFTER NOT BEFORE
default Cert Chain ACTIVE true 123240359314213262205135588074267083810 2025-04-20T12:24:34Z 2025-04-19T12:22:34Z
ROOTCA CA ACTIVE true 148534163974034505686054778049611152978 2035-04-17T12:24:23Z 2025-04-19T12:24:23Z
kubectl get istiooperators -n istio-system -o yaml
✅ Output
apiVersion: v1
items:
- apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
annotations:
install.istio.io/ignoreReconcile: "true"
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"install.istio.io/v1alpha1","kind":"IstioOperator","metadata":{"annotations":{"install.istio.io/ignoreReconcile":"true"},"creationTimestamp":null,"name":"installed-state","namespace":"istio-system"},"spec":{"components":{"base":{"enabled":true},"cni":{"enabled":false},"egressGateways":[{"enabled":false,"name":"istio-egressgateway"}],"ingressGateways":[{"enabled":true,"name":"istio-ingressgateway"}],"istiodRemote":{"enabled":false},"pilot":{"enabled":true}},"hub":"docker.io/istio","meshConfig":{"defaultConfig":{"proxyMetadata":{}},"enablePrometheusMerge":true},"profile":"default","tag":"1.17.8","values":{"base":{"enableCRDTemplates":false,"validationURL":""},"defaultRevision":"","gateways":{"istio-egressgateway":{"autoscaleEnabled":true,"env":{},"name":"istio-egressgateway","secretVolumes":[{"mountPath":"/etc/istio/egressgateway-certs","name":"egressgateway-certs","secretName":"istio-egressgateway-certs"},{"mountPath":"/etc/istio/egressgateway-ca-certs","name":"egressgateway-ca-certs","secretName":"istio-egressgateway-ca-certs"}],"type":"ClusterIP"},"istio-ingressgateway":{"autoscaleEnabled":true,"env":{},"name":"istio-ingressgateway","secretVolumes":[{"mountPath":"/etc/istio/ingressgateway-certs","name":"ingressgateway-certs","secretName":"istio-ingressgateway-certs"},{"mountPath":"/etc/istio/ingressgateway-ca-certs","name":"ingressgateway-ca-certs","secretName":"istio-ingressgateway-ca-certs"}],"type":"LoadBalancer"}},"global":{"configValidation":true,"defaultNodeSelector":{},"defaultPodDisruptionBudget":{"enabled":true},"defaultResources":{"requests":{"cpu":"10m"}},"imagePullPolicy":"","imagePullSecrets":[],"istioNamespace":"istio-system","istiod":{"enableAnalysis":false},"jwtPolicy":"third-party-jwt","logAsJson":false,"logging":{"level":"default:info"},"meshNetworks":{},"mountMtlsCerts":false,"multiCluster":{"clusterName":"","enabled":false},"network":"","omitSidecarInjectorConfigMap":false,"oneNamespace":false,"operatorManageWebhooks":false,"pilotC
ertProvider":"istiod","priorityClassName":"","proxy":{"autoInject":"enabled","clusterDomain":"cluster.local","componentLogLevel":"misc:error","enableCoreDump":false,"excludeIPRanges":"","excludeInboundPorts":"","excludeOutboundPorts":"","image":"proxyv2","includeIPRanges":"*","logLevel":"warning","privileged":false,"readinessFailureThreshold":30,"readinessInitialDelaySeconds":1,"readinessPeriodSeconds":2,"resources":{"limits":{"cpu":"2000m","memory":"1024Mi"},"requests":{"cpu":"100m","memory":"128Mi"}},"statusPort":15020,"tracer":"zipkin"},"proxy_init":{"image":"proxyv2","resources":{"limits":{"cpu":"2000m","memory":"1024Mi"},"requests":{"cpu":"10m","memory":"10Mi"}}},"sds":{"token":{"aud":"istio-ca"}},"sts":{"servicePort":0},"tracer":{"datadog":{},"lightstep":{},"stackdriver":{},"zipkin":{}},"useMCP":false},"istiodRemote":{"injectionURL":""},"pilot":{"autoscaleEnabled":true,"autoscaleMax":5,"autoscaleMin":1,"configMap":true,"cpu":{"targetAverageUtilization":80},"deploymentLabels":null,"enableProtocolSniffingForInbound":true,"enableProtocolSniffingForOutbound":true,"env":{},"image":"pilot","keepaliveMaxServerConnectionAge":"30m","nodeSelector":{},"podLabels":{},"replicaCount":1,"traceSampling":1},"telemetry":{"enabled":true,"v2":{"enabled":true,"metadataExchange":{"wasmEnabled":false},"prometheus":{"enabled":true,"wasmEnabled":false},"stackdriver":{"configOverride":{},"enabled":false,"logging":false,"monitoring":false,"topology":false}}}}}}
creationTimestamp: "2025-04-19T12:24:36Z"
generation: 1
name: installed-state
namespace: istio-system
resourceVersion: "1418"
uid: 44650588-b397-4962-addf-5a6c7e385d1f
spec:
components:
base:
enabled: true
cni:
enabled: false
egressGateways:
- enabled: false
name: istio-egressgateway
ingressGateways:
- enabled: true
name: istio-ingressgateway
istiodRemote:
enabled: false
pilot:
enabled: true
hub: docker.io/istio
meshConfig:
defaultConfig:
proxyMetadata: {}
enablePrometheusMerge: true
profile: default
tag: 1.17.8
values:
base:
enableCRDTemplates: false
validationURL: ""
defaultRevision: ""
gateways:
istio-egressgateway:
autoscaleEnabled: true
env: {}
name: istio-egressgateway
secretVolumes:
- mountPath: /etc/istio/egressgateway-certs
name: egressgateway-certs
secretName: istio-egressgateway-certs
- mountPath: /etc/istio/egressgateway-ca-certs
name: egressgateway-ca-certs
secretName: istio-egressgateway-ca-certs
type: ClusterIP
istio-ingressgateway:
autoscaleEnabled: true
env: {}
name: istio-ingressgateway
secretVolumes:
- mountPath: /etc/istio/ingressgateway-certs
name: ingressgateway-certs
secretName: istio-ingressgateway-certs
- mountPath: /etc/istio/ingressgateway-ca-certs
name: ingressgateway-ca-certs
secretName: istio-ingressgateway-ca-certs
type: LoadBalancer
global:
configValidation: true
defaultNodeSelector: {}
defaultPodDisruptionBudget:
enabled: true
defaultResources:
requests:
cpu: 10m
imagePullPolicy: ""
imagePullSecrets: []
istioNamespace: istio-system
istiod:
enableAnalysis: false
jwtPolicy: third-party-jwt
logAsJson: false
logging:
level: default:info
meshNetworks: {}
mountMtlsCerts: false
multiCluster:
clusterName: ""
enabled: false
network: ""
omitSidecarInjectorConfigMap: false
oneNamespace: false
operatorManageWebhooks: false
pilotCertProvider: istiod
priorityClassName: ""
proxy:
autoInject: enabled
clusterDomain: cluster.local
componentLogLevel: misc:error
enableCoreDump: false
excludeIPRanges: ""
excludeInboundPorts: ""
excludeOutboundPorts: ""
image: proxyv2
includeIPRanges: '*'
logLevel: warning
privileged: false
readinessFailureThreshold: 30
readinessInitialDelaySeconds: 1
readinessPeriodSeconds: 2
resources:
limits:
cpu: 2000m
memory: 1024Mi
requests:
cpu: 100m
memory: 128Mi
statusPort: 15020
tracer: zipkin
proxy_init:
image: proxyv2
resources:
limits:
cpu: 2000m
memory: 1024Mi
requests:
cpu: 10m
memory: 10Mi
sds:
token:
aud: istio-ca
sts:
servicePort: 0
tracer:
datadog: {}
lightstep: {}
stackdriver: {}
zipkin: {}
useMCP: false
istiodRemote:
injectionURL: ""
pilot:
autoscaleEnabled: true
autoscaleMax: 5
autoscaleMin: 1
configMap: true
cpu:
targetAverageUtilization: 80
deploymentLabels: null
enableProtocolSniffingForInbound: true
enableProtocolSniffingForOutbound: true
env: {}
image: pilot
keepaliveMaxServerConnectionAge: 30m
nodeSelector: {}
podLabels: {}
replicaCount: 1
traceSampling: 1
telemetry:
enabled: true
v2:
enabled: true
metadataExchange:
wasmEnabled: false
prometheus:
enabled: true
wasmEnabled: false
stackdriver:
configOverride: {}
enabled: false
logging: false
monitoring: false
topology: false
kind: List
metadata:
resourceVersion: ""
kubectl exec -n istio-system deploy/istio-ingressgateway -- ps
✅ Output
PID TTY TIME CMD
1 ? 00:00:00 pilot-agent
33 ? 00:00:09 envoy
77 ? 00:00:00 ps
The pilot-agent process bootstraps and supervises the envoy process
kubectl exec -n istio-system deploy/istio-ingressgateway -- ps aux
✅ Output
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
istio-p+ 1 0.0 0.1 756524 57032 ? Ssl 12:24 0:00 /usr/local/bin/pilot-agent proxy router --domain istio-system.svc.cluster.local --proxyLogLevel=warning --proxyComponentLogLevel=misc:error --log_output_level=default:info
istio-p+ 33 0.5 0.1 448404 60068 ? Sl 12:24 0:10 /usr/local/bin/envoy -c etc/istio/proxy/envoy-rev.json --drain-time-s 45 --drain-strategy immediate --local-address-ip-version v4 --file-flush-interval-msec 1000 --disable-hot-restart --allow-unknown-static-fields --log-format %Y-%m-%dT%T.%fZ.%l.envoy %n %g:%#.%v.thread=%t -l warning --component-log-level misc:error
istio-p+ 83 0.0 0.0 7060 2668 ? Rs 12:54 0:00 ps aux
kubectl exec -n istio-system deploy/istio-ingressgateway -- whoami
✅ Output
istio-proxy
To redirect traffic, the proxy's user ID must be distinguishable from the application's
kubectl exec -n istio-system deploy/istio-ingressgateway -- id
✅ Output
uid=1337(istio-proxy) gid=1337(istio-proxy) groups=1337(istio-proxy)

kubectl stern -n istio-system -l app=istiod
✅ Output
+ istiod-7df6ffc78d-jh6x2 › discovery
...
istiod-7df6ffc78d-jh6x2 discovery 2025-04-19T13:25:45.594474Z info ads ADS: "10.10.0.7:36318" istio-ingressgateway-996bc6bb6-ll8hl.istio-system-2 terminated
istiod-7df6ffc78d-jh6x2 discovery 2025-04-19T13:25:45.610202Z info ads ADS: new connection for node:istio-ingressgateway-996bc6bb6-ll8hl.istio-system-3
istiod-7df6ffc78d-jh6x2 discovery 2025-04-19T13:25:45.610317Z info ads CDS: PUSH request for node:istio-ingressgateway-996bc6bb6-ll8hl.istio-system resources:22 size:21.8kB cached:21/21
istiod-7df6ffc78d-jh6x2 discovery 2025-04-19T13:25:45.610389Z info ads EDS: PUSH request for node:istio-ingressgateway-996bc6bb6-ll8hl.istio-system resources:21 size:3.6kB empty:0 cached:18/21
istiod-7df6ffc78d-jh6x2 discovery 2025-04-19T13:25:45.610407Z info ads LDS: PUSH request for node:istio-ingressgateway-996bc6bb6-ll8hl.istio-system resources:0 size:0B
HTTP (port 80) Gateway configuration in ch4/coolstore-gw.yaml
cat ch4/coolstore-gw.yaml
✅ Output
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: coolstore-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "webapp.istioinaction.io"
(1) Create the Gateway
kubectl -n istioinaction apply -f ch4/coolstore-gw.yaml
# Result
gateway.networking.istio.io/coolstore-gateway created
(2) Verify creation
kubectl get gw,vs -n istioinaction
✅ Output
NAME AGE
gateway.networking.istio.io/coolstore-gateway 46s
(3) Verify sync with the Ingress Gateway
docker exec -it myk8s-control-plane istioctl proxy-status
✅ Output
NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION
istio-ingressgateway-996bc6bb6-ll8hl.istio-system Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-7df6ffc78d-jh6x2 1.17.8
docker exec -it myk8s-control-plane istioctl proxy-config listener deploy/istio-ingressgateway.istio-system
✅ Output
ADDRESS PORT MATCH DESTINATION
0.0.0.0 8080 ALL Route: http.8080
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
Check the default route state bound to the Gateway
docker exec -it myk8s-control-plane istioctl proxy-config routes deploy/istio-ingressgateway.istio-system
✅ Output
NAME DOMAINS MATCH VIRTUAL SERVICE
http.8080 * /* 404
* /healthz/ready*
* /stats/prometheus*
http.8080 is handled by the BlackHole (404) route because no VirtualService exists yet
kubectl get svc -n istio-system istio-ingressgateway -o jsonpath="{.spec.ports}" | jq
✅ Output
[
{
"name": "status-port",
"nodePort": 32563,
"port": 15021,
"protocol": "TCP",
"targetPort": 15021
},
{
"name": "http2",
"nodePort": 30000,
"port": 80,
"protocol": "TCP",
"targetPort": 8080
},
{
"name": "https",
"nodePort": 30005,
"port": 443,
"protocol": "TCP",
"targetPort": 8443
}
]
With no VirtualService attached, all traffic is sent to the BlackHole (404) route
docker exec -it myk8s-control-plane istioctl proxy-config routes deploy/istio-ingressgateway.istio-system -o json --name http.8080
✅ Output
[
{
"name": "http.8080",
"virtualHosts": [
{
"name": "blackhole:80",
"domains": [
"*"
]
}
],
"validateClusters": false,
"ignorePortInHostMatching": true
}
]

cat ch4/coolstore-vs.yaml
✅ Output
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: webapp-vs-from-gw
spec:
hosts:
- "webapp.istioinaction.io"
gateways:
- coolstore-gateway
http:
- route:
- destination:
host: webapp
port:
number: 80
(1) Create the VirtualService in the istioinaction namespace
kubectl apply -n istioinaction -f ch4/coolstore-vs.yaml
# Result
virtualservice.networking.istio.io/webapp-vs-from-gw created
(2) List the Gateway and VirtualService
kubectl get gw,vs -n istioinaction
✅ Output
NAME AGE
gateway.networking.istio.io/coolstore-gateway 10m
NAME GATEWAYS HOSTS AGE
virtualservice.networking.istio.io/webapp-vs-from-gw ["coolstore-gateway"] ["webapp.istioinaction.io"] 34s
docker exec -it myk8s-control-plane istioctl proxy-status
✅ Output
NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION
istio-ingressgateway-996bc6bb6-ll8hl.istio-system Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-7df6ffc78d-jh6x2 1.17.8
docker exec -it myk8s-control-plane istioctl proxy-config routes deploy/istio-ingressgateway.istio-system
✅ Output
NAME DOMAINS MATCH VIRTUAL SERVICE
http.8080 webapp.istioinaction.io /* webapp-vs-from-gw.istioinaction
* /healthz/ready*
* /stats/prometheus*
docker exec -it myk8s-control-plane istioctl proxy-config routes deploy/istio-ingressgateway.istio-system -o json --name http.8080
✅ Output
[
{
"name": "http.8080",
"virtualHosts": [
{
"name": "webapp.istioinaction.io:80",
"domains": [
"webapp.istioinaction.io" #1 domains to match against
],
"routes": [ #2 where to route
{
"match": {
"prefix": "/"
},
"route": {
"cluster": "outbound|80||webapp.istioinaction.svc.cluster.local",
"timeout": "0s",
"retryPolicy": {
"retryOn": "connect-failure,refused-stream,unavailable,cancelled,retriable-status-codes",
"numRetries": 2,
"retryHostPredicate": [
{
"name": "envoy.retry_host_predicates.previous_hosts",
"typedConfig": {
"@type": "type.googleapis.com/envoy.extensions.retry.host.previous_hosts.v3.PreviousHostsPredicate"
}
}
],
"hostSelectionRetryMaxAttempts": "5",
"retriableStatusCodes": [
503
]
},
"maxGrpcTimeout": "0s"
},
"metadata": {
"filterMetadata": {
"istio": {
"config": "/apis/networking.istio.io/v1alpha3/namespaces/istioinaction/virtual-service/webapp-vs-from-gw"
}
}
},
"decorator": {
"operation": "webapp.istioinaction.svc.cluster.local:80/*"
}
}
],
"includeRequestAttemptCount": true
}
],
"validateClusters": false,
"ignorePortInHostMatching": true
}
]
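The route dump above also shows Istio's default retry policy: up to 2 retries, triggered by the transport-level conditions in retryOn, and by HTTP status 503 via retriable-status-codes. A rough Python sketch of that decision (a simplification for illustration, not Envoy's actual retry logic):

```python
# Fields copied from the retryPolicy in the route dump above.
RETRY_ON = {"connect-failure", "refused-stream", "unavailable",
            "cancelled", "retriable-status-codes"}
NUM_RETRIES = 2
RETRIABLE_STATUS_CODES = {503}

def may_retry(attempt, event, status=None):
    """Simplified decision: may this failed attempt be retried?

    attempt -- 1-based index of the attempt that just failed
    event   -- failure kind, e.g. "connect-failure", or "response"
    status  -- HTTP status code when event == "response"
    """
    if attempt > NUM_RETRIES:   # retry budget exhausted (numRetries: 2)
        return False
    if event == "response":
        # only statuses listed in retriableStatusCodes are retried
        return "retriable-status-codes" in RETRY_ON and status in RETRIABLE_STATUS_CODES
    return event in RETRY_ON    # transport-level failures

print(may_retry(1, "response", 503))    # True: 503 is retriable
print(may_retry(1, "response", 500))    # False: 500 is not in the list
print(may_retry(3, "connect-failure"))  # False: budget exhausted
```

So a plain 500 from the upstream is returned to the client as-is, while a 503 or a connection failure gets up to two more attempts.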
Cluster list on the Ingress Gateway before the application services are deployed
docker exec -it myk8s-control-plane istioctl proxy-config cluster deploy/istio-ingressgateway.istio-system
✅ Output
SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE
BlackHoleCluster - - - STATIC
agent - - - STATIC
grafana.istio-system.svc.cluster.local 3000 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 80 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 443 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 15021 - outbound EDS
istiod.istio-system.svc.cluster.local 443 - outbound EDS
istiod.istio-system.svc.cluster.local 15010 - outbound EDS
istiod.istio-system.svc.cluster.local 15012 - outbound EDS
istiod.istio-system.svc.cluster.local 15014 - outbound EDS
jaeger-collector.istio-system.svc.cluster.local 9411 - outbound EDS
jaeger-collector.istio-system.svc.cluster.local 14250 - outbound EDS
jaeger-collector.istio-system.svc.cluster.local 14268 - outbound EDS
kiali.istio-system.svc.cluster.local 9090 - outbound EDS
kiali.istio-system.svc.cluster.local 20001 - outbound EDS
kube-dns.kube-system.svc.cluster.local 53 - outbound EDS
kube-dns.kube-system.svc.cluster.local 9153 - outbound EDS
kubernetes.default.svc.cluster.local 443 - outbound EDS
metrics-server.kube-system.svc.cluster.local 443 - outbound EDS
prometheus.istio-system.svc.cluster.local 9090 - outbound EDS
prometheus_stats - - - STATIC
sds-grpc - - - STATIC
tracing.istio-system.svc.cluster.local 80 - outbound EDS
tracing.istio-system.svc.cluster.local 16685 - outbound EDS
xds-grpc - - - STATIC
zipkin - - - STRICT_DNS
zipkin.istio-system.svc.cluster.local 9411 - outbound EDS
The webapp and catalog clusters are not registered yet
(1) Deploy the catalog and webapp services
kubectl apply -f services/catalog/kubernetes/catalog.yaml -n istioinaction
kubectl apply -f services/webapp/kubernetes/webapp.yaml -n istioinaction
# Result
serviceaccount/catalog created
service/catalog created
deployment.apps/catalog created
serviceaccount/webapp created
service/webapp created
deployment.apps/webapp created
(2) Check the pods
kubectl get pod -n istioinaction -owide
✅ Output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
catalog-6cf4b97d-zcn65 2/2 Running 0 27s 10.10.0.13 myk8s-control-plane <none> <none>
webapp-7685bcb84-2kjbl 2/2 Running 0 27s 10.10.0.14 myk8s-control-plane <none> <none>
(3) Verify proxy sync
docker exec -it myk8s-control-plane istioctl proxy-status
✅ Output
NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION
catalog-6cf4b97d-zcn65.istioinaction Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-7df6ffc78d-jh6x2 1.17.8
istio-ingressgateway-996bc6bb6-ll8hl.istio-system Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-7df6ffc78d-jh6x2 1.17.8
webapp-7685bcb84-2kjbl.istioinaction Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-7df6ffc78d-jh6x2 1.17.8
(4) Check cluster info
docker exec -it myk8s-control-plane istioctl proxy-config cluster deploy/istio-ingressgateway.istio-system | egrep 'TYPE|istioinaction'
✅ Output
SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE
catalog.istioinaction.svc.cluster.local 80 - outbound EDS
webapp.istioinaction.svc.cluster.local 80 - outbound EDS
(5) Check endpoint info
docker exec -it myk8s-control-plane istioctl proxy-config endpoint deploy/istio-ingressgateway.istio-system | egrep 'ENDPOINT|istioinaction'
✅ Output
ENDPOINT STATUS OUTLIER CHECK CLUSTER
10.10.0.13:3000 HEALTHY OK outbound|80||catalog.istioinaction.svc.cluster.local
10.10.0.14:8080 HEALTHY OK outbound|80||webapp.istioinaction.svc.cluster.local
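The CLUSTER column above uses Istio's `direction|port|subset|fqdn` cluster-naming convention (note the empty subset field between the two pipes, since no DestinationRule subsets exist yet). A small sketch that unpacks such a name:

```python
from typing import NamedTuple

class IstioCluster(NamedTuple):
    direction: str   # "outbound" or "inbound"
    port: int        # Kubernetes Service port
    subset: str      # DestinationRule subset name, empty if none
    fqdn: str        # Service FQDN

def parse_cluster(name):
    """Split an Envoy cluster name like 'outbound|80||webapp...' into its parts."""
    direction, port, subset, fqdn = name.split("|")
    return IstioCluster(direction, int(port), subset, fqdn)

c = parse_cluster("outbound|80||webapp.istioinaction.svc.cluster.local")
print(c.direction, c.port, c.subset or "(no subset)", c.fqdn)
```

The endpoint addresses (10.10.0.13:3000, 10.10.0.14:8080) are pod IP:targetPort pairs, while the cluster name carries the Service port (80): EDS resolves the Service port to the actual container ports.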
(1) Verify access to catalog
kubectl exec -it netshoot -- curl -s http://catalog.istioinaction/items/1 | jq
✅ Output
{
"id": 1,
"color": "amber",
"department": "Eyewear",
"name": "Elinor Glasses",
"price": "282.00"
}
(2) Verify access to webapp
kubectl exec -it netshoot -- curl -s http://webapp.istioinaction/api/catalog/items/1 | jq
✅ Output
{
"id": 1,
"color": "amber",
"department": "Eyewear",
"name": "Elinor Glasses",
"price": "282.00"
}
(1) Switch the Ingress Gateway HTTP logger to debug
kubectl exec -it deploy/istio-ingressgateway -n istio-system -- curl -X POST http://localhost:15000/logging\?http\=debug
✅ Output
active loggers:
admin: warning
alternate_protocols_cache: warning
aws: warning
assert: warning
backtrace: warning
cache_filter: warning
client: warning
config: warning
connection: warning
conn_handler: warning
decompression: warning
dns: warning
dubbo: warning
envoy_bug: warning
ext_authz: warning
ext_proc: warning
rocketmq: warning
file: warning
filter: warning
forward_proxy: warning
grpc: warning
happy_eyeballs: warning
hc: warning
health_checker: warning
http: debug
http2: warning
hystrix: warning
init: warning
io: warning
jwt: warning
kafka: warning
key_value_store: warning
lua: warning
main: warning
matcher: warning
misc: error
mongo: warning
multi_connection: warning
oauth2: warning
quic: warning
quic_stream: warning
pool: warning
rate_limit_quota: warning
rbac: warning
rds: warning
redis: warning
router: warning
runtime: warning
stats: warning
secret: warning
tap: warning
testing: warning
thrift: warning
tracing: warning
upstream: warning
udp: warning
wasm: warning
websocket: warning
(2) Tail the logs in real time
kubectl logs -n istio-system -l app=istio-ingressgateway -f
(3) Call with the wrong Host header → 404
curl http://localhost:30000/api/catalog -v
✅ Output
* Host localhost:30000 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:30000...
* connect to ::1 port 30000 from ::1 port 47290 failed: Connection refused
* Trying 127.0.0.1:30000...
* Connected to localhost (127.0.0.1) port 30000
* using HTTP/1.x
> GET /api/catalog HTTP/1.1
> Host: localhost:30000
> User-Agent: curl/8.13.0
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 404 Not Found
< date: Sat, 19 Apr 2025 14:07:08 GMT
< server: istio-envoy
< content-length: 0
<
* Connection #0 to host localhost left intact
(4) Call with the correct Host header → normal response
curl -s http://localhost:30000/api/catalog -H "Host: webapp.istioinaction.io" | jq
✅ Output
[
{
"id": 1,
"color": "amber",
"department": "Eyewear",
"name": "Elinor Glasses",
"price": "282.00"
},
{
"id": 2,
"color": "cyan",
"department": "Clothing",
"name": "Atlas Shirt",
"price": "127.00"
},
{
"id": 3,
"color": "teal",
"department": "Clothing",
"name": "Small Metal Shoes",
"price": "232.00"
},
{
"id": 4,
"color": "red",
"department": "Watches",
"name": "Red Dragon Watch",
"price": "232.00"
}
]
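The 404 versus 200 split above is virtual-host selection: Envoy compares the request's Host header (port stripped, since the route dump shows ignorePortInHostMatching: true) against each virtual host's domains, and returns 404 when nothing matches. A simplified sketch of that matching (exact and "*" matches only; real Envoy also handles suffix/prefix wildcards, and this is not its actual algorithm):

```python
# Virtual-host domains as in the http.8080 route dump above.
VIRTUAL_HOSTS = {
    "webapp.istioinaction.io:80": ["webapp.istioinaction.io"],
}

def select_virtual_host(host_header):
    """Pick a virtual host for a request, or None (-> 404 from Envoy).

    Strips a trailing :port first, mirroring ignorePortInHostMatching.
    """
    host = host_header.rsplit(":", 1)[0]
    for name, domains in VIRTUAL_HOSTS.items():
        if host in domains or "*" in domains:
            return name
    return None

print(select_virtual_host("localhost:30000"))          # no match -> 404
print(select_virtual_host("webapp.istioinaction.io"))  # matched
```

Before the VirtualService was applied, the only virtual host was blackhole:80 with domains ["*"], which is why every Host value got a 404 then; now only webapp.istioinaction.io is routed.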
docker exec -it myk8s-control-plane istioctl proxy-config secret deploy/istio-ingressgateway.istio-system
✅ Output
RESOURCE NAME TYPE STATUS VALID CERT SERIAL NUMBER NOT AFTER NOT BEFORE
default Cert Chain ACTIVE true 123240359314213262205135588074267083810 2025-04-20T12:24:34Z 2025-04-19T12:22:34Z
ROOTCA CA ACTIVE true 148534163974034505686054778049611152978 2035-04-17T12:24:23Z 2025-04-19T12:24:23Z
(1) Print the private key
cat ch4/certs/3_application/private/webapp.istioinaction.io.key.pem
✅ Output
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAr9h0Mp2CFavJPi4kNBHVqN5XhMem2w+L3n3gLFZw8kLu5v9i
q1IxLNWgyBZ/9mWoJZDbZ0GuYWQA/nYCw4cq3ZB5bdkjDoSHXt1Tfs5LJgRsXkI4
pjPjuW9TcFdqzXytQvVD/Qs93Kn/kXk12IgiQzmPSQhcd1RQoqvUVljmZ4bTh94k
BCmP+S3qD3PbwCtwZRtW61AfveMPeHibd3clrLbtxjaoMzd293xB/MM27eNvw1im
wFPZty7jQQMtDmU4LRhEmB3b080sj9SmkRGz/raspmr51Vc3/o+6daTMmD96q4D2
2+QTM/FgkZ7hgbu1wO5RwO7aZ3MbJG/fJHVRdwIDAQABAoIBAEpIcQW0vfAzqoam
7UpFwnFcw7HmuVjO33I00I9KUNo2Zj+U4PSoeveKoyoDPzkyRm7gG58qAuVHXpgf
+BjrL7N7RaCe2o1WdO0hKBVoRhygP7st1Ep5nxiFq8TIWOjHY1Xm0DrEFfTyp3Cn
uJRpJbgqR5o9evo51vpxBfkYAvT2LvjSYXMgBnwkcWWane9JK/cBofapNDODWMj7
cclkXQ22aoPzqmfcnRIquVz6uYRv45qU51FnxnhQV2FQik0e3DrNc9+7J2Bipezd
DogIOT4nbH4trLP7l6YxOf6ZWupRu0JHNyeoPDRFYJlvwJttUXz/QUwfDS1RPojI
aTFlueECgYEA5TBpu87nHaC7VoY4LUhS8eBkuOFn3v3wrwgbcwqDn2/hitj33YB9
T6I7bbZ8PozXF1iaC8/4QOVQlmhlCYQTRCBUk0ZKs22C987gXohhAmku8p+7Hgvo
2ObEftG0OzJ0j/vwzit2uYW6ZqP/fx6GUDmMhWyzo2gJxmYBiSXgwhECgYEAxGqL
2Gxua1D6DCSLLmZSZMFEAwCyH3kOGvkpAfNoqOdLpugL/s8IHWBLwQ2QtaSQshbC
eKrGLX2J3EccB4s77Itk6kKfaUesVyNeda9buxPA1FZRz0BQy0iShSU+swpyKOhy
EMmCpa0NswsW03AzfhXs1ONeruFRdPpvPgmc0wcCgYAq0mnfCmCKW57FItzaMRo2
UTvgg1UaCA5xVa1zSDKhlpDolXNycnB3cZNzA1ahhUUm+ooFzPzQe0gcYjMGnSPQ
Zc4HmmmYrsx6qq+nWgnuHmMEOC4JBiaiaDOsklf/e4Tl5ifvDZXoQgE67kdto/Fq
ieYkg9Pooya4aBS/YFFnUQKBgEVJzMFxJtamvz6vWYXpxKEUaHiiszNVEfvD74pn
ooEK7u4XJ7wgrp0mTjLxJR5eykh4rOvCWpzLj2lskF+850u/tL7K98884Hfw2y6q
yLJK+pgtRzjUWGwN0tozVFX2lmUF8s8nNvZZAN8rR0cZaqDM/TnwZ4NLqt+YRMve
ujrbAoGBAL04BzlJBMtoT9lRMJsveiT364Eg3PKnF+b4FZctLDC64wAZ04b7Mm4P
wfz+Xt6wjorvbj3XdWbgQNq/FBkeeqFXB88wC/WZSmyzN8gQaSXS8qSAMpKzoOXw
KSgYWGmCLTKckavPKhMKps2wpY848gImQVK1DTnCO04+xPOb2mnz
-----END RSA PRIVATE KEY-----
(2) Print and inspect the server certificate
cat ch4/certs/3_application/certs/webapp.istioinaction.io.cert.pem
✅ Output
-----BEGIN CERTIFICATE-----
MIIFXzCCA0egAwIBAgIDEAISMA0GCSqGSIb3DQEBCwUAME4xCzAJBgNVBAYTAlVT
MQ8wDQYDVQQIDAZEZW5pYWwxDDAKBgNVBAoMA0RpczEgMB4GA1UEAwwXd2ViYXBw
LmlzdGlvaW5hY3Rpb24uaW8wHhcNMjEwNzA0MTI0OTMyWhcNNDEwNjI5MTI0OTMy
WjBkMQswCQYDVQQGEwJVUzEPMA0GA1UECAwGRGVuaWFsMRQwEgYDVQQHDAtTcHJp
bmdmaWVsZDEMMAoGA1UECgwDRGlzMSAwHgYDVQQDDBd3ZWJhcHAuaXN0aW9pbmFj
dGlvbi5pbzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK/YdDKdghWr
yT4uJDQR1ajeV4THptsPi9594CxWcPJC7ub/YqtSMSzVoMgWf/ZlqCWQ22dBrmFk
AP52AsOHKt2QeW3ZIw6Eh17dU37OSyYEbF5COKYz47lvU3BXas18rUL1Q/0LPdyp
/5F5NdiIIkM5j0kIXHdUUKKr1FZY5meG04feJAQpj/kt6g9z28ArcGUbVutQH73j
D3h4m3d3Jay27cY2qDM3dvd8QfzDNu3jb8NYpsBT2bcu40EDLQ5lOC0YRJgd29PN
LI/UppERs/62rKZq+dVXN/6PunWkzJg/equA9tvkEzPxYJGe4YG7tcDuUcDu2mdz
GyRv3yR1UXcCAwEAAaOCAS4wggEqMAkGA1UdEwQCMAAwEQYJYIZIAYb4QgEBBAQD
AgZAMDMGCWCGSAGG+EIBDQQmFiRPcGVuU1NMIEdlbmVyYXRlZCBTZXJ2ZXIgQ2Vy
dGlmaWNhdGUwHQYDVR0OBBYEFIcOXqRMpVfFbZeVZMR9YB67B5T0MIGQBgNVHSME
gYgwgYWAFLnzhAgiNyzTdRjSB8RvTmepDH0UoWikZjBkMQswCQYDVQQGEwJVUzEP
MA0GA1UECAwGRGVuaWFsMRQwEgYDVQQHDAtTcHJpbmdmaWVsZDEMMAoGA1UECgwD
RGlzMSAwHgYDVQQDDBd3ZWJhcHAuaXN0aW9pbmFjdGlvbi5pb4IDEAISMA4GA1Ud
DwEB/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDATANBgkqhkiG9w0BAQsFAAOC
AgEARmCzAuNyZRgtzUhYKVoZJHk5TzzGxqNzXsiuv6b/TvT+TQvJPzlbQfy+/XLN
edFIvXJFLrB/6jb/Mdnz/B+ctfqXVlRBE3h7cQz+fUyb2/t4qzOlaZZEXCJy62xc
8UwQIFbU5NOK3pNjE3aCGoiu9db/TSdGyYQqsIjqfaMTrEwFF6s4zM0StrTF+VQu
+1YP6cwCqJdD+cvQrhe1fPEILbg/6foQ0ugEQMbUeKHerq3wa+NQ1Buc2YOs6DjZ
Yi1JGmiost1I8ul96NbMDIz8QlSoIIuUVYgUhZfMjAWRH85dJZckEpu43RxuYuO7
lEVLkvEIYKIdt+d7Ae8NWT0v5G/nG/T2Sb6aJ+lzT+kmaYMYW7aaJ7kDX14ETK4g
ISRGbLVfBF4uYLlBWY0hxvdVnHqQIcrnxlvnZXlmLbYejcdiCAr8BImMHYo4BJ3X
YaAwAU8ZSma/pPvgf9h1KMKA0Fhb8ijJ5s60lXBvLRgzOz1eB8yVgApWFTjQ+hRk
pM9Poo5OVoIYuI4SzHlBl1df2wGCmW6h+7OlJcn7m7ZYVO0/QYsie5sl1nqrRU9y
28ZfATe9HnZETCOORCro8LD1mDHN9pNqR8acG0s1cRnsNyTbdIRJ4Pxg4K71t2rs
NelTeXRTAz2iM7x5jxzzTa1Sv7T4TCHbuiUepUeYOdBVFjg=
-----END CERTIFICATE-----
openssl x509 -in ch4/certs/3_application/certs/webapp.istioinaction.io.cert.pem -noout -text
✅ Output
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1049106 (0x100212)
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=US, ST=Denial, O=Dis, CN=webapp.istioinaction.io
Validity
Not Before: Jul 4 12:49:32 2021 GMT
Not After : Jun 29 12:49:32 2041 GMT
Subject: C=US, ST=Denial, L=Springfield, O=Dis, CN=webapp.istioinaction.io
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:af:d8:74:32:9d:82:15:ab:c9:3e:2e:24:34:11:
d5:a8:de:57:84:c7:a6:db:0f:8b:de:7d:e0:2c:56:
70:f2:42:ee:e6:ff:62:ab:52:31:2c:d5:a0:c8:16:
7f:f6:65:a8:25:90:db:67:41:ae:61:64:00:fe:76:
02:c3:87:2a:dd:90:79:6d:d9:23:0e:84:87:5e:dd:
53:7e:ce:4b:26:04:6c:5e:42:38:a6:33:e3:b9:6f:
53:70:57:6a:cd:7c:ad:42:f5:43:fd:0b:3d:dc:a9:
ff:91:79:35:d8:88:22:43:39:8f:49:08:5c:77:54:
50:a2:ab:d4:56:58:e6:67:86:d3:87:de:24:04:29:
8f:f9:2d:ea:0f:73:db:c0:2b:70:65:1b:56:eb:50:
1f:bd:e3:0f:78:78:9b:77:77:25:ac:b6:ed:c6:36:
a8:33:37:76:f7:7c:41:fc:c3:36:ed:e3:6f:c3:58:
a6:c0:53:d9:b7:2e:e3:41:03:2d:0e:65:38:2d:18:
44:98:1d:db:d3:cd:2c:8f:d4:a6:91:11:b3:fe:b6:
ac:a6:6a:f9:d5:57:37:fe:8f:ba:75:a4:cc:98:3f:
7a:ab:80:f6:db:e4:13:33:f1:60:91:9e:e1:81:bb:
b5:c0:ee:51:c0:ee:da:67:73:1b:24:6f:df:24:75:
51:77
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Basic Constraints:
CA:FALSE
Netscape Cert Type:
SSL Server
Netscape Comment:
OpenSSL Generated Server Certificate
X509v3 Subject Key Identifier:
87:0E:5E:A4:4C:A5:57:C5:6D:97:95:64:C4:7D:60:1E:BB:07:94:F4
X509v3 Authority Key Identifier:
keyid:B9:F3:84:08:22:37:2C:D3:75:18:D2:07:C4:6F:4E:67:A9:0C:7D:14
DirName:/C=US/ST=Denial/L=Springfield/O=Dis/CN=webapp.istioinaction.io
serial:10:02:12
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication
Signature Algorithm: sha256WithRSAEncryption
Signature Value:
46:60:b3:02:e3:72:65:18:2d:cd:48:58:29:5a:19:24:79:39:
4f:3c:c6:c6:a3:73:5e:c8:ae:bf:a6:ff:4e:f4:fe:4d:0b:c9:
3f:39:5b:41:fc:be:fd:72:cd:79:d1:48:bd:72:45:2e:b0:7f:
ea:36:ff:31:d9:f3:fc:1f:9c:b5:fa:97:56:54:41:13:78:7b:
71:0c:fe:7d:4c:9b:db:fb:78:ab:33:a5:69:96:44:5c:22:72:
eb:6c:5c:f1:4c:10:20:56:d4:e4:d3:8a:de:93:63:13:76:82:
1a:88:ae:f5:d6:ff:4d:27:46:c9:84:2a:b0:88:ea:7d:a3:13:
ac:4c:05:17:ab:38:cc:cd:12:b6:b4:c5:f9:54:2e:fb:56:0f:
e9:cc:02:a8:97:43:f9:cb:d0:ae:17:b5:7c:f1:08:2d:b8:3f:
e9:fa:10:d2:e8:04:40:c6:d4:78:a1:de:ae:ad:f0:6b:e3:50:
d4:1b:9c:d9:83:ac:e8:38:d9:62:2d:49:1a:68:a8:b2:dd:48:
f2:e9:7d:e8:d6:cc:0c:8c:fc:42:54:a8:20:8b:94:55:88:14:
85:97:cc:8c:05:91:1f:ce:5d:25:97:24:12:9b:b8:dd:1c:6e:
62:e3:bb:94:45:4b:92:f1:08:60:a2:1d:b7:e7:7b:01:ef:0d:
59:3d:2f:e4:6f:e7:1b:f4:f6:49:be:9a:27:e9:73:4f:e9:26:
69:83:18:5b:b6:9a:27:b9:03:5f:5e:04:4c:ae:20:21:24:46:
6c:b5:5f:04:5e:2e:60:b9:41:59:8d:21:c6:f7:55:9c:7a:90:
21:ca:e7:c6:5b:e7:65:79:66:2d:b6:1e:8d:c7:62:08:0a:fc:
04:89:8c:1d:8a:38:04:9d:d7:61:a0:30:01:4f:19:4a:66:bf:
a4:fb:e0:7f:d8:75:28:c2:80:d0:58:5b:f2:28:c9:e6:ce:b4:
95:70:6f:2d:18:33:3b:3d:5e:07:cc:95:80:0a:56:15:38:d0:
fa:14:64:a4:cf:4f:a2:8e:4e:56:82:18:b8:8e:12:cc:79:41:
97:57:5f:db:01:82:99:6e:a1:fb:b3:a5:25:c9:fb:9b:b6:58:
54:ed:3f:41:8b:22:7b:9b:25:d6:7a:ab:45:4f:72:db:c6:5f:
01:37:bd:1e:76:44:4c:23:8e:44:2a:e8:f0:b0:f5:98:31:cd:
f6:93:6a:47:c6:9c:1b:4b:35:71:19:ec:37:24:db:74:84:49:
e0:fc:60:e0:ae:f5:b7:6a:ec:35:e9:53:79:74:53:03:3d:a2:
33:bc:79:8f:1c:f3:4d:ad:52:bf:b4:f8:4c:21:db:ba:25:1e:
a5:47:98:39:d0:55:16:38
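Before loading this pair into a secret, it is worth confirming that the private key actually matches the certificate. A self-contained sketch of the check, assuming `openssl` is available; it compares the public-key digests of a throwaway self-signed pair under `/tmp` standing in for the `ch4` files:

```shell
# Generate a throwaway key and self-signed cert (stand-ins for the ch4 files).
openssl genrsa -out /tmp/demo.key 2048 2>/dev/null
openssl req -new -x509 -key /tmp/demo.key -days 1 \
  -subj "/CN=webapp.istioinaction.io" -out /tmp/demo.crt

# A key and a cert belong together iff they embed the same public key.
key_pub=$(openssl pkey -in /tmp/demo.key -pubout -outform DER 2>/dev/null | openssl sha256)
crt_pub=$(openssl x509 -in /tmp/demo.crt -pubkey -noout \
  | openssl pkey -pubin -outform DER 2>/dev/null | openssl sha256)

if [ "$key_pub" = "$crt_pub" ]; then match=yes; else match=no; fi
echo "key/cert match: $match"
```

Running the same two digest commands against `webapp.istioinaction.io.key.pem` and `webapp.istioinaction.io.cert.pem` should print identical hashes.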
kubectl create -n istio-system secret tls webapp-credential \
--key ch4/certs/3_application/private/webapp.istioinaction.io.key.pem \
--cert ch4/certs/3_application/certs/webapp.istioinaction.io.cert.pem
# Result
secret/webapp-credential created
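The `kubectl create secret tls` command above produces a standard `kubernetes.io/tls` Secret; an equivalent manifest looks roughly like this (base64 payloads elided, left as placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: webapp-credential
  namespace: istio-system
type: kubernetes.io/tls
data:
  tls.crt: <base64 of webapp.istioinaction.io.cert.pem>
  tls.key: <base64 of webapp.istioinaction.io.key.pem>
```

The `credentialName` field in the Gateway below refers to this secret by name; the secret must live in the same namespace as the ingress gateway.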
(1) Review the Gateway resource (coolstore-gw-tls.yaml)
cat ch4/coolstore-gw-tls.yaml
✅ Output
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: coolstore-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80 #1 Allow HTTP traffic
name: http
protocol: HTTP
hosts:
- "webapp.istioinaction.io"
- port:
number: 443 #2 Allow HTTPS traffic
name: https
protocol: HTTPS
tls:
mode: SIMPLE #3 Secure (simple TLS) connection
credentialName: webapp-credential #4 Kubernetes secret holding the TLS certificate and key
hosts:
- "webapp.istioinaction.io"
(2) Apply the updated Gateway
kubectl apply -f ch4/coolstore-gw-tls.yaml -n istioinaction
# Result
gateway.networking.istio.io/coolstore-gateway configured
(3) Verify the Gateway configuration change
The kubernetes://webapp-credential cert chain should now be loaded in ACTIVE state
docker exec -it myk8s-control-plane istioctl proxy-config secret deploy/istio-ingressgateway.istio-system
✅ Output
RESOURCE NAME TYPE STATUS VALID CERT SERIAL NUMBER NOT AFTER NOT BEFORE
default Cert Chain ACTIVE true 123240359314213262205135588074267083810 2025-04-20T12:24:34Z 2025-04-19T12:22:34Z
kubernetes://webapp-credential Cert Chain ACTIVE true 1049106 2041-06-29T12:49:32Z 2021-07-04T12:49:32Z
ROOTCA CA ACTIVE true 148534163974034505686054778049611152978 2035-04-17T12:24:23Z 2025-04-19T12:24:23Z
curl -v -H "Host: webapp.istioinaction.io" https://localhost:30005/api/catalog
✅ Output
* Host localhost:30005 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:30005...
* connect to ::1 port 30005 from ::1 port 55138 failed: Connection refused
* Trying 127.0.0.1:30005...
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: none
* TLS connect error: error:00000000:lib(0)::reason(0)
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to localhost:30005
* closing connection #0
curl: (35) TLS connect error: error:00000000:lib(0)::reason(0)
The handshake fails with SSL_ERROR_SYSCALL: curl validated the server certificate against the default system CA store, which does not include this lab's private CA. For reference, list the CA bundle inside the ingress gateway container:
kubectl exec -it deploy/istio-ingressgateway -n istio-system -- ls -l /etc/ssl/certs
✅ Output
...
lrwxrwxrwx 1 root root 15 Oct 4 2023 a3418fda.0 -> GTS_Root_R4.pem
lrwxrwxrwx 1 root root 13 Oct 4 2023 a94d09e5.0 -> ACCVRAIZ1.pem
lrwxrwxrwx 1 root root 45 Oct 4 2023 aee5f10d.0 -> Entrust.net_Premium_2048_Secure_Server_CA.pem
lrwxrwxrwx 1 root root 31 Oct 4 2023 b0e59380.0 -> GlobalSign_ECC_Root_CA_-_R4.pem
lrwxrwxrwx 1 root root 31 Oct 4 2023 b1159c4c.0 -> DigiCert_Assured_ID_Root_CA.pem
lrwxrwxrwx 1 root root 29 Oct 4 2023 b433981b.0 -> ANF_Secure_Server_Root_CA.pem
lrwxrwxrwx 1 root root 20 Oct 4 2023 b66938e9.0 -> Secure_Global_CA.pem
lrwxrwxrwx 1 root root 23 Oct 4 2023 b727005e.0 -> AffirmTrust_Premium.pem
lrwxrwxrwx 1 root root 37 Oct 4 2023 b7a5b843.0 -> TWCA_Root_Certification_Authority.pem
lrwxrwxrwx 1 root root 39 Oct 4 2023 b81b93f0.0 -> AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem
lrwxrwxrwx 1 root root 49 Oct 4 2023 bf53fb88.0 -> Microsoft_RSA_Root_Certificate_Authority_2017.pem
lrwxrwxrwx 1 root root 22 Oct 4 2023 c01eb047.0 -> UCA_Global_G2_Root.pem
lrwxrwxrwx 1 root root 34 Oct 4 2023 c28a8a30.0 -> D-TRUST_Root_Class_3_CA_2_2009.pem
-rw-r--r-- 1 root root 208567 Oct 4 2023 ca-certificates.crt
lrwxrwxrwx 1 root root 37 Oct 4 2023 ca6e4ad9.0 -> ePKI_Root_Certification_Authority.pem
lrwxrwxrwx 1 root root 44 Oct 4 2023 cbf06781.0 -> Go_Daddy_Root_Certificate_Authority_-_G2.pem
lrwxrwxrwx 1 root root 14 Oct 4 2023 cc450945.0 -> Izenpe.com.pem
lrwxrwxrwx 1 root root 34 Oct 4 2023 cd58d51e.0 -> Security_Communication_RootCA2.pem
lrwxrwxrwx 1 root root 20 Oct 4 2023 cd8c0d63.0 -> AC_RAIZ_FNMT-RCM.pem
lrwxrwxrwx 1 root root 20 Oct 4 2023 ce5e74ef.0 -> Amazon_Root_CA_1.pem
lrwxrwxrwx 1 root root 55 Oct 4 2023 certSIGN_ROOT_CA.pem -> /usr/share/ca-certificates/mozilla/certSIGN_ROOT_CA.crt
lrwxrwxrwx 1 root root 58 Oct 4 2023 certSIGN_Root_CA_G2.pem -> /usr/share/ca-certificates/mozilla/certSIGN_Root_CA_G2.crt
lrwxrwxrwx 1 root root 37 Oct 4 2023 d4dae3dd.0 -> D-TRUST_Root_Class_3_CA_2_EV_2009.pem
lrwxrwxrwx 1 root root 32 Oct 4 2023 d52c538d.0 -> DigiCert_TLS_RSA4096_Root_G5.pem
lrwxrwxrwx 1 root root 38 Oct 4 2023 d6325660.0 -> COMODO_RSA_Certification_Authority.pem
lrwxrwxrwx 1 root root 22 Oct 4 2023 d7e8dc79.0 -> QuoVadis_Root_CA_2.pem
lrwxrwxrwx 1 root root 53 Oct 4 2023 d887a5bb.0 -> Trustwave_Global_ECC_P384_Certification_Authority.pem
lrwxrwxrwx 1 root root 27 Oct 4 2023 dc4d6a89.0 -> GlobalSign_Root_CA_-_R6.pem
lrwxrwxrwx 1 root root 27 Oct 4 2023 dd8e9d41.0 -> DigiCert_Global_Root_G3.pem
lrwxrwxrwx 1 root root 20 Oct 4 2023 de6d66f3.0 -> Amazon_Root_CA_4.pem
lrwxrwxrwx 1 root root 60 Oct 4 2023 e-Szigno_Root_CA_2017.pem -> /usr/share/ca-certificates/mozilla/e-Szigno_Root_CA_2017.crt
lrwxrwxrwx 1 root root 12 Oct 4 2023 e113c810.0 -> Certigna.pem
lrwxrwxrwx 1 root root 25 Oct 4 2023 e18bfb83.0 -> QuoVadis_Root_CA_3_G3.pem
lrwxrwxrwx 1 root root 26 Oct 4 2023 e35234b1.0 -> Certum_Trusted_Root_CA.pem
lrwxrwxrwx 1 root root 25 Oct 4 2023 e36a6752.0 -> Atos_TrustedRoot_2011.pem
lrwxrwxrwx 1 root root 35 Oct 4 2023 e73d606e.0 -> OISTE_WISeKey_Global_Root_GB_CA.pem
lrwxrwxrwx 1 root root 25 Oct 4 2023 e868b802.0 -> e-Szigno_Root_CA_2017.pem
lrwxrwxrwx 1 root root 27 Oct 4 2023 e8de2f56.0 -> Buypass_Class_3_Root_CA.pem
lrwxrwxrwx 1 root root 72 Oct 4 2023 ePKI_Root_Certification_Authority.pem -> /usr/share/ca-certificates/mozilla/ePKI_Root_Certification_Authority.crt
lrwxrwxrwx 1 root root 31 Oct 4 2023 ecccd8db.0 -> HARICA_TLS_ECC_Root_CA_2021.pem
lrwxrwxrwx 1 root root 21 Oct 4 2023 ed858448.0 -> vTrus_ECC_Root_CA.pem
lrwxrwxrwx 1 root root 28 Oct 4 2023 ee64a828.0 -> Comodo_AAA_Services_root.pem
lrwxrwxrwx 1 root root 38 Oct 4 2023 eed8c118.0 -> COMODO_ECC_Certification_Authority.pem
lrwxrwxrwx 1 root root 34 Oct 4 2023 ef954a4e.0 -> IdenTrust_Commercial_Root_CA_1.pem
lrwxrwxrwx 1 root root 62 Oct 4 2023 emSign_ECC_Root_CA_-_C3.pem -> /usr/share/ca-certificates/mozilla/emSign_ECC_Root_CA_-_C3.crt
lrwxrwxrwx 1 root root 62 Oct 4 2023 emSign_ECC_Root_CA_-_G3.pem -> /usr/share/ca-certificates/mozilla/emSign_ECC_Root_CA_-_G3.crt
lrwxrwxrwx 1 root root 58 Oct 4 2023 emSign_Root_CA_-_C1.pem -> /usr/share/ca-certificates/mozilla/emSign_Root_CA_-_C1.crt
lrwxrwxrwx 1 root root 58 Oct 4 2023 emSign_Root_CA_-_G1.pem -> /usr/share/ca-certificates/mozilla/emSign_Root_CA_-_G1.crt
lrwxrwxrwx 1 root root 23 Oct 4 2023 f081611a.0 -> Go_Daddy_Class_2_CA.pem
lrwxrwxrwx 1 root root 47 Oct 4 2023 f0c70a8d.0 -> SSL.com_EV_Root_Certification_Authority_ECC.pem
lrwxrwxrwx 1 root root 44 Oct 4 2023 f249de83.0 -> Trustwave_Global_Certification_Authority.pem
lrwxrwxrwx 1 root root 41 Oct 4 2023 f30dd6ad.0 -> USERTrust_ECC_Certification_Authority.pem
lrwxrwxrwx 1 root root 34 Oct 4 2023 f3377b1b.0 -> Security_Communication_Root_CA.pem
lrwxrwxrwx 1 root root 24 Oct 4 2023 f387163d.0 -> Starfield_Class_2_CA.pem
lrwxrwxrwx 1 root root 18 Oct 4 2023 f39fc864.0 -> SecureTrust_CA.pem
lrwxrwxrwx 1 root root 20 Oct 4 2023 f51bb24c.0 -> Certigna_Root_CA.pem
lrwxrwxrwx 1 root root 20 Oct 4 2023 fa5da96b.0 -> GLOBALTRUST_2020.pem
lrwxrwxrwx 1 root root 41 Oct 4 2023 fc5a8f99.0 -> USERTrust_RSA_Certification_Authority.pem
lrwxrwxrwx 1 root root 20 Oct 4 2023 fd64f3fc.0 -> TunTrust_Root_CA.pem
lrwxrwxrwx 1 root root 19 Oct 4 2023 fe8a2cd8.0 -> SZAFIR_ROOT_CA2.pem
lrwxrwxrwx 1 root root 23 Oct 4 2023 feffd413.0 -> GlobalSign_Root_E46.pem
lrwxrwxrwx 1 root root 49 Oct 4 2023 ff34af3f.0 -> TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem
lrwxrwxrwx 1 root root 56 Oct 4 2023 vTrus_ECC_Root_CA.pem -> /usr/share/ca-certificates/mozilla/vTrus_ECC_Root_CA.crt
lrwxrwxrwx 1 root root 52 Oct 4 2023 vTrus_Root_CA.pem -> /usr/share/ca-certificates/mozilla/vTrus_Root_CA.crt
openssl x509 -in ch4/certs/2_intermediate/certs/ca-chain.cert.pem -noout -text
✅ Output
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1049106 (0x100212)
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=US, ST=Denial, L=Springfield, O=Dis, CN=webapp.istioinaction.io
Validity
Not Before: Jul 4 12:49:29 2021 GMT
Not After : Jun 29 12:49:29 2041 GMT
Subject: C=US, ST=Denial, O=Dis, CN=webapp.istioinaction.io
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (4096 bit)
Modulus:
00:c9:5f:92:1e:92:1d:a4:7c:2b:07:81:44:8b:a5:
5a:09:47:54:25:b8:7a:03:0a:f3:34:84:ed:91:94:
c4:50:ce:8d:c9:ef:30:2b:f9:a7:72:ca:57:f6:02:
63:af:64:e4:9e:8d:54:ea:fa:c2:9a:a4:b5:b3:0e:
ae:2a:5a:12:c4:3a:22:44:2f:a4:73:33:8f:52:10:
11:e7:c6:cf:c7:75:32:cd:f6:b0:e9:43:73:f9:48:
c7:dd:e9:e4:29:2c:82:07:7a:9e:bd:30:4c:7e:16:
12:b8:89:b8:9d:d6:cd:37:98:98:53:65:24:cb:75:
99:37:39:76:39:0a:75:c1:48:58:45:b6:ae:41:0d:
ee:2d:74:f5:a3:5e:71:44:b8:88:f8:54:b2:ba:19:
12:90:88:fd:9d:67:f8:67:ea:d9:db:0e:00:f7:1f:
ac:7b:58:f8:aa:30:27:13:21:ae:e7:1c:39:1a:53:
b7:45:71:50:a3:af:49:b8:85:3e:da:80:93:24:de:
41:b5:07:34:ca:52:52:1f:e6:d9:25:9b:63:99:98:
2f:09:fc:93:2d:95:ef:36:98:d2:6b:78:e8:2e:8e:
c4:d3:53:db:d9:ae:2f:95:82:49:46:c2:4f:77:e8:
36:8d:ba:69:91:b5:09:2a:a7:96:07:b8:32:da:04:
5e:5c:5e:c6:03:c7:77:12:6b:23:68:ff:e8:d5:02:
72:a1:d3:42:d5:34:fc:11:33:ef:37:03:ec:36:ba:
eb:ae:f1:4d:31:21:f4:b3:08:26:83:ca:98:44:10:
12:f8:18:5a:56:f1:ed:0c:17:40:62:e4:56:b5:b0:
86:59:e2:ca:84:30:29:f0:72:dc:2a:f8:ca:15:63:
ad:05:c3:a3:dc:13:0e:86:63:b3:b7:ad:0d:76:85:
5d:02:16:d3:3f:26:14:78:92:40:b4:f8:e5:ce:ef:
3a:44:fd:cb:c7:05:c2:81:0c:84:54:0e:15:b0:02:
00:4f:18:e0:8f:bb:02:f3:54:3a:84:7c:09:5c:82:
4a:de:9b:25:b7:42:84:40:16:56:cf:8c:cb:e6:40:
18:23:4f:eb:8e:d6:d0:ae:1c:34:ef:da:1c:ba:7e:
60:d8:f2:1b:f8:90:3d:02:1b:19:6e:69:12:ff:39:
21:2a:9c:08:64:17:09:39:f7:57:ed:41:5d:cf:28:
f0:b9:d7:19:9b:02:ec:6f:5b:1b:66:24:f4:4f:21:
31:4a:9d:b0:a9:2c:a5:93:34:05:8f:3e:88:6d:aa:
c2:5f:b8:a1:df:3b:e9:31:64:42:8c:a0:65:fe:53:
87:6a:c1:d7:85:a9:99:9e:62:b0:da:18:d2:40:17:
b9:bb:e9
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Subject Key Identifier:
B9:F3:84:08:22:37:2C:D3:75:18:D2:07:C4:6F:4E:67:A9:0C:7D:14
X509v3 Authority Key Identifier:
71:EB:8B:A0:49:F3:B2:13:60:82:4B:10:18:8E:C8:85:73:3E:D0:69
X509v3 Basic Constraints: critical
CA:TRUE, pathlen:0
X509v3 Key Usage: critical
Digital Signature, Certificate Sign, CRL Sign
Signature Algorithm: sha256WithRSAEncryption
Signature Value:
74:53:8d:12:30:17:53:7f:08:03:f0:f6:81:a6:f8:14:21:31:
ab:eb:0e:6c:5a:0e:3b:05:18:96:75:30:3d:45:33:9a:c0:7f:
ea:b5:7f:2b:a6:df:ef:28:66:db:30:25:a0:70:45:c5:a4:38:
17:fa:de:5d:79:91:57:bf:be:bc:fd:7a:77:ba:f4:53:5c:ea:
c6:2c:15:a7:53:84:22:81:52:ae:43:5e:4e:e9:cc:10:21:08:
04:da:e6:c6:77:27:bc:20:8d:cf:fe:b9:e0:c5:43:09:37:d5:
07:29:a1:32:d1:cd:6e:45:5e:3a:ad:22:b6:29:92:e0:81:42:
8e:01:d2:3b:16:49:8d:6f:1e:b5:c0:e3:d9:75:29:91:05:ee:
3f:20:07:9b:d2:06:27:15:e5:4a:78:5b:cd:71:db:55:ba:62:
47:b5:3c:76:6a:8c:ba:12:7e:df:b4:73:88:fa:72:71:36:cd:
ef:93:d6:62:1b:ea:61:bc:56:6d:bd:b8:f5:7b:70:a0:6d:29:
92:33:fc:79:f9:bb:3e:bd:a8:df:a7:b2:3a:e4:bd:5b:fe:33:
1d:9e:26:e5:94:34:5d:e1:39:42:15:77:1c:3d:05:80:0d:28:
72:7e:f7:f7:83:05:33:5d:12:cb:8c:e3:e3:5e:c5:23:3c:89:
fc:c6:7c:c4:33:b1:ef:31:80:6c:6d:e7:c0:eb:18:d6:e5:aa:
23:72:19:45:ba:70:79:5a:cd:ac:57:e2:d2:39:9f:ff:b3:c8:
e0:a5:bd:5d:07:90:e8:c3:88:63:94:2c:26:18:f6:80:9c:86:
62:c6:f0:fc:4c:2b:88:51:e8:a6:79:b0:fd:c0:a1:93:e0:b9:
98:da:7a:65:41:ba:fa:23:2d:58:9b:2b:52:6a:c1:66:a6:06:
2f:af:4d:1b:be:b4:b4:e2:65:37:6b:c5:4c:20:f9:a2:67:8a:
a6:c3:50:5b:ba:9f:65:29:96:d4:c2:82:2b:3e:67:cb:29:3e:
db:5a:f2:d1:5f:56:24:05:06:4d:10:27:83:19:46:da:49:bc:
ab:38:a5:cb:d8:89:0f:ee:57:57:9a:0e:c5:d5:11:bd:d8:1a:
68:97:39:c0:7e:6c:56:ec:90:72:f9:78:9a:a5:c6:0b:b3:b9:
ab:bc:10:43:4a:e6:09:02:f3:10:ca:57:9c:a5:c3:60:49:c4:
ac:8a:23:f2:03:56:96:11:15:a9:ae:79:64:b6:fc:cf:99:db:
9e:32:83:b1:65:cd:33:44:c5:d5:92:fa:9a:0f:1d:50:5a:12:
f0:d3:ed:e0:0b:c6:c3:13:02:13:c3:57:ec:4c:fe:b8:85:bd:
13:67:e0:e8:0c:36:e4:0e
curl -v -H "Host: webapp.istioinaction.io" https://localhost:30005/api/catalog \
--cacert ch4/certs/2_intermediate/certs/ca-chain.cert.pem
✅ Output
* Host localhost:30005 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:30005...
* connect to ::1 port 30005 from ::1 port 37470 failed: Connection refused
* Trying 127.0.0.1:30005...
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CAfile: ch4/certs/2_intermediate/certs/ca-chain.cert.pem
* CApath: none
* TLS connect error: error:00000000:lib(0)::reason(0)
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to localhost:30005
* closing connection #0
curl: (35) TLS connect error: error:00000000:lib(0)::reason(0)
SSL_ERROR_SYSCALL again, even with --cacert: the request targets localhost, so the SNI that curl sends does not match the webapp.istioinaction.io host the gateway serves this certificate for. Add a hosts entry so the request carries the expected hostname:
echo "127.0.0.1 webapp.istioinaction.io" | sudo tee -a /etc/hosts
cat /etc/hosts | tail -n 1
✅ Output
127.0.0.1 webapp.istioinaction.io
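Editing /etc/hosts is not the only option: curl's `--resolve` flag pins a hostname to an IP for a single invocation, which produces the same SNI without touching system files. A sketch demonstrated against a throwaway local server (port 18080 and the demo hostname are assumptions, not part of the lab):

```shell
# Start a throwaway local HTTP server to stand in for the gateway.
python3 -m http.server 18080 --bind 127.0.0.1 >/dev/null 2>&1 &
srv_pid=$!
sleep 1

# --resolve host:port:addr makes curl connect to addr while sending
# the hostname in Host/SNI, just as the /etc/hosts entry would.
code=$(curl -s -o /dev/null -w '%{http_code}' \
  --resolve demo.istioinaction.io:18080:127.0.0.1 \
  http://demo.istioinaction.io:18080/)

kill $srv_pid
echo "status: $code"
```

Against the lab gateway the equivalent would be `curl --resolve webapp.istioinaction.io:30005:127.0.0.1 https://webapp.istioinaction.io:30005/api/catalog --cacert ...`.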
curl -v https://webapp.istioinaction.io:30005/api/catalog \
--cacert ch4/certs/2_intermediate/certs/ca-chain.cert.pem
✅ Output
* Host webapp.istioinaction.io:30005 was resolved.
* IPv6: (none)
* IPv4: 127.0.0.1
* Trying 127.0.0.1:30005...
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CAfile: ch4/certs/2_intermediate/certs/ca-chain.cert.pem
* CApath: none
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 / x25519 / RSASSA-PSS
* ALPN: server accepted h2
* Server certificate:
* subject: C=US; ST=Denial; L=Springfield; O=Dis; CN=webapp.istioinaction.io
* start date: Jul 4 12:49:32 2021 GMT
* expire date: Jun 29 12:49:32 2041 GMT
* common name: webapp.istioinaction.io (matched)
* issuer: C=US; ST=Denial; O=Dis; CN=webapp.istioinaction.io
* SSL certificate verify ok.
* Certificate level 0: Public key type RSA (2048/112 Bits/secBits), signed using sha256WithRSAEncryption
* Certificate level 1: Public key type RSA (4096/152 Bits/secBits), signed using sha256WithRSAEncryption
* Connected to webapp.istioinaction.io (127.0.0.1) port 30005
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://webapp.istioinaction.io:30005/api/catalog
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: webapp.istioinaction.io:30005]
* [HTTP/2] [1] [:path: /api/catalog]
* [HTTP/2] [1] [user-agent: curl/8.13.0]
* [HTTP/2] [1] [accept: */*]
> GET /api/catalog HTTP/2
> Host: webapp.istioinaction.io:30005
> User-Agent: curl/8.13.0
> Accept: */*
>
* Request completely sent off
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
< HTTP/2 200
< content-length: 357
< content-type: application/json; charset=utf-8
< date: Sat, 19 Apr 2025 14:29:37 GMT
< x-envoy-upstream-service-time: 12
< server: istio-envoy
<
* Connection #0 to host webapp.istioinaction.io left intact
[{"id":1,"color":"amber","department":"Eyewear","name":"Elinor Glasses","price":"282.00"},{"id":2,"color":"cyan","department":"Clothing","name":"Atlas Shirt","price":"127.00"},{"id":3,"color":"teal","department":"Clothing","name":"Small Metal Shoes","price":"232.00"},{"id":4,"color":"red","department":"Watches","name":"Red Dragon Watch","price":"232.00"}]
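The single-line JSON body is easier to read through a formatter; a small sketch piping one item (copied from the response above) through Python's `json.tool`, which avoids a `jq` dependency:

```shell
# Pretty-print a catalog item; the sample is copied from the response above.
item='{"id":1,"color":"amber","department":"Eyewear","name":"Elinor Glasses","price":"282.00"}'
pretty=$(echo "$item" | python3 -m json.tool)
echo "$pretty"
```

In practice you would simply append `| python3 -m json.tool` (or `| jq .`) to the curl command.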
Opening https://webapp.istioinaction.io:30005 in a browser also succeeds


curl -v http://webapp.istioinaction.io:30000/api/catalog
✅ Output
* Host webapp.istioinaction.io:30000 was resolved.
* IPv6: (none)
* IPv4: 127.0.0.1
* Trying 127.0.0.1:30000...
* Connected to webapp.istioinaction.io (127.0.0.1) port 30000
* using HTTP/1.x
> GET /api/catalog HTTP/1.1
> Host: webapp.istioinaction.io:30000
> User-Agent: curl/8.13.0
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 200 OK
< content-length: 357
< content-type: application/json; charset=utf-8
< date: Sat, 19 Apr 2025 14:33:39 GMT
< x-envoy-upstream-service-time: 10
< server: istio-envoy
<
* Connection #0 to host webapp.istioinaction.io left intact
[{"id":1,"color":"amber","department":"Eyewear","name":"Elinor Glasses","price":"282.00"},{"id":2,"color":"cyan","department":"Clothing","name":"Atlas Shirt","price":"127.00"},{"id":3,"color":"teal","department":"Clothing","name":"Small Metal Shoes","price":"232.00"},{"id":4,"color":"red","department":"Watches","name":"Red Dragon Watch","price":"232.00"}]
The kubernetes://webapp-credential secret chain is loaded in ACTIVE state
docker exec -it myk8s-control-plane istioctl proxy-config secret deploy/istio-ingressgateway.istio-system
✅ Output
RESOURCE NAME TYPE STATUS VALID CERT SERIAL NUMBER NOT AFTER NOT BEFORE
default Cert Chain ACTIVE true 123240359314213262205135588074267083810 2025-04-20T12:24:34Z 2025-04-19T12:22:34Z
kubernetes://webapp-credential Cert Chain ACTIVE true 1049106 2041-06-29T12:49:32Z 2021-07-04T12:49:32Z
ROOTCA CA ACTIVE true 148534163974034505686054778049611152978 2035-04-17T12:24:23Z 2025-04-19T12:24:23Z
Force-redirect all incoming HTTP (port 80) requests to HTTPS (port 443)
cat ch4/coolstore-gw-tls-redirect.yaml
✅ Output
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: coolstore-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "webapp.istioinaction.io"
tls:
httpsRedirect: true
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
credentialName: webapp-credential
hosts:
- "webapp.istioinaction.io"
kubectl apply -f ch4/coolstore-gw-tls-redirect.yaml
# Result
gateway.networking.istio.io/coolstore-gateway created
The HTTP request now receives a 301 redirect to the same path over HTTPS
curl -v http://webapp.istioinaction.io:30000/api/catalog
✅ Output
* Host webapp.istioinaction.io:30000 was resolved.
* IPv6: (none)
* IPv4: 127.0.0.1
* Trying 127.0.0.1:30000...
* Connected to webapp.istioinaction.io (127.0.0.1) port 30000
* using HTTP/1.x
> GET /api/catalog HTTP/1.1
> Host: webapp.istioinaction.io:30000
> User-Agent: curl/8.13.0
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 301 Moved Permanently
< location: https://webapp.istioinaction.io:30000/api/catalog
< date: Sat, 19 Apr 2025 14:38:56 GMT
< server: istio-envoy
< content-length: 0
<
* Connection #0 to host webapp.istioinaction.io left intact
openssl x509 -in ch4/certs/2_intermediate/certs/ca-chain.cert.pem -noout -text
✅ Output
(identical to the earlier ca-chain.cert.pem inspection above: a 4096-bit certificate with CA:TRUE, pathlen:0)
kubectl create -n istio-system secret \
generic webapp-credential-mtls --from-file=tls.key=\
ch4/certs/3_application/private/webapp.istioinaction.io.key.pem \
--from-file=tls.crt=\
ch4/certs/3_application/certs/webapp.istioinaction.io.cert.pem \
--from-file=ca.crt=\
ch4/certs/2_intermediate/certs/ca-chain.cert.pem
# Result
secret/webapp-credential-mtls created
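For MUTUAL mode the secret is a generic one with `tls.crt`, `tls.key`, and `ca.crt` entries; an equivalent manifest looks roughly like this (base64 payloads elided). Note in the istioctl output further below that Istio surfaces the `ca.crt` entry as a separate `kubernetes://webapp-credential-mtls-cacert` resource:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: webapp-credential-mtls
  namespace: istio-system
type: Opaque
data:
  tls.crt: <base64 of webapp.istioinaction.io.cert.pem>
  tls.key: <base64 of webapp.istioinaction.io.key.pem>
  ca.crt: <base64 of ca-chain.cert.pem>
```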
cat ch4/coolstore-gw-mtls.yaml
✅ Output
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: coolstore-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "webapp.istioinaction.io"
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: MUTUAL
credentialName: webapp-credential-mtls
hosts:
- "webapp.istioinaction.io"
kubectl apply -f ch4/coolstore-gw-mtls.yaml -n istioinaction
# Result
gateway.networking.istio.io/coolstore-gateway configured
docker exec -it myk8s-control-plane istioctl proxy-config secret deploy/istio-ingressgateway.istio-system
✅ Output
RESOURCE NAME TYPE STATUS VALID CERT SERIAL NUMBER NOT AFTER NOT BEFORE
kubernetes://webapp-credential-mtls Cert Chain ACTIVE true 1049106 2041-06-29T12:49:32Z 2021-07-04T12:49:32Z
default Cert Chain ACTIVE true 123240359314213262205135588074267083810 2025-04-20T12:24:34Z 2025-04-19T12:22:34Z
kubernetes://webapp-credential Cert Chain ACTIVE true 1049106 2041-06-29T12:49:32Z 2021-07-04T12:49:32Z
kubernetes://webapp-credential-mtls-cacert CA ACTIVE true 1049106 2041-06-29T12:49:29Z 2021-07-04T12:49:29Z
ROOTCA CA ACTIVE true 148534163974034505686054778049611152978 2035-04-17T12:24:23Z 2025-04-19T12:24:23Z
curl -v https://webapp.istioinaction.io:30005/api/catalog \
--cacert ch4/certs/2_intermediate/certs/ca-chain.cert.pem
✅ Output
* Host webapp.istioinaction.io:30005 was resolved.
* IPv6: (none)
* IPv4: 127.0.0.1
* Trying 127.0.0.1:30005...
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CAfile: ch4/certs/2_intermediate/certs/ca-chain.cert.pem
* CApath: none
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Request CERT (13):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 / x25519 / RSASSA-PSS
* ALPN: server accepted h2
* Server certificate:
* subject: C=US; ST=Denial; L=Springfield; O=Dis; CN=webapp.istioinaction.io
* start date: Jul 4 12:49:32 2021 GMT
* expire date: Jun 29 12:49:32 2041 GMT
* common name: webapp.istioinaction.io (matched)
* issuer: C=US; ST=Denial; O=Dis; CN=webapp.istioinaction.io
* SSL certificate verify ok.
* Certificate level 0: Public key type RSA (2048/112 Bits/secBits), signed using sha256WithRSAEncryption
* Certificate level 1: Public key type RSA (4096/152 Bits/secBits), signed using sha256WithRSAEncryption
* Connected to webapp.istioinaction.io (127.0.0.1) port 30005
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://webapp.istioinaction.io:30005/api/catalog
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: webapp.istioinaction.io:30005]
* [HTTP/2] [1] [:path: /api/catalog]
* [HTTP/2] [1] [user-agent: curl/8.13.0]
* [HTTP/2] [1] [accept: */*]
> GET /api/catalog HTTP/2
> Host: webapp.istioinaction.io:30005
> User-Agent: curl/8.13.0
> Accept: */*
>
* TLSv1.3 (IN), TLS alert, unknown (628):
* OpenSSL SSL_read: OpenSSL/3.5.0: error:0A00045C:SSL routines::tlsv13 alert certificate required, errno 0
* Failed receiving HTTP2 data: 56(Failure when receiving data from the peer)
* Connection #0 to host webapp.istioinaction.io left intact
curl: (56) OpenSSL SSL_read: OpenSSL/3.5.0: error:0A00045C:SSL routines::tlsv13 alert certificate required, errno 0
Checking in a web browser likewise fails, since no client certificate is presented:
https://webapp.istioinaction.io:30005

curl -v https://webapp.istioinaction.io:30005/api/catalog \
--cacert ch4/certs/2_intermediate/certs/ca-chain.cert.pem \
--cert ch4/certs/4_client/certs/webapp.istioinaction.io.cert.pem \
--key ch4/certs/4_client/private/webapp.istioinaction.io.key.pem
✅ Output
* Host webapp.istioinaction.io:30005 was resolved.
* IPv6: (none)
* IPv4: 127.0.0.1
* Trying 127.0.0.1:30005...
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CAfile: ch4/certs/2_intermediate/certs/ca-chain.cert.pem
* CApath: none
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Request CERT (13):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS handshake, CERT verify (15):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 / x25519 / RSASSA-PSS
* ALPN: server accepted h2
* Server certificate:
* subject: C=US; ST=Denial; L=Springfield; O=Dis; CN=webapp.istioinaction.io
* start date: Jul 4 12:49:32 2021 GMT
* expire date: Jun 29 12:49:32 2041 GMT
* common name: webapp.istioinaction.io (matched)
* issuer: C=US; ST=Denial; O=Dis; CN=webapp.istioinaction.io
* SSL certificate verify ok.
* Certificate level 0: Public key type RSA (2048/112 Bits/secBits), signed using sha256WithRSAEncryption
* Certificate level 1: Public key type RSA (4096/152 Bits/secBits), signed using sha256WithRSAEncryption
* Connected to webapp.istioinaction.io (127.0.0.1) port 30005
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://webapp.istioinaction.io:30005/api/catalog
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: webapp.istioinaction.io:30005]
* [HTTP/2] [1] [:path: /api/catalog]
* [HTTP/2] [1] [user-agent: curl/8.13.0]
* [HTTP/2] [1] [accept: */*]
> GET /api/catalog HTTP/2
> Host: webapp.istioinaction.io:30005
> User-Agent: curl/8.13.0
> Accept: */*
>
* Request completely sent off
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
< HTTP/2 200
< content-length: 357
< content-type: application/json; charset=utf-8
< date: Sat, 19 Apr 2025 14:48:49 GMT
< x-envoy-upstream-service-time: 5
< server: istio-envoy
<
* Connection #0 to host webapp.istioinaction.io left intact
[{"id":1,"color":"amber","department":"Eyewear","name":"Elinor Glasses","price":"282.00"},{"id":2,"color":"cyan","department":"Clothing","name":"Atlas Shirt","price":"127.00"},{"id":3,"color":"teal","department":"Clothing","name":"Small Metal Shoes","price":"232.00"},{"id":4,"color":"red","department":"Watches","name":"Red Dragon Watch","price":"232.00"}]%
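The catalog payload returned above can be sanity-checked programmatically. A minimal sketch in Python, with the response body inlined for illustration (in practice you would capture curl's output):

```python
import json

# Response body from /api/catalog above, inlined for illustration.
body = """[{"id":1,"color":"amber","department":"Eyewear","name":"Elinor Glasses","price":"282.00"},
{"id":2,"color":"cyan","department":"Clothing","name":"Atlas Shirt","price":"127.00"},
{"id":3,"color":"teal","department":"Clothing","name":"Small Metal Shoes","price":"232.00"},
{"id":4,"color":"red","department":"Watches","name":"Red Dragon Watch","price":"232.00"}]"""

items = json.loads(body)
assert len(items) == 4                                            # four catalog entries
assert all({"id", "name", "price"} <= item.keys() for item in items)
print([item["name"] for item in items])
```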
cat ch4/coolstore-gw-multi-tls.yaml
✅ Output
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: coolstore-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https-webapp
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: webapp-credential
    hosts:
    - "webapp.istioinaction.io"
  - port:
      number: 443
      name: https-catalog
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: catalog-credential
    hosts:
    - "catalog.istioinaction.io"
kubectl create -n istio-system secret tls catalog-credential \
--key ch4/certs2/3_application/private/catalog.istioinaction.io.key.pem \
--cert ch4/certs2/3_application/certs/catalog.istioinaction.io.cert.pem
# Result
secret/catalog-credential created
kubectl apply -f ch4/coolstore-gw-multi-tls.yaml -n istioinaction
# Result
gateway.networking.istio.io/coolstore-gateway configured
cat ch4/catalog-vs.yaml
✅ Output
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog-vs-from-gw
spec:
  hosts:
  - "catalog.istioinaction.io"
  gateways:
  - coolstore-gateway
  http:
  - route:
    - destination:
        host: catalog
        port:
          number: 80
kubectl apply -f ch4/catalog-vs.yaml -n istioinaction
# Result
virtualservice.networking.istio.io/catalog-vs-from-gw created
echo "127.0.0.1 catalog.istioinaction.io" | sudo tee -a /etc/hosts
cat /etc/hosts | tail -n 2
✅ Output
127.0.0.1 webapp.istioinaction.io
127.0.0.1 catalog.istioinaction.io
curl -v https://webapp.istioinaction.io:30005/api/catalog \
--cacert ch4/certs/2_intermediate/certs/ca-chain.cert.pem
✅ Output
* Host webapp.istioinaction.io:30005 was resolved.
* IPv6: (none)
* IPv4: 127.0.0.1
* Trying 127.0.0.1:30005...
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CAfile: ch4/certs/2_intermediate/certs/ca-chain.cert.pem
* CApath: none
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 / x25519 / RSASSA-PSS
* ALPN: server accepted h2
* Server certificate:
* subject: C=US; ST=Denial; L=Springfield; O=Dis; CN=webapp.istioinaction.io
* start date: Jul 4 12:49:32 2021 GMT
* expire date: Jun 29 12:49:32 2041 GMT
* common name: webapp.istioinaction.io (matched)
* issuer: C=US; ST=Denial; O=Dis; CN=webapp.istioinaction.io
* SSL certificate verify ok.
* Certificate level 0: Public key type RSA (2048/112 Bits/secBits), signed using sha256WithRSAEncryption
* Certificate level 1: Public key type RSA (4096/152 Bits/secBits), signed using sha256WithRSAEncryption
* Connected to webapp.istioinaction.io (127.0.0.1) port 30005
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://webapp.istioinaction.io:30005/api/catalog
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: webapp.istioinaction.io:30005]
* [HTTP/2] [1] [:path: /api/catalog]
* [HTTP/2] [1] [user-agent: curl/8.13.0]
* [HTTP/2] [1] [accept: */*]
> GET /api/catalog HTTP/2
> Host: webapp.istioinaction.io:30005
> User-Agent: curl/8.13.0
> Accept: */*
>
* Request completely sent off
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
< HTTP/2 200
< content-length: 357
< content-type: application/json; charset=utf-8
< date: Sat, 19 Apr 2025 14:56:54 GMT
< x-envoy-upstream-service-time: 6
< server: istio-envoy
<
* Connection #0 to host webapp.istioinaction.io left intact
[{"id":1,"color":"amber","department":"Eyewear","name":"Elinor Glasses","price":"282.00"},{"id":2,"color":"cyan","department":"Clothing","name":"Atlas Shirt","price":"127.00"},{"id":3,"color":"teal","department":"Clothing","name":"Small Metal Shoes","price":"232.00"},{"id":4,"color":"red","department":"Watches","name":"Red Dragon Watch","price":"232.00"}]
curl -v https://catalog.istioinaction.io:30005/items \
--cacert ch4/certs2/2_intermediate/certs/ca-chain.cert.pem
✅ Output
* Host catalog.istioinaction.io:30005 was resolved.
* IPv6: (none)
* IPv4: 127.0.0.1
* Trying 127.0.0.1:30005...
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CAfile: ch4/certs2/2_intermediate/certs/ca-chain.cert.pem
* CApath: none
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 / x25519 / RSASSA-PSS
* ALPN: server accepted h2
* Server certificate:
* subject: C=US; ST=Denial; L=Springfield; O=Dis; CN=catalog.istioinaction.io
* start date: Jul 4 13:30:38 2021 GMT
* expire date: Jun 29 13:30:38 2041 GMT
* common name: catalog.istioinaction.io (matched)
* issuer: C=US; ST=Denial; O=Dis; CN=catalog.istioinaction.io
* SSL certificate verify ok.
* Certificate level 0: Public key type RSA (2048/112 Bits/secBits), signed using sha256WithRSAEncryption
* Certificate level 1: Public key type RSA (4096/152 Bits/secBits), signed using sha256WithRSAEncryption
* Connected to catalog.istioinaction.io (127.0.0.1) port 30005
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://catalog.istioinaction.io:30005/items
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: catalog.istioinaction.io:30005]
* [HTTP/2] [1] [:path: /items]
* [HTTP/2] [1] [user-agent: curl/8.13.0]
* [HTTP/2] [1] [accept: */*]
> GET /items HTTP/2
> Host: catalog.istioinaction.io:30005
> User-Agent: curl/8.13.0
> Accept: */*
>
* Request completely sent off
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
< HTTP/2 200
< x-powered-by: Express
< vary: Origin, Accept-Encoding
< access-control-allow-credentials: true
< cache-control: no-cache
< pragma: no-cache
< expires: -1
< content-type: application/json; charset=utf-8
< content-length: 502
< etag: W/"1f6-ih2h+hDQ0yLLcKIlBvwkWbyQGK4"
< date: Sat, 19 Apr 2025 14:57:33 GMT
< x-envoy-upstream-service-time: 10
< server: istio-envoy
<
[
  {
    "id": 1,
    "color": "amber",
    "department": "Eyewear",
    "name": "Elinor Glasses",
    "price": "282.00"
  },
  {
    "id": 2,
    "color": "cyan",
    "department": "Clothing",
    "name": "Atlas Shirt",
    "price": "127.00"
  },
  {
    "id": 3,
    "color": "teal",
    "department": "Clothing",
    "name": "Small Metal Shoes",
    "price": "232.00"
  },
  {
    "id": 4,
    "color": "red",
    "department": "Watches",
    "name": "Red Dragon Watch",
    "price": "232.00"
  }
* Connection #0 to host catalog.istioinaction.io left intact
]
cat ch4/echo.yaml
✅ Output
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcp-echo-deployment
  labels:
    app: tcp-echo
    system: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tcp-echo
  template:
    metadata:
      labels:
        app: tcp-echo
        system: example
    spec:
      containers:
      - name: tcp-echo-container
        image: cjimti/go-echo:latest
        imagePullPolicy: IfNotPresent
        env:
        - name: TCP_PORT
          value: "2701"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        ports:
        - name: tcp-echo-port
          containerPort: 2701
---
apiVersion: v1
kind: Service
metadata:
  name: "tcp-echo-service"
  labels:
    app: tcp-echo
    system: example
spec:
  selector:
    app: "tcp-echo"
  ports:
  - protocol: "TCP"
    port: 2701
    targetPort: 2701
kubectl apply -f ch4/echo.yaml -n istioinaction
# Result
deployment.apps/tcp-echo-deployment created
service/tcp-echo-service created
kubectl get pod -n istioinaction
✅ Output
NAME                                   READY   STATUS    RESTARTS   AGE
catalog-6cf4b97d-zcn65                 2/2     Running   0          69m
tcp-echo-deployment-584f6d6d6b-w65wd   2/2     Running   0          17s
webapp-7685bcb84-2kjbl                 2/2     Running   0          69m
(1) Edit the istio-ingressgateway Service
kubectl edit svc istio-ingressgateway -n istio-system
...
  - name: tcp
    nodePort: 30006
    port: 31400
    protocol: TCP
    targetPort: 31400
...
✅ Output
service/istio-ingressgateway edited
(2) Verify
kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.ports[?(@.name=="tcp")]}'
✅ Output
{"name":"tcp","nodePort":30006,"port":31400,"protocol":"TCP","targetPort":31400}
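The jsonpath expression above filters the Service's ports list by name. The same selection can be sketched in plain Python; note that only the `tcp` entry below mirrors the output above, while the `https` entry is an illustrative placeholder:

```python
# Ports list as it might appear in the istio-ingressgateway Service spec.
# Only the "tcp" entry is taken from the output above; the other is illustrative.
ports = [
    {"name": "https", "nodePort": 30005, "port": 443, "protocol": "TCP", "targetPort": 8443},
    {"name": "tcp", "nodePort": 30006, "port": 31400, "protocol": "TCP", "targetPort": 31400},
]

# Equivalent of jsonpath '{.spec.ports[?(@.name=="tcp")]}'
tcp = next(p for p in ports if p["name"] == "tcp")
print(tcp["nodePort"], tcp["port"])
```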
(1) Gateway definition (gateway-tcp.yaml)
cat ch4/gateway-tcp.yaml
✅ Output
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: echo-tcp-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp-echo
      protocol: TCP
    hosts:
    - "*"
(2) Apply the Gateway
kubectl apply -f ch4/gateway-tcp.yaml -n istioinaction
# Result
gateway.networking.istio.io/echo-tcp-gateway created
(3) Verify echo-tcp-gateway
kubectl get gw -n istioinaction
✅ Output
NAME                AGE
coolstore-gateway   105m
echo-tcp-gateway    23s
(1) VirtualService definition (echo-vs.yaml)
cat ch4/echo-vs.yaml
✅ Output
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-echo-vs-from-gw
spec:
  hosts:
  - "*"
  gateways:
  - echo-tcp-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo-service
        port:
          number: 2701
(2) Apply
kubectl apply -f ch4/echo-vs.yaml -n istioinaction
# Result
virtualservice.networking.istio.io/tcp-echo-vs-from-gw created
(3) Verify tcp-echo-vs-from-gw
kubectl get vs -n istioinaction
✅ Output
NAME                  GATEWAYS                HOSTS                          AGE
catalog-vs-from-gw    ["coolstore-gateway"]   ["catalog.istioinaction.io"]   32m
tcp-echo-vs-from-gw   ["echo-tcp-gateway"]    ["*"]                          25s
webapp-vs-from-gw     ["coolstore-gateway"]   ["webapp.istioinaction.io"]    97m
(1) Connect
telnet localhost 30006
✅ Output
Trying ::1...
Connection failed: Connection refused
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Welcome, you are connected to node myk8s-control-plane.
Running on Pod tcp-echo-deployment-584f6d6d6b-w65wd.
In namespace istioinaction.
With IP address 10.10.0.15.
Service default.
hello istio! # <-- type here
hello istio! # <-- echo here
(2) Disconnect
# To exit telnet: press Ctrl + ] to escape the session, then type quit
telnet> quit
Connection closed.
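The telnet session above is just a raw TCP round trip through the gateway. A self-contained sketch with a stand-in echo server (the in-cluster tcp-echo-service echoes bytes back the same way, plus its greeting banner):

```python
import socket
import threading

def echo_once(server: socket.socket) -> None:
    """Stand-in for tcp-echo-service: echo one message back to one client."""
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))

server = socket.socket()
server.bind(("127.0.0.1", 0))          # ephemeral port instead of nodePort 30006
server.listen(1)
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

# Client side: the equivalent of typing "hello istio!" into the telnet session.
with socket.create_connection(server.getsockname()) as client:
    client.sendall(b"hello istio!\n")
    reply = client.recv(1024)

print(reply.decode(), end="")
```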
(1) Inspect the simple-tls-service-1 manifest
cat ch4/sni/simple-tls-service-1.yaml
✅ Output
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: simple-tls-service-1
  name: simple-tls-service-1
spec:
  ports:
  - name: https
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: simple-tls-service-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: simple-tls-service-1
  name: simple-tls-service-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simple-tls-service-1
  template:
    metadata:
      labels:
        app: simple-tls-service-1
    spec:
      containers:
      - env:
        - name: "LISTEN_ADDR"
          value: "0.0.0.0:8080"
        - name: "SERVER_TYPE"
          value: "http"
        - name: "TLS_CERT_LOCATION"
          value: "/etc/certs/tls.crt"
        - name: "TLS_KEY_LOCATION"
          value: "/etc/certs/tls.key"
        - name: "NAME"
          value: "simple-tls-service-1"
        - name: "MESSAGE"
          value: "Hello from simple-tls-service-1!!!"
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: nicholasjackson/fake-service:v0.14.1
        imagePullPolicy: IfNotPresent
        name: simple-tls-service
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        securityContext:
          privileged: false
        volumeMounts:
        - mountPath: /etc/certs
          name: tls-certs
      volumes:
      - name: tls-certs
        secret:
          secretName: simple-sni-1.istioinaction.io
---
apiVersion: v1
data:
tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZjVENDQTFtZ0F3SUJBZ0lERUFJU01BMEdDU3FHU0liM0RRRUJDd1VBTUZReEN6QUpCZ05WQkFZVEFsVlQKTVE4d0RRWURWUVFJREFaRVpXNXBZV3d4RERBS0JnTlZCQW9NQTBScGN6RW1NQ1FHQTFVRUF3d2RjMmx0Y0d4bApMWE51YVMweExtbHpkR2x2YVc1aFkzUnBiMjR1YVc4d0hoY05NakF3T1RBek1UUTBOelUxV2hjTk5EQXdPREk1Ck1UUTBOelUxV2pCcU1Rc3dDUVlEVlFRR0V3SlZVekVQTUEwR0ExVUVDQXdHUkdWdWFXRnNNUlF3RWdZRFZRUUgKREF0VGNISnBibWRtYVdWc1pERU1NQW9HQTFVRUNnd0RSR2x6TVNZd0pBWURWUVFEREIxemFXMXdiR1V0YzI1cApMVEV1YVhOMGFXOXBibUZqZEdsdmJpNXBiekNDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DCmdnRUJBUEVwNWI2L255VmtWakttaDRsUVg0cSs2YWNxRWVPVG1aUkIvYzIxZ1hWVWFvQ1hXY3ExMEYzbVVqSTMKQStYOENKOFZHdkNvWTBZSnVIR2h2cmV2cUNrUmdVYlBMVXM1ZFd1eUpxemZiT01HN3B6UVJvSEZRTm5IUjlkMAowTS9JVjNOOVhpVHJrQ1B5U3BYMFVPL1VtZzZRd2pZRmllWDFnK2M1RERlRjN1amtmQ2dCd3hNd1cxVFpIV28zCk1paTdudHVsZEQzS2k4clovdFFXQU02OXNCWVN1eWxaQ1ZQODNhL2VPYTZ3aVQrcE5sVzV2K3pUVGFKeURDV2sKRnFPYVhVeExiZ0ttOXpPOXZ4aVo0VlJlNnI0L0ZRdFNBZXdDWWlXdkRQS0FOVFRiUE96NmNBS2VPN1ZuQVk4ZwozMmtyK21VbVErL1gxNkx4ZVN2ck52Y3QzQU1DQXdFQUFhT0NBVFF3Z2dFd01Ba0dBMVVkRXdRQ01BQXdFUVlKCllJWklBWWI0UWdFQkJBUURBZ1pBTURNR0NXQ0dTQUdHK0VJQkRRUW1GaVJQY0dWdVUxTk1JRWRsYm1WeVlYUmwKWkNCVFpYSjJaWElnUTJWeWRHbG1hV05oZEdVd0hRWURWUjBPQkJZRUZOckFaOGFsWFNVazludDk0UklSK25QLwpCSzZTTUlHV0JnTlZIU01FZ1k0d2dZdUFGS2tLL21HRkhJUisrbjgvY1dqaEtnalBTamY1b1c2a2JEQnFNUXN3CkNRWURWUVFHRXdKVlV6RVBNQTBHQTFVRUNBd0dSR1Z1YVdGc01SUXdFZ1lEVlFRSERBdFRjSEpwYm1kbWFXVnMKWkRFTU1Bb0dBMVVFQ2d3RFJHbHpNU1l3SkFZRFZRUUREQjF6YVcxd2JHVXRjMjVwTFRFdWFYTjBhVzlwYm1GagpkR2x2Ymk1cGI0SURFQUlTTUE0R0ExVWREd0VCL3dRRUF3SUZvREFUQmdOVkhTVUVEREFLQmdnckJnRUZCUWNECkFUQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FnRUFuU1lTN3piS1Zma2JKQWJiWFdVcUxtOURzamp4WDdEaWdqVi8KckN6eHk5b0E1VmV6MlBlRlZnNmFnbEorVVBSb0dyZksraXNiMkpyakk5elBjVFhPVHJtK0VJNXZUaHZqcVFmRApoZmc3cmxYZmxjYmZkRnYxSnYxRVlLdWhIRkdSaWVGYXJKVkFMeDY5bGhYMzJobFNUY2pKQXhUajVHNDV2eFJ1Clg0bUlvckpBTmt5R29vdnRtRGVYZkZnOVdwZCtxWEVhSHlDL3IxUDNwSzBoUlE5b1FqR2RuaCs1NzdzWHBUaEEKbVdXbzB4K2syRGN5cTZDQzEzSittVGovQkJRcTZHS0lGZ2l
LR0dJcVJhU2FqZHZRZmVlZXc1RFFjVVljZWYzSgpLYkdqZ2h6WDYzdGdzLzdyV1NRZzJYd2s1WHlROEFBQTFoMmVIL25xc1doWmRKamtaeFMwaVFwNVgveTZaMHdhClcydzRSeTdNdVBJRjgwOHBDOUhacHg0ejhNRStqNWF3RGJZMjdPK1NsY3FJaWNOblR6M3d4N2RhbEE4VXI3L2IKU2dXZ0tVL045RzFSdlJLS0dOR3ZOV0wrYVErNkg4dHhsSXhmdGRoSWFCOHV6MDV6VE1wR2UvOHMyQllpTUsrbApRcTE2ZU1iUEt6TkwxZXpBeGVlaGN5cXVHSEgvbkx5TUludmxkWDRXQWxSWmFSVklDL1NnMlZOSFlUajNNc3VMCndianRZVW9pSXRkcXhGQjB1TVdmL3pCQjJCM0cwdEMzVEtoa2lHNTV3U1pFU0I2M2pXZm9nWlhxVGNCVEFQbUsKbWFUOE8zZ2ZSNndEZVU0aFdMeGllSElaQmorUHdsRTVRbTloYWgxeFlPak0yc0MweXVWSzBNeDBPYmtTeCswZQpzZUVudDM0PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBOFNubHZyK2ZKV1JXTXFhSGlWQmZpcjdwcHlvUjQ1T1psRUg5emJXQmRWUnFnSmRaCnlyWFFYZVpTTWpjRDVmd0lueFVhOEtoalJnbTRjYUcrdDYrb0tSR0JSczh0U3psMWE3SW1yTjlzNHdidW5OQkcKZ2NWQTJjZEgxM1RRejhoWGMzMWVKT3VRSS9KS2xmUlE3OVNhRHBEQ05nV0o1ZldENXprTU40WGU2T1I4S0FIRApFekJiVk5rZGFqY3lLTHVlMjZWMFBjcUx5dG4rMUJZQXpyMndGaEs3S1ZrSlUvemRyOTQ1cnJDSlA2azJWYm0vCjdOTk5vbklNSmFRV281cGRURXR1QXFiM003Mi9HSm5oVkY3cXZqOFZDMUlCN0FKaUphOE04b0ExTk5zODdQcHcKQXA0N3RXY0JqeURmYVN2NlpTWkQ3OWZYb3ZGNUsrczI5eTNjQXdJREFRQUJBb0lCQVFDTU5GUjZHZ05YQk1kTQpPUjZ4Q1FZU3JyMCtUeW9KU2FWTzJUTEo1a3oyUG5hUWZlMVkrV3pET29UczVxa2dpdThrTld2dEg2aGZib1ZKCm9zUXpIQzlDZVFmVWQ5d1lVTFpnUHpsVzVhbnpMdk9JUFZuUVZqSkdxaUd0TkIrMXZQNkNpUTh6bmJPMkFrVzAKZWs2WHI5MUV2SW44U0NvTWhEa0VNMWxUNmtOVzEyWUo4NVdsYmd2UHBUVlREejdHbnZKb0JoTG80amJHTURNVgpaLy9UQlFLVDBISzQ2VHp6UzR1YUsrZU16ZjkzbnBrZXRobkdPUmh5dStwSTUwaGI1UXFnZTlnY2VPK1M3UGJRClM5bWtxdWZ6MEdrYzVTY3NoV0RVNTc5UFFwdzNyTUNEdkNZcXZ3RjFJYjhzakQrS0QwZi91QTlsdG5uQUlUenYKeWQ2M0xQWDVBb0dCQVA3blNLbUk0SGF4WTJFVXNOeDBhQVdLdHJQNVBSMWVLTTA1blhjamVqUEtBVi8wdzd1QwpJd0xGR0hjS1p0UGxkWkJGYTRQa2h5ZzdzMXBHQU1aVjl2dnkyUS9jQkFaMXYzZDQzY3hBZjJtREJJdXlmRFY0CjZHOUhOYk5xVE9vQnhzRDB0akVRV2RDWUNBNGoxZFMvTGszREJ6TnE4Z1RtcGNnK1h3NzJzSlZGQW9HQkFQSXoKZTRKeGxPSE1aNzg0c0pRQTg2TmY2YWlUc2MyeWN1WjEvK2tKd3JFcHpRSFZjd3Q1RUNHOHNMZmtzVUp5WlhqQgphT21PQVkrVkpNMlhNbDlkdFJNWEFac1ZVSXVSa2dub2lZSktqaUNGS3l1eUhYOFc1SjRsd0hmKzhGUkM4MUZ0CnV1a1hoZHB0WGFIQzVpejV6cExwdTF4WjRqZjhqbHozb2t4enZFeW5Bb0dBV0JTbmdSMnhJcEtOV3FDQnRNdnMKbmUyZTBIWFJibko0K3VGcnpoMU9QdE1Rd28yYmpSR2M4M29UeUI2cUJaS0dtMEhCc3lPbXFIcG9zVXI3UFkyNgorTGlqMU4wYjd2ZUZIODErSnZRcWt0VVpId1NmOHdKQ255RW1KMGNXS001UVZhQzV6QjV3U3FvZUxuU25rUW8yCi85dmlneHZ6WVVvcUF4VzZWenRiTFZrQ2dZQldyWElBSnVIZlJTWEQyMmZtTDhrQnFPdVlOdk1rNkQ0U21Cd3oKckJpUENxU2hpV0FZdFFTKzdpWllTWEhlazg0WXZ5N3FscldjU3dYV3hjdHpNYmdCMHZQeUtsaWUra1BIWS84QwpMK2haWHc4cUhoNU1RMGNpQ2VTdGpRRTVScFNKaWJtZ2ZaaWJxUlFmTmY3bURhaU9EelBNUXlhZ1hyUWNOVXRTCkRRRlFkUUtCZ1FDVWFDckoza0k4ZFppQ3pUZlo0YnJ
xemhoZnVyYjJtdXIzbEk5WTY5RFMxaVVvcXNlZ2t0RkwKdWxmSGN1Z0RKRmt4eFRhUmhmN1dwbE0rQ04xLzZnS2hWUkRrWStxNFBXUTd6YjVLTmhoNFR1YXdEWFZTYVQzVQppbXNmbm95SlI4c1V2bE5nUUkrZmNSbjNkU3dVc2tGTTZVTnBsQ3VXK3dnSE5VNmZmd0V0T3c9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
kind: Secret
metadata:
  name: simple-sni-1.istioinaction.io
  namespace: istioinaction
type: kubernetes.io/tls
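The `tls.crt` and `tls.key` values in the Secret above are plain base64-encoded PEM. A minimal sketch of the decoding step Kubernetes performs, using a stand-in value rather than the real certificate blob:

```python
import base64

# Stand-in for the base64 blob stored under data.tls.crt in the Secret above.
pem_bytes = b"-----BEGIN CERTIFICATE-----\nMIIF...(elided)...\n-----END CERTIFICATE-----\n"
encoded = base64.b64encode(pem_bytes).decode()

# What happens on the way out: a single base64 decode recovers the PEM.
decoded = base64.b64decode(encoded)
assert decoded == pem_bytes
print(decoded.decode().splitlines()[0])
```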
(2) Create the resources
kubectl apply -f ch4/sni/simple-tls-service-1.yaml -n istioinaction
# Result
service/simple-tls-service-1 created
deployment.apps/simple-tls-service-1 created
secret/simple-sni-1.istioinaction.io created
Delete the previously created echo-tcp-gateway so the same port (31400/TCP) can be reused
kubectl delete gateway echo-tcp-gateway -n istioinaction
# Result
gateway.networking.istio.io "echo-tcp-gateway" deleted
(1) Apply the new Gateway
kubectl apply -f ch4/sni/passthrough-sni-gateway.yaml -n istioinaction
# Result
gateway.networking.istio.io/sni-passthrough-gateway created
(2) Verify the created Gateway
kubectl get gw -n istioinaction
✅ Output
NAME                      AGE
coolstore-gateway         114m
sni-passthrough-gateway   19s
(1) Inspect the passthrough-sni-vs-1.yaml file
cat ch4/sni/passthrough-sni-vs-1.yaml
✅ Output
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: simple-sni-1-vs
spec:
  hosts:
  - "simple-sni-1.istioinaction.io"
  gateways:
  - sni-passthrough-gateway
  tls:
  - match:
    - port: 31400
      sniHosts:
      - simple-sni-1.istioinaction.io
    route:
    - destination:
        host: simple-tls-service-1
        port:
          number: 80
(2) Apply
kubectl apply -f ch4/sni/passthrough-sni-vs-1.yaml -n istioinaction
# Result
virtualservice.networking.istio.io/simple-sni-1-vs created
(3) Verify
kubectl get vs -n istioinaction
✅ Output
NAME                  GATEWAYS                      HOSTS                               AGE
catalog-vs-from-gw    ["coolstore-gateway"]         ["catalog.istioinaction.io"]        41m
simple-sni-1-vs       ["sni-passthrough-gateway"]   ["simple-sni-1.istioinaction.io"]   18s
tcp-echo-vs-from-gw   ["echo-tcp-gateway"]          ["*"]                               9m46s
webapp-vs-from-gw     ["coolstore-gateway"]         ["webapp.istioinaction.io"]         106m
(1) Add the host mapping to /etc/hosts
echo "127.0.0.1 simple-sni-1.istioinaction.io" | sudo tee -a /etc/hosts
# Result
127.0.0.1 simple-sni-1.istioinaction.io
(2) Make a TLS request
curl https://simple-sni-1.istioinaction.io:30006/ \
--cacert ch4/sni/simple-sni-1/2_intermediate/certs/ca-chain.cert.pem
✅ Output
{
  "name": "simple-tls-service-1",
  "uri": "/",
  "type": "HTTP",
  "ip_addresses": [
    "10.10.0.16"
  ],
  "start_time": "2025-04-19T15:37:55.584525",
  "end_time": "2025-04-19T15:37:55.584771",
  "duration": "246.24µs",
  "body": "Hello from simple-tls-service-1!!!",
  "code": 200
}
cat ch4/sni/simple-tls-service-2.yaml
✅ Output
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: simple-tls-service-2
  name: simple-tls-service-2
spec:
  ports:
  - name: https
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: simple-tls-service-2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: simple-tls-service-2
  name: simple-tls-service-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simple-tls-service-2
  template:
    metadata:
      labels:
        app: simple-tls-service-2
    spec:
      containers:
      - env:
        - name: "LISTEN_ADDR"
          value: "0.0.0.0:8080"
        - name: "TLS_CERT_LOCATION"
          value: "/etc/certs/tls.crt"
        - name: "TLS_KEY_LOCATION"
          value: "/etc/certs/tls.key"
        - name: "NAME"
          value: "simple-tls-service-2"
        - name: "MESSAGE"
          value: "Hello from simple-tls-service-2!!!"
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: nicholasjackson/fake-service:v0.14.1
        imagePullPolicy: IfNotPresent
        name: simple-tls-service
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        securityContext:
          privileged: false
        volumeMounts:
        - mountPath: /etc/certs
          name: tls-certs
      volumes:
      - name: tls-certs
        secret:
          secretName: simple-sni-2.istioinaction.io
---
apiVersion: v1
data:
tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZjVENDQTFtZ0F3SUJBZ0lERUFJU01BMEdDU3FHU0liM0RRRUJDd1VBTUZReEN6QUpCZ05WQkFZVEFsVlQKTVE4d0RRWURWUVFJREFaRVpXNXBZV3d4RERBS0JnTlZCQW9NQTBScGN6RW1NQ1FHQTFVRUF3d2RjMmx0Y0d4bApMWE51YVMweUxtbHpkR2x2YVc1aFkzUnBiMjR1YVc4d0hoY05NakF3T1RBek1UUTBPRFEzV2hjTk5EQXdPREk1Ck1UUTBPRFEzV2pCcU1Rc3dDUVlEVlFRR0V3SlZVekVQTUEwR0ExVUVDQXdHUkdWdWFXRnNNUlF3RWdZRFZRUUgKREF0VGNISnBibWRtYVdWc1pERU1NQW9HQTFVRUNnd0RSR2x6TVNZd0pBWURWUVFEREIxemFXMXdiR1V0YzI1cApMVEl1YVhOMGFXOXBibUZqZEdsdmJpNXBiekNDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DCmdnRUJBSi92alVneDFiV1BEaGN1bldkdnBjWjBNOUp2M0p1UGRkWEd4eXFsWDhwQ3BNSTAzL1ZKaStGYjIzVkYKbGQ1bTRXL21EM3kzdmRiQ1pOK2JZQm1id0VOSmxvS1dZU29xS3ovVUJNbGRRTUVZRWFMVXJ1QVpnZUUrYU0wUApTSnN0ZldXUEh2TlhEcHlWTnJneFIwaUVEQnpFc2QyZ1hWOUs5aTJVSGZmSGpGZm5YSWtid0JDbEUrZGxKRElzCkQvS1dZN3hZRDhUNGgrZy8zU2dMVXZEY3RZU1lBRGN2N2x4RWwxbm9RV1pTQXA2aERuZHNLWU9UV1dtM1hycDkKMU56ZG5SeU5qVSt1SDBqVnZiRGYzcWRKNVEvMmtyeS8rVThNSVZQak1jVzgvbzNKdFJ6VVRpOW44NVgwOUR5dQpkM3REN204NXFVZDluTWJmNkFOUnk2bzhQclVDQXdFQUFhT0NBVFF3Z2dFd01Ba0dBMVVkRXdRQ01BQXdFUVlKCllJWklBWWI0UWdFQkJBUURBZ1pBTURNR0NXQ0dTQUdHK0VJQkRRUW1GaVJQY0dWdVUxTk1JRWRsYm1WeVlYUmwKWkNCVFpYSjJaWElnUTJWeWRHbG1hV05oZEdVd0hRWURWUjBPQkJZRUZKRzVhYVQ3YkgwdmIyanZtR0pJcUdjcwpNalZ5TUlHV0JnTlZIU01FZ1k0d2dZdUFGQW5pcnhrWmFQS1lVY05zRktrUmtDdUJiMkROb1c2a2JEQnFNUXN3CkNRWURWUVFHRXdKVlV6RVBNQTBHQTFVRUNBd0dSR1Z1YVdGc01SUXdFZ1lEVlFRSERBdFRjSEpwYm1kbWFXVnMKWkRFTU1Bb0dBMVVFQ2d3RFJHbHpNU1l3SkFZRFZRUUREQjF6YVcxd2JHVXRjMjVwTFRJdWFYTjBhVzlwYm1GagpkR2x2Ymk1cGI0SURFQUlTTUE0R0ExVWREd0VCL3dRRUF3SUZvREFUQmdOVkhTVUVEREFLQmdnckJnRUZCUWNECkFUQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FnRUFuNlhtaHRYeTFiY3pUMDRveTczV05Ic3dFYldoOE5qcDB4b1MKN1RKQ09NdVhseXlWVHZvUjNtYWp4MzhlU3N0T01TSDRrRHdoNCtuZk9aRXpEM2tzQXgwVUZVRUE1cDVnZythQQplZVJEbjlYNGtZZ1k1VkZpU05ablNzZmxWNS9mbUd0cnFYUi9EUURndERZMzZoYzBVN1U1NGJlNXQybkgzandDCkw0OVBwSmd0VnUvNGpHOFFmL2I5d1BTSG1qTktaa0lCMTBGNHllZUhkK2RXUFFoekhXdVFWUkF0K2tFdkZUTmkKUTVqWjNnVnZMQ01FUW1Jbld2Vlcvdyt2R2N2bGZrSlBmcE1
iUVk0L1dpYUlJL09sWS9MQUVJaEU1eE1LM095OApxNWJDM3hOZHZrSStFaTFqVUpWcG1ObFB0Y3Fnb1BSSm83T1NwTnhSeTdlakxKMU5IeDFJODhKZzA1MHppbHl5CkNXYmpuaTZCM25KbUs5SlUxQWFkRzFiaklza3UrdzhzMXg2U25lMDZtNStqdGVDVkVucEtUUWsvd0dna2k3YXYKT1o1dTVaS2hXQmVxVzNvVEcxTHJsRkVTaGdxVFpENkpFaEJDU1pJQU5hZEQxanNHZ2MwS095NFhMTUlWVVI0agpUd2djWFVva3FlYlFTa01WbElDSExDNDM1Ni9YTkdLRUtCVEJqai9IN1F4TFcvZWRWV28xTk5lb1YxdldXeEhZClQxd0ZvV1pYdVVBVXRpTkcxQlBwQ25nOGNTV1FwQy9kMnA0ZzBLRW12VC9xTlNkZGNkNHl3NjhhOXV0ZEpmSGsKcTArY1FaYVRzWVpNa1I0SUZLN0MwT3JxR0NDL0h6T0M4OC9NRE13TEdaZkVEamtDL1FORkFZYy9nM1dKZkJoSApXN0w5NG0wPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBbisrTlNESFZ0WThPRnk2ZFoyK2x4blF6MG0vY200OTExY2JIS3FWZnlrS2t3alRmCjlVbUw0VnZiZFVXVjNtYmhiK1lQZkxlOTFzSmszNXRnR1p2QVEwbVdncFpoS2lvclA5UUV5VjFBd1JnUm90U3UKNEJtQjRUNW96UTlJbXkxOVpZOGU4MWNPbkpVMnVERkhTSVFNSE1TeDNhQmRYMHIyTFpRZDk4ZU1WK2RjaVJ2QQpFS1VUNTJVa01pd1A4cFpqdkZnUHhQaUg2RC9kS0F0UzhOeTFoSmdBTnkvdVhFU1hXZWhCWmxJQ25xRU9kMndwCmc1TlphYmRldW4zVTNOMmRISTJOVDY0ZlNOVzlzTi9lcDBubEQvYVN2TC81VHd3aFUrTXh4YnoramNtMUhOUk8KTDJmemxmVDBQSzUzZTBQdWJ6bXBSMzJjeHQvb0ExSExxancrdFFJREFRQUJBb0lCQVFDR0VWQTBlWm4wNVNOaApxWERIR1Y4MG1Zb3JXQnZzeHZoM0tIY2lONWl4dXVYVDZuRG1kQzF1endxTEpyYTN4VFFyRWdaZmZNTUZPTlZJCllEM1JtYTgwZUlaVGwyMkI0L0YzUXVwMFJkaVhSTzdidVQrU21hODNPcEt0ZXFkWmRXdU5hOGo5SVRnZGFET0QKZWNPUWRTaVdJUWdjaVdaY0VFR0crWWVaZ2t3U0RvdE9SMlg4N3I2aEdSb0JPWjcxbUtQZk03RkFMU0ViakNDago3VUdGbVdCc3EyY2NzUE5GaC8rRG5pUC82RytXU2RNSXp5ZktNYkR5VHo2QlgwcFljalJQYUpabTk4QkVFUWJSCkJzZjNIRTBRdWU5NmhnOHFCY1Y4cDlkc01JaXNkeWpEOWk4a29ib2JVS2JySVNlMDVTOGpWNHdWb05ST1JQVEUKNkdRS09EV2hBb0dCQU16V3NpQW1DaWdTWDZjR0ZLc3VEVzBMQktVOTlPL20xdlBpUGdjQnkvalA3TitpaEtMTwpURmo5a3hqWEozZ3RkL0pYYmU3ZWxlbE8ydTJWL2xtamE4N3dFOHZObUFqSGlzNTgwd0JyVzh6VGM2YnhhUVlBCldOcS9Lc3BEZllEMXo5T1hCV1lrUm1leWNySjdLZ3Bwa3IxcjZwTi9TQXg2dzB6dDkyV0diRFBaQW9HQkFNZmgKeUJ5a0xVdDdTcitSd3pCTkUzeGQ2Qm9BSlZ1all1RmxyM0t1YzdDTVFqNWU5WUkwRkV6d1JyYnVzM2xMZ2I1MgppUm1oMnpjUEFwZmh5SXVEREt4TmgxUzlQaWNLQnJ1ZVFEbUd3S2ZENnhVZHNpZTFHZjI3SW9jMGhYZjg0Y25NCmRQckE5d2EydGJjQkYzdkFqZ1h1Q3UxNldQOXFtdFB2UXpLYWVJUTlBb0dBQnBOOTlIcEVLVFV0elBidEF2SGYKakhpbUZZZi9yUlFFSXFCSXpZREpRNXVwUnlTNGpXR0NJZmxDRjdJUW1sTWJYclJmMnlOYVBMdERYQTFNdFNRQgorZ3JMRitmcDBaNVdYbnF6YTNnRzRuU0hhZnltR29NNFZ3MThHakpBZlR0bkNLdjRpR2J4dTdLRzdDUDRIWTEyCklJNnVZVDNjMmttMnEybVlYN0lKRjBFQ2dZQlAwTzhSME5WdGdNdzJkMFJVTTFNR3BKRWNZTmFLSTFKRzQwNE8KSTI5N0htY05kT25nbGw5TTRkMjdDdEtNS3dTaVE3ODNoeFI4aGZmcEluWHNqK1l0bjcvY3JMejI1ZUFPWjRFSgo5NjlTenI4KzdWN0kwRjZTblhtS09BVGNCeFU2ZWZSMGRUMnZacUpsYzRBbklKc1Y3eHBaL2pNdnV5Z2NYVHllCkptVGRtUUtCZ1FDY21MT2QzQVBkS0tMNy9lMGkyKy9
qQlJOY1UvSE9LSGV4YllCclcrNjI1REFudWdlSVZ0WEwKR1RjSGl1eU5heCt4UjMxT3dJSzJNeEVJR0ptWDRrRFkrUDRaS1AwMnROZkRvVWUwb1lQeGc5eGx5WDFhVkdXUgphc25zVmRVQTZwTFBvQlhMRmVBYkZCdVVZYWJ3V1cyU3BzekpUWnRZU1I1c09pRGhYS1l4eUE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
kind: Secret
metadata:
  name: simple-sni-2.istioinaction.io
  namespace: istioinaction
type: kubernetes.io/tls
kubectl apply -f ch4/sni/simple-tls-service-2.yaml -n istioinaction
# Result
service/simple-tls-service-2 created
deployment.apps/simple-tls-service-2 created
secret/simple-sni-2.istioinaction.io created
cat ch4/sni/passthrough-sni-gateway-both.yaml
✅ Output
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sni-passthrough-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp-sni-1
      protocol: TLS
    hosts:
    - "simple-sni-1.istioinaction.io"
    tls:
      mode: PASSTHROUGH
  - port:
      number: 31400
      name: tcp-sni-2
      protocol: TLS
    hosts:
    - "simple-sni-2.istioinaction.io"
    tls:
      mode: PASSTHROUGH
kubectl apply -f ch4/sni/passthrough-sni-gateway-both.yaml -n istioinaction
# Result
gateway.networking.istio.io/sni-passthrough-gateway configured
(1) Inspect the passthrough-sni-vs-2.yaml file
cat ch4/sni/passthrough-sni-vs-2.yaml
✅ Output
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: simple-sni-2-vs
spec:
  hosts:
  - "simple-sni-2.istioinaction.io"
  gateways:
  - sni-passthrough-gateway
  tls:
  - match:
    - port: 31400
      sniHosts:
      - simple-sni-2.istioinaction.io
    route:
    - destination:
        host: simple-tls-service-2
        port:
          number: 80
(2) Apply
kubectl apply -f ch4/sni/passthrough-sni-vs-2.yaml -n istioinaction
# Result
virtualservice.networking.istio.io/simple-sni-2-vs created
(1) Add the host mapping to /etc/hosts
echo "127.0.0.1 simple-sni-2.istioinaction.io" | sudo tee -a /etc/hosts
# Result
127.0.0.1 simple-sni-2.istioinaction.io
(2) Make a TLS request
curl https://simple-sni-2.istioinaction.io:30006 \
--cacert ch4/sni/simple-sni-2/2_intermediate/certs/ca-chain.cert.pem
✅ Output
{
  "name": "simple-tls-service-2",
  "uri": "/",
  "type": "HTTP",
  "ip_addresses": [
    "10.10.0.17"
  ],
  "start_time": "2025-04-19T15:42:19.312955",
  "end_time": "2025-04-19T15:42:19.313110",
  "duration": "155.668µs",
  "body": "Hello from simple-tls-service-2!!!",
  "code": 200
}
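Both virtual hosts now share the single TLS port 31400; in PASSTHROUGH mode the gateway never decrypts the traffic and chooses the upstream solely from the SNI hostname in the ClientHello. The routing table the two VirtualServices establish can be sketched as a plain lookup (an illustration of the idea, not Envoy's actual implementation):

```python
# SNI host -> (upstream service, port), as configured by the two VirtualServices.
sni_routes = {
    "simple-sni-1.istioinaction.io": ("simple-tls-service-1", 80),
    "simple-sni-2.istioinaction.io": ("simple-tls-service-2", 80),
}

def route(sni_host: str) -> tuple:
    """Pick the upstream for a ClientHello's server_name, failing closed otherwise."""
    if sni_host not in sni_routes:
        raise ConnectionRefusedError(f"no route for SNI host {sni_host!r}")
    return sni_routes[sni_host]

print(route("simple-sni-2.istioinaction.io"))
```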
kind delete cluster --name myk8s
# Result
Deleting cluster "myk8s" ...
Deleted nodes: ["myk8s-control-plane"]
sudo vi /etc/hosts
# Remove the following entries
127.0.0.1 webapp.istioinaction.io
127.0.0.1 catalog.istioinaction.io
127.0.0.1 simple-sni-1.istioinaction.io
127.0.0.1 simple-sni-2.istioinaction.io