The previous post covered chapters 1 and 2: Service Mesh and first steps with Istio. This post looks at Envoy Proxy, the core component of Istio's data plane.
Envoy is a high-performance L7 proxy originally developed at Lyft. It is widely used in microservice architectures to mediate service-to-service communication and to improve observability.
[Listeners]
[Routes]
For example, when the request path matches /catalog, the traffic is forwarded to the catalog cluster.
[Clusters]

Envoy traffic flow: downstream → listener → route → cluster → upstream
[Service Discovery]
[Load Balancing]
[Traffic & Request Routing]
[Traffic Shifting & Shadowing]
[Network Resilience]
[HTTP/2 and gRPC Support]
[Metrics-Based Observability]
[Distributed Tracing]
[Automatic TLS Termination and Origination]
[Rate Limiting]
[Extensibility]
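As one concrete example of the resilience features listed above, Envoy can declare a retry policy directly on a route. The fragment below is a minimal sketch (the cluster name and values are illustrative, not from the chapter's config files):

```yaml
route:
  cluster: httpbin_service     # illustrative cluster name
  retry_policy:
    retry_on: "5xx"            # retry when the upstream returns a 5xx response
    num_retries: 3
    per_try_timeout: 1s        # budget for each individual attempt
```

Each attempt gets its own timeout, so the overall request timeout should be set with the per-try budget in mind.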
Envoy is driven by a configuration file in JSON or YAML format.
There are two configuration styles: static configuration and dynamic configuration.
Static configuration is the file Envoy reads once at startup; it typically contains the basic settings such as listeners, routes, clusters, and timeouts.
Key characteristics
Example configuration
```yaml
static_resources:
  listeners:
  - name: httpbin-demo
    address:
      socket_address: { address: 0.0.0.0, port_value: 15001 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          http_filters:
          - name: envoy.filters.http.router
          route_config:
            name: httpbin_local_route
            virtual_hosts:
            - name: httpbin_local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  auto_host_rewrite: true
                  cluster: httpbin_service
  clusters:
  - name: httpbin_service
    connect_timeout: 5s
    type: LOGICAL_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: httpbin
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: httpbin
                port_value: 8000
```
With dynamic configuration, Envoy updates its configuration at runtime without a restart.
Envoy fetches the configuration dynamically through the xDS (Discovery Service) APIs.
Envoy's main xDS APIs are LDS (listeners), RDS (routes), CDS (clusters), EDS (endpoints), SDS (secrets), and ADS, which aggregates the others over a single ordered stream.
Example configuration
```yaml
dynamic_resources:
  lds_config:
    api_config_source:
      api_type: GRPC
      grpc_services:
      - envoy_grpc:
          cluster_name: xds_cluster
clusters:
- name: xds_cluster
  connect_timeout: 0.25s
  type: STATIC
  lb_policy: ROUND_ROBIN
  http2_protocol_options: {}
  hosts:
  - socket_address: { address: 127.0.0.3, port_value: 5678 }
```
In Istio's case, the entire configuration is managed consistently through ADS:
```yaml
bootstrap:
  dynamicResources:
    ldsConfig:
      ads: {}
    cdsConfig:
      ads: {}
    adsConfig:
      apiType: GRPC
      grpcServices:
      - envoyGrpc:
          clusterName: xds-grpc
      refreshDelay: 1.000s
  staticResources:
    clusters:
    - name: xds-grpc
      type: STRICT_DNS
      connectTimeout: 10.000s
      hosts:
      - socketAddress:
          address: istio-pilot.istio-system
          portValue: 15010
      circuitBreakers:
        thresholds:
        - maxConnections: 100000
          maxPendingRequests: 100000
          maxRequests: 100000
      http2ProtocolOptions: {}
```
Because this approach applies configuration changes in real time and, with ADS, avoids ordering races between resource updates, it is well suited to large cloud-native environments.
| Style | Pros | Cons | Best fit |
|---|---|---|---|
| Static | Simple configuration, easy to manage | Restart required to change settings | Small environments with little change |
| Dynamic | Real-time configuration changes, high flexibility | More configuration and management complexity | Large microservice environments with frequent changes |
Choose the configuration style that fits your environment and requirements to use Envoy effectively.

istiod implements the xDS APIs that Envoy consumes, dynamically managing Envoy configuration such as listeners, endpoints, and clusters.
istiod is what provides this supporting infrastructure.

docker pull envoyproxy/envoy:v1.19.0
docker pull curlimages/curl
docker pull mccutchen/go-httpbin
# this image does not support arm CPUs
~~docker pull citizenstig/httpbin~~
docker images
docker run -d -e PORT=8000 --name httpbin mccutchen/go-httpbin
docker ps
docker run -it --rm --link httpbin curlimages/curl curl -X GET http://httpbin:8000/headers
docker run -it --rm envoyproxy/envoy:v1.19.0 envoy
cat ch3/simple.yaml 
docker run --name proxy --link httpbin envoyproxy/envoy:v1.19.0 --config-yaml "$(cat ch3/simple.yaml)"
docker logs proxy
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15001/headers
# the proxy adds the X-Envoy-Expected-Rq-Timeout-Ms, X-Forwarded-Proto, and X-Request-Id headers
docker rm -f proxy
cat ch3/simple_change_timeout.yaml
docker run --name proxy --link httpbin envoyproxy/envoy:v1.19.0 --config-yaml "$(cat ch3/simple_change_timeout.yaml)"
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15001/headers
docker run -it --rm --link proxy curlimages/curl curl -X POST http://proxy:15000/logging?http=debug
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15001/delay/0.5
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15001/delay/1
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15001/delay/2 
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15000/stats 

docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15000/stats | grep retry 
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15000/certs 
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15000/clusters 
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15000/config_dump 
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15000/listeners 
docker run -it --rm --link proxy curlimages/curl curl -X POST http://proxy:15000/logging 
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15000/stats/prometheus 
cat ch3/simple_retry.yaml 
docker run -p 15000:15000 --name proxy --link httpbin envoyproxy/envoy:v1.19.0 --config-yaml "$(cat ch3/simple_retry.yaml)"
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15001/status/500
docker run -it --rm --link proxy curlimages/curl curl -X GET http://proxy:15000/stats | grep retry

docker rm -f proxy && docker rm -f httpbin
Exposing a service to the outside
For an external system to call a service endpoint (e.g. api.istioinaction.io/v1/products), the domain name (api.istioinaction.io) must first be resolved to an IP address.
Caveats when mapping IP addresses in DNS
How to expose services reliably

Services and virtual IPs
Both prod.istioinaction.io and api.istioinaction.io can map to the same virtual IP.
How a reverse proxy handles requests

Virtual hosting
HTTP/1.1 uses the Host header and HTTP/2 uses the :authority header to determine which service a request is destined for.
Istio's ingress and virtual hosting
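As a sketch of virtual hosting in Envoy terms, a single listener's route configuration can fan requests out to different clusters based on the requested host (the cluster names here are illustrative, not from the chapter's files):

```yaml
route_config:
  virtual_hosts:
  - name: prod
    domains: ["prod.istioinaction.io"]   # matched against Host / :authority
    routes:
    - match: { prefix: "/" }
      route: { cluster: prod_service }   # illustrative cluster name
  - name: api
    domains: ["api.istioinaction.io"]
    routes:
    - match: { prefix: "/" }
      route: { cluster: api_service }    # illustrative cluster name
```

Both domains can resolve to the same virtual IP; the proxy disambiguates purely on the host the client asked for.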
[Role of the ingress gateway]
[Operating as a reverse proxy]

[Components]

kind create cluster --name myk8s --image kindest/node:v1.23.17 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000 # Sample application (istio-ingressgateway) HTTP
    hostPort: 30000
  - containerPort: 30001 # Prometheus
    hostPort: 30001
  - containerPort: 30002 # Grafana
    hostPort: 30002
  - containerPort: 30003 # Kiali
    hostPort: 30003
  - containerPort: 30004 # Tracing
    hostPort: 30004
  - containerPort: 30005 # Sample application (istio-ingressgateway) HTTPS
    hostPort: 30005
  - containerPort: 30006 # TCP route
    hostPort: 30006
  - containerPort: 30007 # New gateway
    hostPort: 30007
  extraMounts: # this section can be omitted
  - hostPath: /Users/bkshin/istio-in-action/book-source-code-master # set to your own pwd
    containerPath: /istiobook
networking:
  podSubnet: 10.10.0.0/16
  serviceSubnet: 10.200.1.0/24
EOF
docker ps 
docker exec -it myk8s-control-plane sh -c 'apt update && apt install tree psmisc lsof wget bridge-utils net-tools dnsutils tcpdump ngrep iputils-ping git vim -y' 
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm install metrics-server metrics-server/metrics-server --set 'args[0]=--kubelet-insecure-tls' -n kube-system
kubectl get all -n kube-system -l app.kubernetes.io/instance=metrics-server 
docker exec -it myk8s-control-plane bash
tree /istiobook/ -L 1
export ISTIOV=1.17.8
echo 'export ISTIOV=1.17.8' >> /root/.bashrc
curl -s -L https://istio.io/downloadIstio | ISTIO_VERSION=$ISTIOV sh -
cp istio-$ISTIOV/bin/istioctl /usr/local/bin/istioctl
istioctl version --remote=false 
istioctl install --set profile=default -y 
kubectl get istiooperators -n istio-system -o yaml
kubectl get all,svc,ep,sa,cm,secret,pdb -n istio-system
kubectl get cm -n istio-system istio -o yaml
kubectl get crd | grep istio.io | sort 

kubectl apply -f istio-$ISTIOV/samples/addons
kubectl get pod -n istio-system
exit
kubectl create ns istioinaction
kubectl label namespace istioinaction istio-injection=enabled
kubectl get ns --show-labels 
Change the service type to NodePort and pin the nodePort values
Set externalTrafficPolicy to Local (to preserve the client IP)
kubectl patch svc -n istio-system istio-ingressgateway -p '{"spec": {"type": "NodePort", "ports": [{"port": 80, "targetPort": 8080, "nodePort": 30000}]}}'
kubectl patch svc -n istio-system istio-ingressgateway -p '{"spec": {"type": "NodePort", "ports": [{"port": 443, "targetPort": 8443, "nodePort": 30005}]}}'
kubectl patch svc -n istio-system istio-ingressgateway -p '{"spec":{"externalTrafficPolicy": "Local"}}'
kubectl describe svc -n istio-system istio-ingressgateway

kubectl patch svc -n istio-system prometheus -p '{"spec": {"type": "NodePort", "ports": [{"port": 9090, "targetPort": 9090, "nodePort": 30001}]}}'
kubectl patch svc -n istio-system grafana -p '{"spec": {"type": "NodePort", "ports": [{"port": 3000, "targetPort": 3000, "nodePort": 30002}]}}'
kubectl patch svc -n istio-system kiali -p '{"spec": {"type": "NodePort", "ports": [{"port": 20001, "targetPort": 20001, "nodePort": 30003}]}}'
kubectl patch svc -n istio-system tracing -p '{"spec": {"type": "NodePort", "ports": [{"port": 80, "targetPort": 16686, "nodePort": 30004}]}}'
kubectl get pod -n istio-system -l app=istio-ingressgateway
docker exec -it myk8s-control-plane istioctl proxy-status 
docker exec -it myk8s-control-plane istioctl proxy-config all deploy/istio-ingressgateway.istio-system
docker exec -it myk8s-control-plane istioctl proxy-config listener deploy/istio-ingressgateway.istio-system
docker exec -it myk8s-control-plane istioctl proxy-config routes deploy/istio-ingressgateway.istio-system 
docker exec -it myk8s-control-plane istioctl proxy-config cluster deploy/istio-ingressgateway.istio-system
docker exec -it myk8s-control-plane istioctl proxy-config endpoint deploy/istio-ingressgateway.istio-system
docker exec -it myk8s-control-plane istioctl proxy-config log deploy/istio-ingressgateway.istio-system
docker exec -it myk8s-control-plane istioctl proxy-config secret deploy/istio-ingressgateway.istio-system 


kubectl exec -n istio-system deploy/istio-ingressgateway -- ps
kubectl exec -n istio-system deploy/istio-ingressgateway -- ps aux 

kubectl exec -n istio-system deploy/istio-ingressgateway -- whoami
kubectl exec -n istio-system deploy/istio-ingressgateway -- id 
Overview
Open an HTTP port on the gateway that admits traffic destined for webapp.istioinaction.io.
Hands-on
kubectl stern -n istio-system -l app=istiod 
cat ch4/coolstore-gw.yaml
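The contents of ch4/coolstore-gw.yaml are not reproduced here; as a sketch, a Gateway that opens HTTP port 80 for this host looks roughly like the following (resource name and field values assumed from the book's conventions):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: coolstore-gateway          # assumed name
spec:
  selector:
    istio: ingressgateway          # bind to the default ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "webapp.istioinaction.io"    # only admit traffic for this virtual host
```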
kubectl -n istioinaction apply -f ch4/coolstore-gw.yaml 
kubectl get gw,vs -n istioinaction 
docker exec -it myk8s-control-plane istioctl proxy-status 
docker exec -it myk8s-control-plane istioctl proxy-config listener deploy/istio-ingressgateway.istio-system 
docker exec -it myk8s-control-plane istioctl proxy-config routes deploy/istio-ingressgateway.istio-system 
kubectl get svc -n istio-system istio-ingressgateway -o jsonpath="{.spec.ports}" | jq 
docker exec -it myk8s-control-plane istioctl proxy-config routes deploy/istio-ingressgateway.istio-system -o json
docker exec -it myk8s-control-plane istioctl proxy-config routes deploy/istio-ingressgateway.istio-system -o json --name http.8080 
Overview
Routing must be configured so that it applies only to traffic destined for webapp.istioinaction.io.
Hands-on
cat ch4/coolstore-vs.yaml
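The VirtualService applied below binds this host to the gateway and routes matching traffic to the webapp service. A minimal sketch (names and port assumed, not copied from the file):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: webapp-vs-from-gw          # assumed name
spec:
  hosts:
  - "webapp.istioinaction.io"      # must match a host opened on the Gateway
  gateways:
  - coolstore-gateway              # assumed Gateway name
  http:
  - route:
    - destination:
        host: webapp               # Kubernetes service in istioinaction
        port:
          number: 80               # assumed service port
```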
kubectl apply -n istioinaction -f ch4/coolstore-vs.yaml
kubectl get gw,vs -n istioinaction
docker exec -it myk8s-control-plane istioctl proxy-status
docker exec -it myk8s-control-plane istioctl proxy-config listener deploy/istio-ingressgateway.istio-system
docker exec -it myk8s-control-plane istioctl proxy-config routes deploy/istio-ingressgateway.istio-system 

docker exec -it myk8s-control-plane istioctl proxy-config routes deploy/istio-ingressgateway.istio-system -o json --name http.8080 
kubectl stern -n istio-system -l app=istiod 
kubectl apply -f services/catalog/kubernetes/catalog.yaml -n istioinaction
kubectl apply -f services/webapp/kubernetes/webapp.yaml -n istioinaction
krew plugins - LINK
kubectl krew install images
kubectl images -n istioinaction

kubectl krew install resource-capacity
kubectl resource-capacity -n istioinaction -c --pod-count
kubectl resource-capacity -n istioinaction -c --pod-count -u 
docker exec -it myk8s-control-plane istioctl proxy-status 
docker exec -it myk8s-control-plane istioctl proxy-config listener deploy/istio-ingressgateway.istio-system
docker exec -it myk8s-control-plane istioctl proxy-config cluster deploy/istio-ingressgateway.istio-system | egrep 'TYPE|istioinaction' 
docker exec -it myk8s-control-plane istioctl proxy-config endpoint deploy/istio-ingressgateway.istio-system | egrep 'ENDPOINT|istioinaction' 
Check webapp
docker exec -it myk8s-control-plane istioctl proxy-config listener deploy/webapp.istioinaction 
docker exec -it myk8s-control-plane istioctl proxy-config routes deploy/webapp.istioinaction 
docker exec -it myk8s-control-plane istioctl proxy-config cluster deploy/webapp.istioinaction 
docker exec -it myk8s-control-plane istioctl proxy-config endpoint deploy/webapp.istioinaction 
docker exec -it myk8s-control-plane istioctl proxy-config secret deploy/webapp.istioinaction 
Check catalog
docker exec -it myk8s-control-plane istioctl proxy-config listener deploy/catalog.istioinaction 
docker exec -it myk8s-control-plane istioctl proxy-config routes deploy/catalog.istioinaction 
docker exec -it myk8s-control-plane istioctl proxy-config cluster deploy/catalog.istioinaction 
docker exec -it myk8s-control-plane istioctl proxy-config endpoint deploy/catalog.istioinaction 
docker exec -it myk8s-control-plane istioctl proxy-config secret deploy/catalog.istioinaction 
kubectl scale deployment -n istioinaction webapp --replicas 2
kubectl scale deployment -n istioinaction catalog --replicas 2
docker exec -it myk8s-control-plane istioctl proxy-config endpoint deploy/webapp.istioinaction | egrep 'ENDPOINT|istioinaction'
docker exec -it myk8s-control-plane istioctl proxy-config endpoint deploy/catalog.istioinaction | egrep 'ENDPOINT|istioinaction' 
kubectl scale deployment -n istioinaction webapp --replicas 1
kubectl scale deployment -n istioinaction catalog --replicas 1
kubectl exec -it deploy/webapp -n istioinaction -c istio-proxy -- curl http://localhost:15000/certs | jq
[Details]
[TLS certificate-based communication flow]

[Hands-on]
docker exec -it myk8s-control-plane istioctl proxy-config secret deploy/istio-ingressgateway.istio-system 
cat ch4/certs/3_application/private/webapp.istioinaction.io.key.pem # private key
cat ch4/certs/3_application/certs/webapp.istioinaction.io.cert.pem # certificate
kubectl create -n istio-system secret tls webapp-credential \
--key ch4/certs/3_application/private/webapp.istioinaction.io.key.pem \
--cert ch4/certs/3_application/certs/webapp.istioinaction.io.cert.pem
kubectl view-secret -n istio-system webapp-credential --all 
cat ch4/coolstore-gw-tls.yaml
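The contents of coolstore-gw-tls.yaml are not shown here; as a sketch, the HTTPS server entry it adds to the Gateway looks roughly like this (fields assumed; credentialName matches the webapp-credential secret created above):

```yaml
- port:
    number: 443
    name: https
    protocol: HTTPS
  tls:
    mode: SIMPLE                   # server-side TLS termination only
    credentialName: webapp-credential   # secret in istio-system with tls.key/tls.crt
  hosts:
  - "webapp.istioinaction.io"
```

The gateway reads the certificate via SDS from the Kubernetes secret, so no volume mounts are needed.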
kubectl apply -f ch4/coolstore-gw-tls.yaml -n istioinaction
docker exec -it myk8s-control-plane istioctl proxy-config secret deploy/istio-ingressgateway.istio-system
curl -v -H "Host: webapp.istioinaction.io" https://localhost:30005/api/catalog
cat ch4/certs/2_intermediate/certs/ca-chain.cert.pem
openssl x509 -in ch4/certs/2_intermediate/certs/ca-chain.cert.pem -noout -text 
curl -v -H "Host: webapp.istioinaction.io" https://localhost:30005/api/catalog \
--cacert ch4/certs/2_intermediate/certs/ca-chain.cert.pem
echo "127.0.0.1 webapp.istioinaction.io" | sudo tee -a /etc/hosts
cat /etc/hosts | tail -n 1 
curl -v https://webapp.istioinaction.io:30005/api/catalog \
--cacert ch4/certs/2_intermediate/certs/ca-chain.cert.pem
open https://webapp.istioinaction.io:30005


Check HTTP access
open http://webapp.istioinaction.io:30000

cat ch4/coolstore-gw-tls-redirect.yaml 
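The redirect variant applied below changes the plain-HTTP server entry so that it answers with a redirect to HTTPS. A sketch of the relevant server block (fields assumed):

```yaml
- port:
    number: 80
    name: http
    protocol: HTTP
  tls:
    httpsRedirect: true            # send a 301 for any plain-HTTP request
  hosts:
  - "webapp.istioinaction.io"
```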
kubectl apply -f ch4/coolstore-gw-tls-redirect.yaml
curl -v http://webapp.istioinaction.io:30000/api/catalog

In mutual TLS, the client presents its own certificate to the server,
and the server verifies that the certificate was signed by a trusted CA.
Client and server authenticate each other, and traffic is then exchanged over the encrypted channel.
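The coolstore-gw-mtls.yaml file applied below switches the gateway's TLS mode to mutual. A sketch of the relevant server entry (fields assumed; credentialName matches the webapp-credential-mtls secret created in this section, which also carries ca.crt for verifying client certificates):

```yaml
- port:
    number: 443
    name: https
    protocol: HTTPS
  tls:
    mode: MUTUAL                        # require and verify a client certificate
    credentialName: webapp-credential-mtls   # tls.key, tls.crt, ca.crt
  hosts:
  - "webapp.istioinaction.io"
```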

cat ch4/certs/3_application/private/webapp.istioinaction.io.key.pem
cat ch4/certs/3_application/certs/webapp.istioinaction.io.cert.pem
cat ch4/certs/2_intermediate/certs/ca-chain.cert.pem
openssl x509 -in ch4/certs/2_intermediate/certs/ca-chain.cert.pem -noout -text 


kubectl create -n istio-system secret \
generic webapp-credential-mtls --from-file=tls.key=\
ch4/certs/3_application/private/webapp.istioinaction.io.key.pem \
--from-file=tls.crt=\
ch4/certs/3_application/certs/webapp.istioinaction.io.cert.pem \
--from-file=ca.crt=\
ch4/certs/2_intermediate/certs/ca-chain.cert.pem
kubectl view-secret -n istio-system webapp-credential-mtls --all
cat ch4/coolstore-gw-mtls.yaml 
kubectl apply -f ch4/coolstore-gw-mtls.yaml -n istioinaction
kubectl stern -n istio-system -l app=istiod
docker exec -it myk8s-control-plane istioctl proxy-config secret deploy/istio-ingressgateway.istio-system

curl -v https://webapp.istioinaction.io:30005/api/catalog \
--cacert ch4/certs/2_intermediate/certs/ca-chain.cert.pem
curl -v https://webapp.istioinaction.io:30005/api/catalog \
--cacert ch4/certs/2_intermediate/certs/ca-chain.cert.pem \
--cert ch4/certs/4_client/certs/webapp.istioinaction.io.cert.pem \
--key ch4/certs/4_client/private/webapp.istioinaction.io.key.pem
Both webapp.istioinaction.io and catalog.istioinaction.io can be added to the same gateway.
cat ch4/certs2/3_application/private/catalog.istioinaction.io.key.pem
cat ch4/certs2/3_application/certs/catalog.istioinaction.io.cert.pem
openssl x509 -in ch4/certs2/3_application/certs/catalog.istioinaction.io.cert.pem -noout -text 

kubectl create -n istio-system secret tls catalog-credential \
--key ch4/certs2/3_application/private/catalog.istioinaction.io.key.pem \
--cert ch4/certs2/3_application/certs/catalog.istioinaction.io.cert.pem
kubectl apply -f ch4/coolstore-gw-multi-tls.yaml -n istioinaction
cat ch4/catalog-vs.yaml
kubectl apply -f ch4/catalog-vs.yaml -n istioinaction
kubectl get gw,vs -n istioinaction 

echo "127.0.0.1 catalog.istioinaction.io" | sudo tee -a /etc/hosts
cat /etc/hosts | tail -n 2 
curl -v https://webapp.istioinaction.io:30005/api/catalog \
--cacert ch4/certs/2_intermediate/certs/ca-chain.cert.pem 
curl -v https://catalog.istioinaction.io:30005/items \
--cacert ch4/certs2/2_intermediate/certs/ca-chain.cert.pem 
kubectl apply -f ch4/echo.yaml -n istioinaction
kubectl get pod -n istioinaction
KUBE_EDITOR="vi" kubectl edit svc istio-ingressgateway -n istio-system
# add this entry under spec.ports
    - name: tcp
      nodePort: 30006
      port: 31400
      protocol: TCP
      targetPort: 31400
kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.ports[?(@.name=="tcp")]}'
cat ch4/gateway-tcp.yaml
kubectl apply -f ch4/gateway-tcp.yaml -n istioinaction 
kubectl get gw -n istioinaction 
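The contents of gateway-tcp.yaml are not printed above; as a sketch, a Gateway exposing a raw TCP port looks roughly like this (resource and port names assumed):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: echo-tcp-gateway           # assumed name
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp-echo
      protocol: TCP                # L4 forwarding: no Host header, no HTTP routing
    hosts:
    - "*"
```

With protocol TCP the gateway cannot inspect hosts or paths, so the matching VirtualService routes purely on the port number.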
cat ch4/echo-vs.yaml
kubectl apply -f ch4/echo-vs.yaml -n istioinaction 
kubectl get vs -n istioinaction 
telnet localhost 30006 