
These notes summarize what I studied while working through the Istio Hands-on Study based on Istio in Action by Christian E. Posta.
Many thanks to the CloudNetaStudy team for organizing and running the study.
Source: Istio Hands-on Study, 1st cohort
Data Plane
Control Plane
Traffic Management
Security
Observability
Policy control and automation
istiod was introduced in Istio 1.5; in earlier versions its functionality was split across the separate Pilot, Citadel, Galley, and Mixer components.

| Component | Role |
|---|---|
| Pilot | Pushes routing and policy configuration to the Envoy proxies (service discovery, traffic management) |
| Citadel | Issues and manages certificates for mTLS |
| Galley | Collects, validates, and transforms configuration |
| Mixer | Policy decisions and telemetry collection (deprecated in 1.5, removed in 1.8) |
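Once Istio is installed (done later in this lab), you can see that these functions are now consolidated into a single istiod deployment; a quick check, assuming the labels and container names of a default install:
# The control plane is a single istiod Deployment in istio-system
kubectl get deploy,pod -n istio-system -l app=istiod
# The former Pilot discovery service runs as the "discovery" container inside istiod
kubectl get pod -n istio-system -l app=istiod -o jsonpath='{.items[0].spec.containers[*].name}{"\n"}'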
Install Docker Desktop - Link
Install kind and the required tools
# Install Kind
brew install kind
kind --version
# Install kubectl
brew install kubernetes-cli
kubectl version --client=true
## Set up "k" as a shortcut for kubectl
echo "alias k=kubectl" >> ~/.zshrc
# Install Helm
brew install helm
helm version
Install recommended utilities
# Install tools
brew install krew
brew install kube-ps1
brew install kubectx
# Colorize kubectl output
brew install kubecolor
echo "alias kubectl=kubecolor" >> ~/.zshrc
echo "compdef kubecolor=kubectl" >> ~/.zshrc
# Install krew plugins
kubectl krew install neat stern
Set up kubeconfig
# Check before deploying the cluster
docker ps
# Create a cluster with kind
kind create cluster
# Verify the cluster was created
kind get clusters
kind get nodes
kubectl cluster-info
# Check node info
kubectl get node -o wide
# Check pod info
kubectl get pod -A
kubectl get componentstatuses
# A single control-plane (container) node is running
docker ps
docker images
# Check the kubeconfig file
cat ~/.kube/config
# or, if the KUBECONFIG variable is set
cat $KUBECONFIG
# Deploy an nginx pod and check: will it be scheduled even though this is a control-plane node?
kubectl run nginx --image=nginx:alpine
kubectl get pod -owide
# Check the node's taints: the kind control-plane node has no taint, so regular pods can be scheduled on it
kubectl describe node | grep Taints
Taints: <none>
# Delete the cluster
kind delete cluster
# Confirm the kubeconfig entry was removed
cat ~/.kube/config
# or, if the KUBECONFIG variable is set
cat $KUBECONFIG
docker ps 
export KUBECONFIG=~/.kube/config

git clone https://github.com/AcornPublishing/istio-in-action
cd istio-in-action/book-source-code-master
kind create cluster --name myk8s --image kindest/node:v1.23.17 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000 # Sample application (istio-ingressgateway)
    hostPort: 30000
  - containerPort: 30001 # Prometheus
    hostPort: 30001
  - containerPort: 30002 # Grafana
    hostPort: 30002
  - containerPort: 30003 # Kiali
    hostPort: 30003
  - containerPort: 30004 # Tracing
    hostPort: 30004
  - containerPort: 30005 # kube-ops-view
    hostPort: 30005
  extraMounts:
  - hostPath: /Users/bkshin/IdeaProjects/istio-study/istio-in-action/book-source-code-master # set this to your own pwd path
    containerPath: /istiobook
networking:
  podSubnet: 10.10.0.0/16
  serviceSubnet: 10.200.1.0/24
EOF
kind get nodes --name myk8s
docker ps -a 
docker exec -it myk8s-control-plane sh -c 'apt update && apt install tree psmisc lsof wget bridge-utils net-tools dnsutils tcpdump ngrep iputils-ping git vim -y'

helm repo add geek-cookbook https://geek-cookbook.github.io/charts/
helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 --set service.main.type=NodePort,service.main.ports.http.nodePort=30005 --set env.TZ="Asia/Seoul" --namespace kube-system

kubectl get deploy,pod,svc,ep -n kube-system -l app.kubernetes.io/instance=kube-ops-view

helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm install metrics-server metrics-server/metrics-server --set 'args[0]=--kubelet-insecure-tls' -n kube-system

kubectl get all -n kube-system -l app.kubernetes.io/instance=metrics-server
Istio installation components
Source: Istio Hands-on Study, 1st cohort
docker exec -it myk8s-control-plane bash

tree /istiobook/ -L 1
export ISTIOV=1.17.8
echo 'export ISTIOV=1.17.8' >> /root/.bashrc
curl -s -L https://istio.io/downloadIstio | ISTIO_VERSION=$ISTIOV sh -
tree istio-$ISTIOV -L 2 # includes sample YAML files
cp istio-$ISTIOV/bin/istioctl /usr/local/bin/istioctl
istioctl version --remote=false
istioctl x precheck # check that the k8s cluster meets the install requirements
istioctl profile list
istioctl install --set profile=default -y

kubectl get all,svc,ep,sa,cm,secret,pdb -n istio-system
kubectl get crd | grep istio.io | sort
istioctl verify-install 

kubectl apply -f istio-$ISTIOV/samples/addons 
kubectl get pod -n istio-system 
exit

kubectl get cm -n istio-system istio -o yaml
Source: Istio Hands-on Study, 1st cohort
kubectl create ns istioinaction
kubectl label namespace istioinaction istio-injection=enabled
kubectl get ns --show-labels 
kubectl get mutatingwebhookconfiguration 
kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml
kubectl apply -f services/catalog/kubernetes/catalog.yaml -n istioinaction 
kubectl apply -f services/webapp/kubernetes/webapp.yaml -n istioinaction 
kubectl get pod -n istioinaction 
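Because the namespace carries the istio-injection=enabled label, each pod should now run an injected istio-proxy sidecar next to the application container; a quick check, assuming the app=webapp and app=catalog labels used above:
# Expect the app container plus the injected istio-proxy sidecar in each pod
kubectl get pod -n istioinaction -l app=webapp -o jsonpath='{.items[0].spec.containers[*].name}{"\n"}'
kubectl get pod -n istioinaction -l app=catalog -o jsonpath='{.items[0].spec.containers[*].name}{"\n"}'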
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: netshoot
spec:
  containers:
  - name: netshoot
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF

kubectl exec -it netshoot -- curl -s http://catalog.istioinaction/items/1 | jq
kubectl exec -it netshoot -- curl -s http://webapp.istioinaction/api/catalog/items/1 | jq 
kubectl port-forward -n istioinaction deploy/webapp 8080:8080

open http://localhost:8080
Source: Istio Hands-on Study, 1st cohort
docker exec -it myk8s-control-plane istioctl proxy-status
docker exec -it myk8s-control-plane istioctl ps 
cat ch2/ingress-gateway.yaml 
kubectl -n istioinaction apply -f ch2/ingress-gateway.yaml 
kubectl get gw,vs -n istioinaction 
docker exec -it myk8s-control-plane istioctl proxy-status 
ISTIOIGW=istio-ingressgateway-996bc6bb6-vwdrc.istio-system
WEBAPP=webapp-7685bcb84-4p9hs.istioinaction

Check istioctl proxy-config
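The replica-set hashes in the pod names above are specific to my cluster; a sketch for looking them up dynamically instead, assuming the default app=istio-ingressgateway and app=webapp labels:
# Look up the current pod names instead of hard-coding the hash suffixes
ISTIOIGW=$(kubectl get pod -n istio-system -l app=istio-ingressgateway -o jsonpath='{.items[0].metadata.name}').istio-system
WEBAPP=$(kubectl get pod -n istioinaction -l app=webapp -o jsonpath='{.items[0].metadata.name}').istioinaction
echo $ISTIOIGW $WEBAPP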
docker exec -it myk8s-control-plane istioctl proxy-config all $ISTIOIGW

docker exec -it myk8s-control-plane istioctl proxy-config all $WEBAPP

docker exec -it myk8s-control-plane istioctl proxy-config listener $ISTIOIGW
docker exec -it myk8s-control-plane istioctl proxy-config route $ISTIOIGW
docker exec -it myk8s-control-plane istioctl proxy-config cluster $ISTIOIGW
docker exec -it myk8s-control-plane istioctl proxy-config endpoint $ISTIOIGW
docker exec -it myk8s-control-plane istioctl proxy-config log $ISTIOIGW 

docker exec -it myk8s-control-plane istioctl proxy-config listener $WEBAPP
docker exec -it myk8s-control-plane istioctl proxy-config route $WEBAPP
docker exec -it myk8s-control-plane istioctl proxy-config cluster $WEBAPP
docker exec -it myk8s-control-plane istioctl proxy-config endpoint $WEBAPP
docker exec -it myk8s-control-plane istioctl proxy-config log $WEBAPP 


docker exec -it myk8s-control-plane istioctl proxy-config secret $ISTIOIGW
docker exec -it myk8s-control-plane istioctl proxy-config secret $WEBAPP 
docker exec -it myk8s-control-plane istioctl proxy-config routes deploy/istio-ingressgateway.istio-system 
kubectl get svc,ep -n istio-system istio-ingressgateway
kubectl patch svc -n istio-system istio-ingressgateway -p '{"spec": {"type": "NodePort", "ports": [{"port": 80, "targetPort": 8080, "nodePort": 30000}]}}'
kubectl get svc -n istio-system istio-ingressgateway 
kubectl patch svc -n istio-system istio-ingressgateway -p '{"spec":{"externalTrafficPolicy": "Local"}}'
kubectl describe svc -n istio-system istio-ingressgateway 
kubectl stern -l app=webapp -n istioinaction
kubectl stern -l app=catalog -n istioinaction 

curl -s http://127.0.0.1:30000/api/catalog | jq
curl -s http://127.0.0.1:30000/api/catalog/items/1 | jq
curl -s http://127.0.0.1:30000/api/catalog -I | head -n 1 
while true; do curl -s http://127.0.0.1:30000/api/catalog/items/1 ; sleep 1; echo; done
while true; do curl -s http://127.0.0.1:30000/api/catalog -I | head -n 1 ; date "+%Y-%m-%d %H:%M:%S" ; sleep 1; echo; done
while true; do curl -s http://127.0.0.1:30000/api/catalog -I | head -n 1 ; date "+%Y-%m-%d %H:%M:%S" ; sleep 0.5; echo; done

kubectl patch svc -n istio-system prometheus -p '{"spec": {"type": "NodePort", "ports": [{"port": 9090, "targetPort": 9090, "nodePort": 30001}]}}'
kubectl patch svc -n istio-system grafana -p '{"spec": {"type": "NodePort", "ports": [{"port": 3000, "targetPort": 3000, "nodePort": 30002}]}}'
kubectl patch svc -n istio-system kiali -p '{"spec": {"type": "NodePort", "ports": [{"port": 20001, "targetPort": 20001, "nodePort": 30003}]}}'
kubectl patch svc -n istio-system tracing -p '{"spec": {"type": "NodePort", "ports": [{"port": 80, "targetPort": 16686, "nodePort": 30004}]}}'

open http://127.0.0.1:30001
open http://127.0.0.1:30002 
open http://127.0.0.1:30003 
open http://127.0.0.1:30004 
service : webapp.istioinaction.svc.cluster.local


Click Applications
Select namespace: istioinaction
Click the webapp application


Intentionally reproduce 500 errors in the catalog service, then test improving resiliency with retries.
#!/usr/bin/env bash
if [ $1 == "500" ]; then
  POD=$(kubectl get pod | grep catalog | awk '{ print $1 }')
  echo $POD
  for p in $POD; do
    if [ ${2:-"false"} == "delete" ]; then
      echo "Deleting 500 rule from $p"
      kubectl exec -c catalog -it $p -- curl -X POST -H "Content-Type: application/json" -d '{"active": false, "type": "500"}' localhost:3000/blowup
    else
      PERCENTAGE=${2:-100}
      kubectl exec -c catalog -it $p -- curl -X POST -H "Content-Type: application/json" -d '{"active": true, "type": "500", "percentage": '"${PERCENTAGE}"'}' localhost:3000/blowup
      echo ""
    fi
  done
fi

docker exec -it myk8s-control-plane bash
kubectl config set-context $(kubectl config current-context) --namespace=istioinaction
cat /etc/kubernetes/admin.conf 
Run the test so that 500 errors occur 100% of the time
cd /istiobook/bin/
chmod +x chaos.sh
./chaos.sh 500 100




./chaos.sh 500 50



cat <<EOF | kubectl -n istioinaction apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog
spec:
  hosts:
  - catalog
  http:
  - route:
    - destination:
        host: catalog
    retries:
      attempts: 3
      retryOn: 5xx
      perTryTimeout: 2s
EOF

kubectl get vs -n istioinaction
Because Istio itself retries on failure, the edge in Kiali turns from red to yellow
The share of 200 OK responses out of all requests increases
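To confirm the retries from the proxy side, one option (a sketch, assuming the Envoy admin stats are reachable through pilot-agent and that the counter names match this Istio version) is to grep the retry counters on the webapp sidecar:
# Inspect Envoy's retry-related counters on the webapp sidecar
kubectl exec -n istioinaction deploy/webapp -c istio-proxy -- pilot-agent request GET stats | grep retry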



Set things up so that only a specific group of users is routed to the new deployment, giving staged access to the release.
Add imageUrl (deploy catalog v2 with SHOW_IMAGE=true)
cat <<EOF | kubectl -n istioinaction apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: catalog
    version: v2
  name: catalog-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: catalog
      version: v2
  template:
    metadata:
      labels:
        app: catalog
        version: v2
    spec:
      containers:
      - env:
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: SHOW_IMAGE
          value: "true"
        image: istioinaction/catalog:latest
        imagePullPolicy: IfNotPresent
        name: catalog
        ports:
        - containerPort: 3000
          name: http
          protocol: TCP
        securityContext:
          privileged: false
EOF
docker exec -it myk8s-control-plane bash
----------------------------------------
cd /istiobook/bin/
./chaos.sh 500 delete
exit
----------------------------------------

kubectl get deploy,pod,svc,ep -n istioinaction
kubectl get gw,vs -n istioinaction 


#
cat <<EOF | kubectl -n istioinaction apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: catalog
spec:
  host: catalog
  subsets:
  - name: version-v1
    labels:
      version: v1
  - name: version-v2
    labels:
      version: v2
EOF

kubectl get gw,vs,dr -n istioinaction
while true; do curl -s http://127.0.0.1:30000/api/catalog | jq; date "+%Y-%m-%d %H:%M:%S" ; sleep 1; echo; done

Confirm that requests are distributed across v1 and v2
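To roughly quantify the split, a sketch that counts how many of 20 responses contain the imageUrl field, which only catalog v2 returns because of SHOW_IMAGE=true:
# Responses containing "imageUrl" came from catalog v2; tally the 0/1 hits over 20 requests
for i in $(seq 1 20); do curl -s http://127.0.0.1:30000/api/catalog | grep -c imageUrl; done | sort | uniq -c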


cat <<EOF | kubectl -n istioinaction apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog
spec:
  hosts:
  - catalog
  http:
  - route:
    - destination:
        host: catalog
        subset: version-v1
EOF

while true; do curl -s http://127.0.0.1:30000/api/catalog | jq; date "+%Y-%m-%d %H:%M:%S" ; sleep 1; echo; done

The imageUrl key is no longer exposed (all traffic now goes to v1)

cat <<EOF | kubectl -n istioinaction apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog
spec:
  hosts:
  - catalog
  http:
  - match:
    - headers:
        x-dark-launch:
          exact: "v2"
    route:
    - destination:
        host: catalog
        subset: version-v2
  - route:
    - destination:
        host: catalog
        subset: version-v1
EOF

kubectl get gw,vs,dr -n istioinaction
while true; do curl -s http://127.0.0.1:30000/api/catalog | jq; date "+%Y-%m-%d %H:%M:%S" ; sleep 1; echo; done

while true; do curl -s http://127.0.0.1:30000/api/catalog -H "x-dark-launch: v2" | jq; date "+%Y-%m-%d %H:%M:%S" ; sleep 1; echo; done

while true; do curl -s http://127.0.0.1:30000/api/catalog | jq; date "+%Y-%m-%d %H:%M:%S" ; sleep 1; echo; done
while true; do curl -s http://127.0.0.1:30000/api/catalog -H "x-dark-launch: v2" | jq; date "+%Y-%m-%d %H:%M:%S" ; sleep 1; echo; done 
kubectl delete deploy,svc,gw,vs,dr --all -n istioinaction && kind delete cluster --name myk8s