November 3, 2025
Mastering Ingress, HPA, RBAC, StatefulSet, DaemonSet, and Monitoring!
On Day 3 we learned features essential for operations: Secrets, Rolling Updates, PV/PVC, Resource Limits, and Health Checks. On Day 4 we move on to the advanced patterns you really need in a Production environment.
What we learned today:
1. Ingress and HTTP routing (domain-based routing with TLS termination)
2. Real-time autoscaling with HPA (watched 1 Pod → 7 Pods live!)
3. Setting up kubectl access for an external developer with RBAC
4. A full understanding of StatefulSet + Headless Service
5. Automatic deployment to every node with DaemonSet
6. Monitoring with kube-ops-view
Problems with the NodePort approach from Day 2:
URLs look like http://172.30.1.43:30456/ — a raw IP plus a random high port.
With Ingress you get:
- Clean domain-based URLs (http://myapp.local)
- Path-based routing (/app1, /app2)

Installing the NGINX Ingress Controller:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/baremetal/deploy.yaml
Actual output:
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
...
service/ingress-nginx-controller created # created as NodePort
deployment.apps/ingress-nginx-controller created
On cloud providers (AWS, GCP) the LoadBalancer type works automatically, but on our bare-metal cluster we use NodePort.
$ kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP PORT(S)
ingress-nginx-controller NodePort 10.102.172.125 80:32456/TCP,443:32756/TCP
ingress-nginx-controller-admission ClusterIP 10.96.66.231 443/TCP
Key point — a single Ingress routing by path:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-based-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.local
    http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: app1
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: app2
            port:
              number: 80
Add an /etc/hosts entry:
172.30.1.43 myapp.local
Test results:
$ curl http://myapp.local:32456/app1
Hello from App1!
$ curl http://myapp.local:32456/app2
Hello from App2!
🤔 My question: "For TLS — I don't have an SSL certificate or a domain right now. Can I do this with a free certificate?"
Answer: Yes, you can test with a self-signed certificate!
# generate a self-signed certificate
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /tmp/tls.key -out /tmp/tls.crt \
-subj "/CN=myapp.local/O=myapp"
# create the Secret
kubectl create secret tls myapp-tls \
--cert=/tmp/tls.crt \
--key=/tmp/tls.key
Adding TLS to the Ingress:
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - myapp.local
    secretName: myapp-tls
  rules:
  - host: myapp.local
    ...
HTTPS test (the -k flag is needed because the cert is self-signed):
$ curl -k https://myapp.local:32756/
Hello from App1!
Host-based routing (a different service per domain):
spec:
  ingressClassName: nginx
  rules:
  - host: app1.local   # a different service per domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1
            port:
              number: 80
  - host: app2.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app2
            port:
              number: 80
Installing metrics-server (required for HPA metrics):
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
Problem: the Pod gets stuck at 0/1 Ready
$ kubectl get pod -n kube-system -l k8s-app=metrics-server
NAME READY STATUS RESTARTS AGE
metrics-server-5f9f776df5-abc12 0/1 Running 0 2m
Cause: on bare metal, the kubelet has no valid TLS certificate
Fix: add the --kubelet-insecure-tls flag
kubectl patch deployment metrics-server -n kube-system --type='json' \
-p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'
Check after 30-60 seconds:
$ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
cpu1 483m 4% 3145Mi 41%
cpu2 234m 2% 2891Mi 17%
gpu1 178m 1% 2654Mi 16%
Deploying the test application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: php-apache
  template:
    metadata:
      labels:
        app: php-apache
    spec:
      containers:
      - name: php-apache
        image: registry.k8s.io/hpa-example
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 200m   # HPA computes utilization against this request
          limits:
            cpu: 500m
Create the HPA (targeting 50% CPU):
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache 0%/50% 1 10 1 10s
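The imperative `kubectl autoscale` command above can also be written declaratively. A sketch of the equivalent manifest using the autoscaling/v2 API (field layout per that API; values copied from the command):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:          # the Deployment this HPA controls
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # same as --cpu-percent=50
```

Keeping the HPA in a file makes it versionable alongside the Deployment.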
Generate load:
kubectl run load-generator --image=busybox:1.28 \
-- /bin/sh -c "while true; do wget -q -O- http://php-apache; done"
Watching the scaling in real time (checked every 30 seconds):
# t=30s
$ kubectl get hpa
NAME TARGETS MINPODS MAXPODS REPLICAS
php-apache 200%/50% 1 10 1
$ kubectl get pods -l app=php-apache
NAME READY STATUS RESTARTS AGE
php-apache-79544c9bd9-abc12 1/1 Running 0 5m
# t=60s - scale-up begins!
$ kubectl get hpa
NAME TARGETS MINPODS MAXPODS REPLICAS
php-apache 200%/50% 1 10 4 # scaled up to 4
$ kubectl get pods -l app=php-apache
NAME READY STATUS RESTARTS AGE
php-apache-79544c9bd9-abc12 1/1 Running 0 5m30s
php-apache-79544c9bd9-def34 0/1 ContainerCreating 0 3s
php-apache-79544c9bd9-ghi56 0/1 ContainerCreating 0 3s
php-apache-79544c9bd9-jkl78 0/1 ContainerCreating 0 3s
# t=90s - more Pods created
$ kubectl get hpa
NAME TARGETS MINPODS MAXPODS REPLICAS
php-apache 180%/50% 1 10 7 # scaled up to 7!
$ kubectl get pods -l app=php-apache
NAME READY STATUS RESTARTS AGE NODE
php-apache-79544c9bd9-abc12 1/1 Running 0 6m cpu1
php-apache-79544c9bd9-def34 1/1 Running 0 33s cpu2
php-apache-79544c9bd9-ghi56 1/1 Running 0 33s gpu1
php-apache-79544c9bd9-jkl78 1/1 Running 0 33s cpu1
php-apache-79544c9bd9-mno90 1/1 Running 0 3s cpu2
php-apache-79544c9bd9-pqr12 1/1 Running 0 3s gpu1
php-apache-79544c9bd9-stu34 1/1 Running 0 3s cpu2
# t=120s - CPU stabilized
$ kubectl get hpa
NAME TARGETS MINPODS MAXPODS REPLICAS
php-apache 47%/50% 1 10 7 # target reached!
$ kubectl top pod -l app=php-apache
NAME CPU(cores) MEMORY(bytes)
php-apache-79544c9bd9-abc12 95m 10Mi
php-apache-79544c9bd9-def34 93m 10Mi
php-apache-79544c9bd9-ghi56 91m 10Mi
php-apache-79544c9bd9-jkl78 94m 10Mi
php-apache-79544c9bd9-mno90 92m 10Mi
php-apache-79544c9bd9-pqr12 90m 10Mi
php-apache-79544c9bd9-stu34 93m 10Mi
Result: CPU utilization settled just below the 50% target, spread evenly across the 7 Pods.
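The replica counts observed above follow the HPA scaling rule from the Kubernetes documentation: desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue). A minimal Python sketch (the clamping to min/max bounds is our reading of how the limits apply; the real controller also applies a tolerance and readiness handling):

```python
import math

def desired_replicas(current: int, current_util: float, target_util: float,
                     min_replicas: int = 1, max_replicas: int = 10) -> int:
    """HPA core formula: ceil(currentReplicas * currentMetric / targetMetric),
    clamped to the [minReplicas, maxReplicas] range."""
    desired = math.ceil(current * current_util / target_util)
    return max(min_replicas, min(max_replicas, desired))

# t=30s: 1 replica at 200% CPU against a 50% target -> scale to 4
print(desired_replicas(1, 200, 50))
# t=120s: 7 replicas at 47% -> stays at 7 (already at or below target)
print(desired_replicas(7, 47, 50))
```

This matches the jump from 1 to 4 replicas we saw at t=60s and the steady state of 7 at t=120s.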
After deleting the load generator:
kubectl delete pod load-generator
After 5 minutes (the default scale-down delay):
$ kubectl get hpa
NAME TARGETS MINPODS MAXPODS REPLICAS
php-apache 0%/50% 1 10 7 # still 7
# after 5 minutes
$ kubectl get hpa
NAME TARGETS MINPODS MAXPODS REPLICAS
php-apache 0%/50% 1 10 1 # back down to 1!
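The 5-minute delay we waited through is the HPA's default downscale stabilization window. It can be tuned through the `behavior` field of the autoscaling/v2 API; a sketch (the policy values here are illustrative, not from this session):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  ...
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300  # default: 300s (5 minutes)
      policies:
      - type: Pods
        value: 2             # remove at most 2 Pods
        periodSeconds: 60    # per 60-second period
```

A longer window avoids flapping when load is spiky; a shorter one frees resources faster.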
Answer: RBAC is essential in these three situations:
When a Pod accesses the Kubernetes API
When external developers use kubectl
When a CI/CD system deploys
Answer: Yes, exactly! The developer's machine is not a worker node — it only needs the kubectl client installed.
1. Generate a private key:
openssl genrsa -out john.key 2048
2. Generate a CSR (Certificate Signing Request):
openssl req -new -key john.key -out john.csr \
-subj "/CN=john/O=dev-team"
3. Sign it with the Kubernetes CA:
sudo openssl x509 -req -in john.csr \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial -out john.crt -days 365
Actual output:
Certificate request self-signature ok
subject=CN = john, O = dev-team
4. Create the kubeconfig file:
~/.kube/config (on the developer's machine):
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /Users/john/.kube/certs/ca.crt
    server: https://172.30.1.43:6443
  name: production-cluster
contexts:
- context:
    cluster: production-cluster
    user: john
    namespace: dev
  name: john@production
current-context: john@production
users:
- name: john
  user:
    client-certificate: /Users/john/.kube/certs/john.crt
    client-key: /Users/john/.kube/certs/john.key
5. Test permissions (default = deny everything):
$ kubectl get pods
Error from server (Forbidden): pods is forbidden: User "john" cannot list resource "pods" in API group "" in the namespace "default"
$ kubectl get pods -n dev
Error from server (Forbidden): pods is forbidden: User "john" cannot list resource "pods" in API group "" in the namespace "dev"
Kubernetes' default policy: deny by default!
6. Grant view permission on the dev namespace:
kubectl create namespace dev
kubectl create rolebinding john-view \
--clusterrole=view \
--user=john \
--namespace=dev
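The same binding can be written declaratively — a sketch equivalent to the imperative command above:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: john-view
  namespace: dev          # the binding only applies inside dev
subjects:
- kind: User
  name: john              # must match the certificate's CN
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view              # built-in read-only role
  apiGroup: rbac.authorization.k8s.io
```

Note that a RoleBinding can reference a ClusterRole; the ClusterRole defines the permissions once, and the binding scopes them to a single namespace.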
Check the permissions:
$ kubectl auth can-i get pods -n dev --as=john
yes
$ kubectl auth can-i get pods -n default --as=john
no
Testing as john:
$ kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 2m
$ kubectl get pods -n default
Error from server (Forbidden): pods is forbidden: User "john" cannot list resource "pods"
Files to hand over to the developer:
developer-package/
├── ca.crt            # cluster CA certificate
├── john.crt          # developer certificate
├── john.key          # developer private key (never share!)
├── kubeconfig-sample # sample kubeconfig
└── README.md         # setup guide
Developer machine requirements:
I dug deep to answer this question!
A normal Service:
apiVersion: v1
kind: Service
metadata:
  name: normal-service
spec:
  clusterIP: 10.96.100.50   # a virtual IP is assigned
  selector:
    app: myapp
  ports:
  - port: 80
DNS query result:
$ nslookup normal-service
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: normal-service
Address 1: 10.96.100.50 # only the single Service IP is returned
→ kube-proxy load-balances the traffic
→ you don't know which Pod you'll hit (effectively random)
A Headless Service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None   # ← Headless!
  selector:
    app: nginx-stateful
  ports:
  - port: 80
DNS query result:
$ nslookup nginx-headless
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: nginx-headless
Address 1: 10.244.102.162 # Pod-0 IP
Address 2: 10.244.5.234 # Pod-1 IP
Address 3: 10.244.184.94 # Pod-2 IP
→ All Pod IPs are returned directly!
→ Individual per-Pod DNS records are also provided:
$ nslookup nginx-stateful-0.nginx-headless
Name: nginx-stateful-0.nginx-headless
Address 1: 10.244.102.162 # only Pod-0's IP is returned
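These per-Pod names follow the fixed pattern `<statefulset>-<ordinal>.<headless-service>.<namespace>.svc.<cluster-domain>`. A small Python sketch of how a client could enumerate the replica addresses (the helper name is ours, for illustration):

```python
def statefulset_fqdns(statefulset: str, service: str, replicas: int,
                      namespace: str = "default",
                      cluster_domain: str = "cluster.local") -> list[str]:
    """Build the stable DNS names of each StatefulSet Pod behind a
    headless Service: <name>-<ordinal>.<svc>.<ns>.svc.<domain>."""
    return [
        f"{statefulset}-{i}.{service}.{namespace}.svc.{cluster_domain}"
        for i in range(replicas)
    ]

print(statefulset_fqdns("nginx-stateful", "nginx-headless", 3)[0])
# nginx-stateful-0.nginx-headless.default.svc.cluster.local
```

Because the ordinals are stable, these names can be written into configuration up front — exactly what the MongoDB example below relies on.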
A real MongoDB Replica Set example:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb-headless   # ← linked to the Headless Service
  replicas: 3
  ...
MongoDB initialization:
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongodb-0.mongodb-headless:27017" },  // Primary
    { _id: 1, host: "mongodb-1.mongodb-headless:27017" },  // Secondary
    { _id: 2, host: "mongodb-2.mongodb-headless:27017" }   // Secondary
  ]
})
Connecting from an application:
const uri = "mongodb://mongodb-0.mongodb-headless:27017,mongodb-1.mongodb-headless:27017,mongodb-2.mongodb-headless:27017/mydb?replicaSet=rs0"
What if you used a normal Service instead? Only the single address mongodb-service:27017 would be reachable — you could not address individual members, which a replica set requires.

The full demo manifest:
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None
  selector:
    app: nginx-stateful
  ports:
  - port: 80
    name: web
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-stateful
spec:
  serviceName: nginx-headless
  replicas: 3
  selector:
    matchLabels:
      app: nginx-stateful
  template:
    metadata:
      labels:
        app: nginx-stateful
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        emptyDir: {}
The actual creation sequence (2-second intervals):
$ kubectl apply -f statefulset.yaml
# t=0s
--- 15:00:00 ---
NAME READY STATUS RESTARTS AGE
nginx-stateful-0 0/1 ContainerCreating 0 0s
# t=2s
--- 15:00:02 ---
NAME READY STATUS RESTARTS AGE
nginx-stateful-0 1/1 Running 0 2s # Pod-0 Ready!
nginx-stateful-1 0/1 Pending 0 0s # Pod-1 starts creating
# t=4s
--- 15:00:04 ---
NAME READY STATUS RESTARTS AGE
nginx-stateful-0 1/1 Running 0 4s
nginx-stateful-1 1/1 Running 0 2s # Pod-1 Ready!
nginx-stateful-2 0/1 Pending 0 0s # Pod-2 starts creating
# t=6s
--- 15:00:06 ---
NAME READY STATUS RESTARTS AGE
nginx-stateful-0 1/1 Running 0 6s
nginx-stateful-1 1/1 Running 0 4s
nginx-stateful-2 1/1 Running 0 2s # Pod-2 Ready!
Sequential creation! Pod-1 does not start until Pod-0 is Running!
Checking the Pod distribution:
$ kubectl get pods -o wide -l app=nginx-stateful
NAME READY STATUS RESTARTS AGE IP NODE
nginx-stateful-0 1/1 Running 0 2m 10.244.102.162 cpu2
nginx-stateful-1 1/1 Running 0 2m 10.244.5.234 gpu1
nginx-stateful-2 1/1 Running 0 2m 10.244.184.94 cpu1
Verifying the stable network identity:
$ kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
nslookup nginx-stateful-0.nginx-headless
Name: nginx-stateful-0.nginx-headless.default.svc.cluster.local
Address 1: 10.244.102.162
Even after the Pod restarts, the name nginx-stateful-0.nginx-headless stays the same (only the IP may change).
Answer: Correct — a DaemonSet does not need a Headless Service!
| Item | StatefulSet | DaemonSet |
|---|---|---|
| Pod count | set via replicas (e.g. 3) | one per node, automatic |
| Pod name | sequential (mongodb-0, -1, -2) | random suffix (fluentd-abc) |
| Placement | any node | exactly 1 per node |
| Stable identity | ✅ needed | ❌ not needed |
| Pod-to-Pod communication | ✅ needed (DB cluster) | ❌ not needed (independent) |
| Headless Service | ✅ required | ❌ not needed |
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.14-1
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
Deployment result:
$ kubectl get daemonset -n kube-system fluentd
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
fluentd 3 3 3 3 3 <none> 1m
$ kubectl get pods -n kube-system -l app=fluentd -o wide
NAME READY STATUS RESTARTS AGE IP NODE
fluentd-ntq9z 1/1 Running 0 1m 10.244.5.237 gpu1
fluentd-pnx9h 1/1 Running 0 1m 10.244.102.166 cpu2
fluentd-vq7wd 1/1 Running 0 1m 10.244.184.95 cpu1
Automatically deployed: one Pod on each of the 3 nodes!
# add a label to the gpu1 node
kubectl label nodes gpu1 disktype=ssd
# a DaemonSet that deploys only to gpu1
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter-ssd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-exporter-ssd
  template:
    metadata:
      labels:
        app: node-exporter-ssd   # must match the selector
    spec:
      nodeSelector:
        disktype: ssd   # ← only nodes labeled disktype=ssd!
      containers:
      - name: node-exporter
        image: prom/node-exporter:v1.3.1
Result:
$ kubectl get daemonset -n kube-system node-exporter-ssd
NAME DESIRED CURRENT READY NODE SELECTOR
node-exporter-ssd 1 1 1 disktype=ssd
$ kubectl get pods -n kube-system -l app=node-exporter-ssd -o wide
NAME READY STATUS RESTARTS AGE NODE
node-exporter-ssd-wqjvn 1/1 Running 0 30s gpu1
Add the label to cpu2 as well:
kubectl label nodes cpu2 disktype=ssd
A Pod is created there automatically!
$ kubectl get pods -n kube-system -l app=node-exporter-ssd -o wide
NAME READY STATUS RESTARTS AGE NODE
node-exporter-ssd-lf8g2 1/1 Running 0 3s cpu2 # auto-created!
node-exporter-ssd-wqjvn 1/1 Running 0 36s gpu1
The first install from the official YAML hit a Redis connection error:
redis.exceptions.ConnectionError: Error -2 connecting to kube-ops-view-redis:6379. Name or service not known.
It was an incomplete install that did not include Redis.
# install Helm
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# add the geek-cookbook repo
helm repo add geek-cookbook https://geek-cookbook.github.io/charts/
# install kube-ops-view (includes Redis)
helm install kube-ops-view geek-cookbook/kube-ops-view \
--version 1.2.2 \
--set service.main.type=NodePort,service.main.ports.http.nodePort=30005 \
--set env.TZ="Asia/Seoul" \
--namespace kube-system
Verifying the install:
$ kubectl get deploy,pod,svc,ep -n kube-system -l app.kubernetes.io/instance=kube-ops-view
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kube-ops-view 1/1 1 1 29s
NAME READY STATUS RESTARTS AGE
pod/kube-ops-view-657dbc6cd8-g7s7t 1/1 Running 0 29s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-ops-view NodePort 10.100.118.167 <none> 8080:30005/TCP 29s
NAME ENDPOINTS AGE
endpoints/kube-ops-view 10.244.102.168:8080 29s
Web access:
http://172.30.1.43:30005
What you can see in real time:
Cause: bare-metal clusters lack valid kubelet TLS certificates
Fix: add the --kubelet-insecure-tls flag
Cause: the official YAML does not include Redis
Fix: use the Helm chart (dependencies are installed automatically)
Cause: the Service selector did not match the Deployment labels
Fix: match the labels exactly (application: kube-ops-view)
Cause: no StorageClass on bare metal
Fix: use emptyDir (fine for a demo)
We mastered advanced patterns on Day 4. Day 5 will cover additional features needed for Production operations:
Let's complete a production-ready Kubernetes cluster!
Node configuration:
Versions: