[K8s] Using Pods - 3

Aiden · June 27, 2021

K8S Study


1. Checking Status

1-1. Pod status, termination, and restartPolicy

Pod status

  • Pending : the request to create the pod has been accepted by the API server, but for some reason the pod has not actually been created yet
  • Running : all of the pod's containers have been created and the pod is running normally
  • Completed : the pod ran and exited normally; this corresponds to the container's init process (PID 1) returning exit code 0
  • Error : the pod exited abnormally; this corresponds to the container's init process returning a non-zero exit code
  • Terminating : the pod is waiting to be deleted or evicted (Eviction)

Completed, Error

  • Linux processes return an exit code when they terminate
  • The process inside a container does as well; the pod's status is set to Completed or Error depending on the value returned by the container's init process (PID 1)
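This exit-code behavior is plain Linux and can be checked locally without a cluster; a minimal sketch using `sh` and the `$?` variable:

```shell
# Exit code 0 -> a pod running this command would end up Completed
sh -c 'sleep 1 && exit 0'
echo "exit code: $?"

# Non-zero exit code -> the pod would end up Error
sh -c 'sleep 1 && exit 1'
echo "exit code: $?"
```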

YAML template (restartPolicy: Always)

  • completed.yaml: sleeps for 5 seconds, then exits with code 0
cat <<EOF > completed.yaml
apiVersion: v1
kind: Pod
metadata:
  name: completed-pod
spec:
  containers:
    - name: completed-pod
      image: busybox
      command: ["sh"]
      args: ["-c", "sleep 5 && exit 0"]
EOF
  • Create the pod and watch it
[root@master aiden (|kube:default)]# kubectl apply -f completed.yaml && kubectl get pod -w
pod/completed-pod created
NAME            READY   STATUS              RESTARTS   AGE
completed-pod   0/1     ContainerCreating   0          0s
completed-pod   0/1     ContainerCreating   0          1s
completed-pod   1/1     Running             0          4s
completed-pod   0/1     Completed           0          9s
completed-pod   1/1     Running             1          12s
completed-pod   0/1     Completed           1          17s
completed-pod   0/1     CrashLoopBackOff    1          28s
completed-pod   1/1     Running             2          31s
completed-pod   0/1     Completed           2          36s
completed-pod   0/1     CrashLoopBackOff    2          48s
  • Check the restartPolicy
[root@master aiden (|kube:default)]# kubectl get pod completed-pod -o yaml | grep restartPolicy
  restartPolicy: Always

⇒ As the number of failures grows, the retry interval increases exponentially, so the pod spends progressively longer in the CrashLoopBackOff state
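The doubling can be sketched with a small loop (an illustration only, not kubelet code; the kubelet's back-off starts at 10s, doubles after each crash, and is capped at 5 minutes):

```shell
delay=10   # seconds; assumed initial back-off
for crash in 1 2 3 4 5 6; do
  echo "crash #$crash -> wait ${delay}s before the next restart"
  delay=$((delay * 2))
  if [ "$delay" -gt 300 ]; then delay=300; fi   # cap at 5 minutes
done
```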

YAML template (restartPolicy: OnFailure)

  • onfailure.yaml: restartPolicy: OnFailure; sleeps for 5 seconds, then exits with code 1

cat <<EOF > onfailure.yaml
apiVersion: v1
kind: Pod
metadata:
  name: completed-pod
spec:
  restartPolicy: OnFailure
  containers:
    - name: completed-pod
      image: busybox
      command: ["sh"]
      args: ["-c", "sleep 5 && exit 1"]
EOF
  • Create the pod and watch it
[root@master aiden (|kube:default)]# kubectl apply -f onfailure.yaml && kubectl get pod -w
pod/completed-pod created
NAME            READY   STATUS              RESTARTS   AGE
completed-pod   0/1     ContainerCreating   0          0s
completed-pod   0/1     ContainerCreating   0          1s
completed-pod   1/1     Running             0          3s
completed-pod   0/1     Error               0          9s
completed-pod   1/1     Running             1          12s
completed-pod   0/1     Error               1          17s
completed-pod   0/1     CrashLoopBackOff    1          27s
completed-pod   1/1     Running             2          31s
completed-pod   0/1     Error               2          35s
completed-pod   0/1     CrashLoopBackOff    2          46s

2. Probes

  • livenessProbe : checks whether the application inside the container is alive (liveness).
  • readinessProbe : checks whether the application inside the container is ready (readiness) to handle user requests.
  • startupProbe : a container that is slow to start can be killed by the liveness and readiness probes before it finishes initializing; a startup probe holds those checks off until the application has started.
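A startup probe is declared alongside the other probes; the sketch below is illustrative (the pod name, image, and thresholds are assumptions, not part of the original demo):

```shell
cat << EOF > startupprobe.yaml
apiVersion: v1
kind: Pod
metadata:
  name: startupprobe   # hypothetical example pod
spec:
  containers:
  - name: slow-app
    image: nginx
    startupProbe:             # liveness checks are suspended until this succeeds
      httpGet:
        port: 80
        path: /
      failureThreshold: 30    # allow up to 30 x 10s = 300s of startup time
      periodSeconds: 10
    livenessProbe:            # takes over once the startup probe has passed
      httpGet:
        port: 80
        path: /
EOF
```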

2-1. LivenessProbe

Health check methods

  • httpGet : sends an HTTP request to check the state. The check is considered failed if the response status code is not in the 2xx or 3xx range.
  • tcpSocket : checks whether a TCP connection can be established.
  • exec : runs a command inside the container; the check is considered failed if the command returns a non-zero exit code.
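The httpGet rule above (roughly, status codes in the 200-399 range pass and everything else fails) can be mimicked in plain shell; `probe_ok` below is a hypothetical helper, not kubelet code:

```shell
# probe_ok STATUS -> exit 0 if STATUS would pass an httpGet probe (200 <= STATUS < 400)
probe_ok() {
  [ "$1" -ge 200 ] && [ "$1" -lt 400 ]
}

probe_ok 200 && echo "200 -> probe passes"
probe_ok 301 && echo "301 -> probe passes"
probe_ok 404 || echo "404 -> probe fails"

# Against a live pod you could feed it a real status code, e.g.:
#   probe_ok "$(curl -s -o /dev/null -w '%{http_code}' http://<pod-ip>/index.html)"
```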

YAML template

  • livenessprobe.yaml
cat << EOF > livenessprobe.yaml
apiVersion: v1
kind: Pod
metadata:
  name: livenessprobe
spec:
  containers:
  - name: livenessprobe
    image: nginx
    livenessProbe:
      httpGet:
        port: 80
        path: /index.html
EOF
  • Create the pod and watch the events
[root@master aiden (|kube:default)]# kubectl apply -f livenessprobe.yaml && kubectl get events --sort-by=.metadata.creationTimestamp -w
pod/livenessprobe created
... (omitted) ...
0s          Normal    Scheduled   pod/livenessprobe   Successfully assigned default/livenessprobe to worker1
0s          Normal    Pulling     pod/livenessprobe   Pulling image "nginx"
0s          Normal    Pulled      pod/livenessprobe   Successfully pulled image "nginx" in 2.655459739s
0s          Normal    Created     pod/livenessprobe   Created container livenessprobe
0s          Normal    Started     pod/livenessprobe   Started container livenessprobe

[root@master aiden (|kube:default)]# kubectl describe pod livenessprobe | grep Liveness
    Liveness:       http-get http://:80/index.html delay=0s timeout=1s period=10s #success=1 #failure=3
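Each value in that Liveness line corresponds to a field that can be set explicitly in the manifest; the variant below just spells out the defaults shown above (the pod name is made up for illustration):

```shell
cat << EOF > livenessprobe-tuned.yaml
apiVersion: v1
kind: Pod
metadata:
  name: livenessprobe-tuned   # hypothetical variant of livenessprobe.yaml
spec:
  containers:
  - name: livenessprobe-tuned
    image: nginx
    livenessProbe:
      httpGet:
        port: 80
        path: /index.html
      initialDelaySeconds: 0   # delay=0s
      timeoutSeconds: 1        # timeout=1s
      periodSeconds: 10        # period=10s
      successThreshold: 1      # #success=1
      failureThreshold: 3      # #failure=3 -> restart after three consecutive failures
EOF
```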
  • Delete index.html to make the check fail, then watch the logs
[root@master aiden (|kube:default)]# kubectl exec livenessprobe -- rm /usr/share/nginx/html/index.html && kubectl logs livenessprobe -f
... (omitted) ...
192.168.1.212 - - [27/Jun/2021:10:32:52 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
192.168.1.212 - - [27/Jun/2021:10:33:02 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
192.168.1.212 - - [27/Jun/2021:10:33:12 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
2021/06/27 10:33:22 [error] 31#31: *16 open() "/usr/share/nginx/html/index.html" failed (2: No such file or directory), client: 192.168.1.212, server: localhost, request: "GET /index.html HTTP/1.1", host: "172.16.235.150:80"
192.168.1.212 - - [27/Jun/2021:10:33:22 +0000] "GET /index.html HTTP/1.1" 404 153 "-" "kube-probe/1.21" "-"
2021/06/27 10:33:32 [error] 31#31: *17 open() "/usr/share/nginx/html/index.html" failed (2: No such file or directory), client: 192.168.1.212, server: localhost, request: "GET /index.html HTTP/1.1", host: "172.16.235.150:80"
192.168.1.212 - - [27/Jun/2021:10:33:32 +0000] "GET /index.html HTTP/1.1" 404 153 "-" "kube-probe/1.21" "-"
192.168.1.212 - - [27/Jun/2021:10:33:42 +0000] "GET /index.html HTTP/1.1" 404 153 "-" "kube-probe/1.21" "-"
2021/06/27 10:33:42 [error] 31#31: *18 open() "/usr/share/nginx/html/index.html" failed (2: No such file or directory), client: 192.168.1.212, server: localhost, request: "GET /index.html HTTP/1.1", host: "172.16.235.150:80"
2021/06/27 10:33:42 [notice] 1#1: signal 3 (SIGQUIT) received, shutting down
2021/06/27 10:33:42 [notice] 31#31: gracefully shutting down
2021/06/27 10:33:42 [notice] 31#31: exiting
2021/06/27 10:33:42 [notice] 31#31: exit
2021/06/27 10:33:42 [notice] 1#1: signal 17 (SIGCHLD) received from 31
2021/06/27 10:33:42 [notice] 1#1: worker process 31 exited with code 0
2021/06/27 10:33:42 [notice] 1#1: exit
  • Check the status again
[root@master aiden (|kube:default)]# kubectl logs livenessprobe -f
... (omitted) ...
2021/06/27 10:33:45 [notice] 1#1: using the "epoll" event method
2021/06/27 10:33:45 [notice] 1#1: nginx/1.21.0
2021/06/27 10:33:45 [notice] 1#1: built by gcc 8.3.0 (Debian 8.3.0-6)
2021/06/27 10:33:45 [notice] 1#1: OS: Linux 5.4.0-74-generic
2021/06/27 10:33:45 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2021/06/27 10:33:45 [notice] 1#1: start worker processes
2021/06/27 10:33:45 [notice] 1#1: start worker process 31
192.168.1.212 - - [27/Jun/2021:10:33:52 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
192.168.1.212 - - [27/Jun/2021:10:34:02 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
192.168.1.212 - - [27/Jun/2021:10:34:12 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"

2-2. ReadinessProbe

YAML template

  • readinessprobe-service.yaml
cat << EOF > readinessprobe-service.yaml
apiVersion: v1
kind: Pod
metadata:
  name: readinessprobe
  labels:
    readinessprobe: first
spec:
  containers:
  - name: readinessprobe
    image: nginx
    readinessProbe:
      httpGet:
        port: 80
        path: /
---
apiVersion: v1
kind: Service
metadata:
  name: readinessprobe-service
spec:
  ports:
    - name: nginx
      port: 80
      targetPort: 80
  selector:
    readinessprobe: first
  type: ClusterIP
EOF
  • Create the pod and watch the events
[root@master aiden (|kube:default)]# kubectl apply -f readinessprobe-service.yaml && kubectl get events --sort-by=.metadata.creationTimestamp -w
... (omitted) ...
0s          Normal    Scheduled   pod/readinessprobe   Successfully assigned default/readinessprobe to worker2
0s          Normal    Pulling     pod/readinessprobe   Pulling image "nginx"
0s          Normal    Pulled      pod/readinessprobe   Successfully pulled image "nginx" in 2.593637347s
0s          Normal    Created     pod/readinessprobe   Created container readinessprobe
0s          Normal    Started     pod/readinessprobe   Started container readinessprobe
[root@master aiden (|kube:default)]# kubectl describe pod readinessprobe | grep Readiness
    Readiness:      http-get http://:80/ delay=0s timeout=1s period=10s #success=1 #failure=3
[root@master aiden (|kube:default)]# kubectl get service readinessprobe-service -o wide
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR
readinessprobe-service   ClusterIP   10.103.179.125   <none>        80/TCP    87s   readinessprobe=first
[root@master aiden (|kube:default)]# kubectl get endpoints readinessprobe-service
NAME                     ENDPOINTS          AGE
readinessprobe-service   172.16.189.68:80   113s
  • Test access to the service

[root@master aiden (|kube:default)]# curl 10.103.179.125
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
... (omitted) ...
  • Delete index.html, then check the status
[root@master aiden (|kube:default)]# kubectl exec readinessprobe -- rm /usr/share/nginx/html/index.html && kubectl logs readinessprobe -f
... (omitted) ...
192.168.1.212 - - [27/Jun/2021:11:04:49 +0000] "GET / HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
192.168.1.212 - - [27/Jun/2021:11:04:55 +0000] "GET / HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
192.168.1.212 - - [27/Jun/2021:11:05:05 +0000] "GET / HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
2021/06/27 11:05:15 [error] 30#30: *4 directory index of "/usr/share/nginx/html/" is forbidden, client: 192.168.1.212, server: localhost, request: "GET / HTTP/1.1", host: "172.16.235.151:80"
192.168.1.212 - - [27/Jun/2021:11:05:15 +0000] "GET / HTTP/1.1" 403 153 "-" "kube-probe/1.21" "-"
2021/06/27 11:05:25 [error] 30#30: *5 directory index of "/usr/share/nginx/html/" is forbidden, client: 192.168.1.212, server: localhost, request: "GET / HTTP/1.1", host: "172.16.235.151:80"
192.168.1.212 - - [27/Jun/2021:11:05:25 +0000] "GET / HTTP/1.1" 403 153 "-" "kube-probe/1.21" "-"
192.168.1.212 - - [27/Jun/2021:11:05:35 +0000] "GET / HTTP/1.1" 403 153 "-" "kube-probe/1.21" "-"
2021/06/27 11:05:35 [error] 30#30: *6 directory index of "/usr/share/nginx/html/" is forbidden, client: 192.168.1.212, server: localhost, request: "GET / HTTP/1.1", host: "172.16.235.151:80"
2021/06/27 11:05:45 [error] 30#30: *7 directory index of "/usr/share/nginx/html/" is forbidden, client: 192.168.1.212, server: localhost, request: "GET / HTTP/1.1", host: "172.16.235.151:80"
[root@master aiden (|kube:default)]# kubectl get pod
NAME             READY   STATUS    RESTARTS   AGE
readinessprobe   0/1     Running   0          2m28s
  • Check the service and its endpoint IP
[root@master aiden (|kube:default)]# kubectl get service readinessprobe-service -o wide
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR
readinessprobe-service   ClusterIP   10.103.179.125   <none>        80/TCP    25m   readinessprobe=first

⇒ Because the readiness probe keeps failing, the pod is marked NotReady (0/1) and its IP is removed from the service's endpoint list, so the service no longer routes traffic to it; the service itself and its CLUSTER-IP are unchanged

3. init container

  • An init container performs initialization before the application containers in a pod start.
  • Init containers are declared almost identically to application containers; the difference is that they run before the application containers.
  • If an init container fails, the kubelet restarts it repeatedly until it succeeds.
  • Init containers do not support lifecycle, livenessProbe, readinessProbe, or startupProbe.
  • If multiple init containers are specified for a pod, the kubelet runs them one at a time, in order.
  • They are therefore useful for running setup tasks before the pod's application containers start.

3-1. Creating and checking an init container

YAML template

  • init.yaml
cat << EOF > init.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done"]
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]
EOF

Create and verify

  • Create
[root@master aiden (|kube:default)]# kubectl apply -f init.yaml && kubectl get pod -w
pod/myapp-pod created
NAME        READY   STATUS     RESTARTS   AGE
myapp-pod   0/1     Init:0/2   0          0s
myapp-pod   0/1     Init:0/2   0          1s
myapp-pod   0/1     Init:0/2   0          7s
  • Verify
[root@master aiden (|kube:default)]# kubectl get pod -o wide
NAME        READY   STATUS     RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
myapp-pod   0/1     Init:0/2   0          64s   172.16.235.183   worker1   <none>           <none>
  • Create the myservice service
cat << EOF | kubectl apply -f - && watch -d "kubectl describe pod myapp-pod | grep Events -A 12"
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
EOF
  • Verify (watch)
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  2m26s  default-scheduler  Successfully assigned default/myapp-pod to worker1
  Normal  Pulling    2m25s  kubelet            Pulling image "busybox:1.28"
  Normal  Pulled     2m19s  kubelet            Successfully pulled image "busybox:1.28" in 5.457684006s
  Normal  Created    2m19s  kubelet            Created container init-myservice
  Normal  Started    2m19s  kubelet            Started container init-myservice
  Normal  Pulled     8s     kubelet            Container image "busybox:1.28" already present on machine
  Normal  Created    8s     kubelet            Created container init-mydb
  Normal  Started    8s     kubelet            Started container init-mydb
  • Verify
[root@master aiden (|kube:default)]# kubectl get pod
NAME        READY   STATUS     RESTARTS   AGE
myapp-pod   0/1     Init:1/2   0          2m38s
  • Create the mydb service
cat << EOF | kubectl apply -f - && watch -d "kubectl describe pod myapp-pod | grep Events -A 12"
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377
EOF
  • Verify (watch)
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  4m24s  default-scheduler  Successfully assigned default/myapp-pod to worker1
  Normal  Pulling    4m24s  kubelet            Pulling image "busybox:1.28"
  Normal  Pulled     4m18s  kubelet            Successfully pulled image "busybox:1.28" in 5.457684006s
  Normal  Created    4m18s  kubelet            Created container init-myservice
  Normal  Started    4m18s  kubelet            Started container init-myservice
  Normal  Pulled     2m7s   kubelet            Container image "busybox:1.28" already present on machine
  Normal  Created    2m7s   kubelet            Created container init-mydb
  Normal  Started    2m7s   kubelet            Started container init-mydb
  Normal  Pulled     32s    kubelet            Container image "busybox:1.28" already present on machine
  Normal  Created    32s    kubelet            Created container myapp-container
  • Verify
[root@master aiden (|kube:default)]# kubectl get pod
NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   0          4m35s

Delete

  • Delete the pod
[root@master aiden (|kube:default)]# kubectl delete pod --all
pod "myapp-pod" deleted

4. ConfigMap, Secret

4-1. ConfigMap

  • A single YAML manifest may need different environment-variable values depending on the situation (e.g., development vs. staging) ⇒ without this capability, you would have to manage the environment variables by hand and keep a separate YAML file for each case
  • K8S provides the ConfigMap and Secret objects so that configuration values (e.g., environment variables) can be kept separate from the YAML manifests!
  • A ConfigMap is a resource that stores configuration values as key-value data.
  • You store settings in a ConfigMap, and pods read the values they need from it.

Creating and checking a ConfigMap

  • Create
kubectl create configmap log-level --from-literal LOG_LEVEL=DEBUG
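The same ConfigMap can also be written declaratively; the manifest below should be equivalent to the `--from-literal` command above:

```shell
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: log-level
data:
  LOG_LEVEL: DEBUG   # same key-value pair as the --from-literal flag
EOF
```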
  • Verify
[root@master aiden (|kube:default)]# kubectl get configmap
NAME               DATA   AGE
kube-root-ca.crt   1      11d
log-level          1      6s
[root@master aiden (|kube:default)]# kubectl describe configmaps log-level
Name:         log-level
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
LOG_LEVEL:
----
DEBUG
Events:  <none>
  • Check the YAML
[root@master aiden (|kube:default)]# kubectl get configmaps log-level -o yaml
apiVersion: v1
data:
  LOG_LEVEL: DEBUG
kind: ConfigMap
metadata:
  creationTimestamp: "2021-06-27T14:05:23Z"
  name: log-level
  namespace: default
  resourceVersion: "1493907"

YAML template

  • configmap-pod.yaml: envFrom with configMapRef imports every key-value pair in the ConfigMap as environment variables
cat << EOF > configmap-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
spec:
  containers:
    - name: configmap-pod
      image: busybox
      args: ['tail', '-f', '/dev/null']
      envFrom:
      - configMapRef:
          name: log-level
EOF
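When only a specific key is wanted rather than the whole ConfigMap, `env` with `valueFrom.configMapKeyRef` selects a single key (an alternative sketch; the file and pod name here are made up):

```shell
cat << EOF > configmap-pod-single.yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod-single   # hypothetical variant of configmap-pod
spec:
  containers:
    - name: configmap-pod-single
      image: busybox
      args: ['tail', '-f', '/dev/null']
      env:
        - name: LOG_LEVEL          # variable name inside the container
          valueFrom:
            configMapKeyRef:
              name: log-level      # the ConfigMap to read
              key: LOG_LEVEL       # the key within it
EOF
```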

Create and verify

  • Create
[root@master aiden (|kube:default)]# kubectl apply -f configmap-pod.yaml
pod/configmap-pod created
  • Verify
[root@master aiden (|kube:default)]# kubectl exec configmap-pod -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=configmap-pod
LOG_LEVEL=DEBUG
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
HOME=/root

Delete

  • Delete the pod and the ConfigMap
[root@master aiden (|kube:default)]# kubectl delete pod --all && kubectl delete configmaps log-level
pod "configmap-pod" deleted
configmap "log-level" deleted

4-2. Secret

  • Secrets are used to store sensitive data such as SSH keys and passwords; they are namespaced Kubernetes objects.
  • Secrets and ConfigMaps are used in much the same way: just as you store settings in a ConfigMap, you can store string values in a Secret.
  • When a user views a Secret, the values are not shown in plain text but base64-encoded (this is encoding, not encryption, so it offers little real protection).
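That base64 is an encoding, not encryption, is easy to demonstrate locally; no key is involved in either direction (printf is used instead of echo so no trailing newline sneaks into the encoded value):

```shell
# Encode the password the same way the API server stores it under .data
printf '%s' '1q2w3e4r' | base64
# -> MXEydzNlNHI=

# Decoding requires no secret key at all
printf '%s' 'MXEydzNlNHI=' | base64 -d; echo
# -> 1q2w3e4r
```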

Creating and checking a Secret

  • Check the default secret
[root@master aiden (|kube:default)]# kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-kgbtl   kubernetes.io/service-account-token   3      11d
  • Create a secret
[root@master aiden (|kube:default)]# kubectl create secret generic my-password --from-literal password=1q2w3e4r
secret/my-password created
  • Verify
[root@master aiden (|kube:default)]# kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-kgbtl   kubernetes.io/service-account-token   3      11d
my-password           Opaque                                1      32s
  • Check the my-password secret
[root@master aiden (|kube:default)]# kubectl describe secrets my-password
Name:         my-password
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
password:  8 bytes
[root@master aiden (|kube:default)]# kubectl get secrets my-password -o jsonpath='{.data.password}' ; echo
MXEydzNlNHI=
  • Decode with base64
[root@master aiden (|kube:default)]# echo MXEydzNlNHI= |base64 -d ;echo
1q2w3e4r
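For reference, the same Secret can be created declaratively; with stringData you supply plain text and the API server stores it base64-encoded under .data (a sketch equivalent to the create command above):

```shell
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: my-password
type: Opaque
stringData:              # plain-text input; stored base64-encoded under .data
  password: 1q2w3e4r
EOF
```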

YAML template

  • secret-pod.yaml
cat << EOF > secret-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
spec:
  containers:
    - name: secret-pod
      image: busybox
      args: ['tail', '-f', '/dev/null']
      envFrom:
      - secretRef:
          name: my-password
EOF

Create and verify

  • Create
[root@master aiden (|kube:default)]# kubectl apply -f secret-pod.yaml
pod/secret-pod created
  • Verify
[root@master aiden (|kube:default)]# kubectl get pod -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
secret-pod   1/1     Running   0          37s   172.16.235.182   worker1   <none>           <none>
  • Verify (on the worker node)
root@worker1:~# cat /proc/`ps -ef | grep tail | grep -v auto | awk '{print $2}'`/environ
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=secret-podpassword=1q2w3e4rKUBERNETES_PORT_443_TCP_PORT=443KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1KUBERNETES_SERVICE_HOST=10.96.0.1KUBERNETES_SERVICE_PORT=443KUBERNETES_SERVICE_PORT_HTTPS=443KUBERNETES_PORT=tcp://10.96.0.1:443KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443KUBERNETES_PORT_443_TCP_PROTO=tcpHOME=/rootroot@

⇒ The secret value is exposed in plain text on the worker node
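One common mitigation is to mount the Secret as a file instead of injecting it as an environment variable, so the value no longer shows up in the process environment (a sketch; the pod and volume names are made up, and the file is still readable by root on the node):

```shell
cat << EOF > secret-vol-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-vol-pod   # hypothetical alternative to secret-pod
spec:
  containers:
    - name: secret-vol-pod
      image: busybox
      args: ['tail', '-f', '/dev/null']
      volumeMounts:
        - name: password-vol
          mountPath: /etc/my-password   # each key becomes a file in this directory
          readOnly: true
  volumes:
    - name: password-vol
      secret:
        secretName: my-password
        defaultMode: 0400               # tighten file permissions
EOF
```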

Delete

  • Delete the pod and the secret
[root@master aiden (|kube:default)]# kubectl delete pod --all && kubectl delete secret my-password
pod "secret-pod" deleted
secret "my-password" deleted