[Kubernetes Workloads] Liveness and Readiness Probe Hands-On Lab

hi · July 31, 2023

imkunyoung@master-1:~/yaml/yaml/pods/probe$ nano exec-liveness.yaml

exec-liveness.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: registry.k8s.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
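The container touches /tmp/healthy, removes it after 30 seconds, then sleeps. Once the file is gone, `cat /tmp/healthy` exits non-zero, and after three consecutive failures (the default failureThreshold, visible later in `kubectl describe` as `#failure=3`) the kubelet restarts the container. The timeline can be sketched as follows — a rough model that treats probe runs as instantaneous and assumes the default failureThreshold=3:

```python
# Sketch of the liveness timeline for this manifest. Probes run at
# t = initialDelaySeconds, then every periodSeconds; a probe fails once
# /tmp/healthy has been removed.
def first_restart_time(file_removed_at: int, initial_delay: int,
                       period: int, failure_threshold: int = 3) -> int:
    """Earliest probe tick (seconds after container start) at which the
    kubelet has observed `failure_threshold` consecutive failures."""
    failures = 0
    t = initial_delay
    while True:
        if t >= file_removed_at:   # /tmp/healthy is gone -> `cat` fails
            failures += 1
            if failures == failure_threshold:
                return t
        t += period

# /tmp/healthy is removed at t=30s; probes run at t=5,10,15,... seconds,
# so failures land at t=30,35,40 in this model.
print(first_restart_time(30, 5, 5))  # 40 (in practice +/- one period)
```

In the real cluster the exact tick timing shifts by up to one period, which is why the describe output shows the first restart roughly 40-50 seconds after start.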

imkunyoung@master-1:~/yaml/yaml/pods/probe$ kubectl create -f exec-liveness.yaml
pod/liveness-exec created




imkunyoung@master-1:~/yaml/yaml/pods/probe$ kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
liveness-exec         1/1     Running   0          14s
static-web-master-1   1/1     Running   0          3m9s




imkunyoung@master-1:~/yaml/yaml/pods/probe$ kubectl describe pod liveness-exec
Name:             liveness-exec
Namespace:        default
Priority:         0
Service Account:  default
Node:             worker-3/10.138.0.5
Start Time:       Tue, 01 Aug 2023 01:40:15 +0000
Labels:           test=liveness
Annotations:      <none>
Status:           Running
IP:               10.0.1.242
IPs:
  IP:  10.0.1.242
Containers:
  liveness:
    Container ID:  containerd://fb4f173d2f03f2cf5427ef0cbed2e8cd7d97ea528288ba0cd9f917bd7acc4dfc
    Image:         registry.k8s.io/busybox
    Image ID:      sha256:36a4dca0fe6fb2a5133dc11a6c8907a97aea122613fa3e98be033959a0821a1f
    Port:          <none>
    Host Port:     <none>
    Args:
      /bin/sh
      -c
      touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    State:          Running
      Started:      Tue, 01 Aug 2023 01:40:16 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       exec [cat /tmp/healthy] delay=5s timeout=1s period=5s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hnh4r (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-hnh4r:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  68s                default-scheduler  Successfully assigned default/liveness-exec to worker-3
  Normal   Pulling    67s                kubelet            Pulling image "registry.k8s.io/busybox"
  Normal   Pulled     67s                kubelet            Successfully pulled image "registry.k8s.io/busybox" in 159.294806ms (159.309066ms including waiting)
  Normal   Created    67s                kubelet            Created container liveness
  Normal   Started    67s                kubelet            Started container liveness
  Warning  Unhealthy  22s (x3 over 32s)  kubelet            Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
  Normal   Killing    22s                kubelet            Container liveness failed liveness probe, will be restarted

imkunyoung@master-1:~/yaml/yaml/pods/probe$ kubectl get pods
NAME                  READY   STATUS    RESTARTS      AGE
liveness-exec         1/1     Running   3 (13s ago)   3m59s
static-web-master-1   1/1     Running   0             6m54s




imkunyoung@master-1:~/yaml/yaml/pods/probe$ kubectl describe pod liveness-exec
Name:             liveness-exec
Namespace:        default
Priority:         0
Service Account:  default
Node:             worker-3/10.138.0.5
Start Time:       Tue, 01 Aug 2023 01:40:15 +0000
Labels:           test=liveness
Annotations:      <none>
Status:           Running
IP:               10.0.1.242
IPs:
  IP:  10.0.1.242
Containers:
  liveness:
    Container ID:  containerd://6d69c6c00665015c49ce46f981272e61909774e72a8705baba69ddf677f9e97e
    Image:         registry.k8s.io/busybox
    Image ID:      sha256:36a4dca0fe6fb2a5133dc11a6c8907a97aea122613fa3e98be033959a0821a1f
    Port:          <none>
    Host Port:     <none>
    Args:
      /bin/sh
      -c
      touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    State:          Running
      Started:      Tue, 01 Aug 2023 01:46:31 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Tue, 01 Aug 2023 01:45:16 +0000
      Finished:     Tue, 01 Aug 2023 01:46:31 +0000
    Ready:          True
    Restart Count:  5
    Liveness:       exec [cat /tmp/healthy] delay=5s timeout=1s period=5s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hnh4r (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-hnh4r:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  7m31s                  default-scheduler  Successfully assigned default/liveness-exec to worker-3
  Normal   Pulled     7m30s                  kubelet            Successfully pulled image "registry.k8s.io/busybox" in 159.294806ms (159.309066ms including waiting)
  Normal   Pulled     6m15s                  kubelet            Successfully pulled image "registry.k8s.io/busybox" in 151.603527ms (151.619746ms including waiting)
  Normal   Created    5m (x3 over 7m30s)     kubelet            Created container liveness
  Normal   Pulled     5m                     kubelet            Successfully pulled image "registry.k8s.io/busybox" in 162.705096ms (162.721626ms including waiting)
  Warning  Unhealthy  4m15s (x9 over 6m55s)  kubelet            Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
  Normal   Killing    4m15s (x3 over 6m45s)  kubelet            Container liveness failed liveness probe, will be restarted
  Normal   Pulling    3m45s (x4 over 7m30s)  kubelet            Pulling image "registry.k8s.io/busybox"
  Normal   Started    2m30s (x5 over 7m30s)  kubelet            Started container liveness




imkunyoung@master-1:~/yaml/yaml/pods/probe$ nano http-liveness.yaml

http-liveness.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: registry.k8s.io/liveness
    args:
    - /server
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
        httpHeaders:
        - name: Custom-Header
          value: Awesome
      initialDelaySeconds: 3
      periodSeconds: 3
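The registry.k8s.io/liveness demo image runs a small Go server whose /healthz handler returns 200 for roughly the first 10 seconds of the container's life and 500 afterwards — that is why the probe starts failing in the output that follows. A minimal Python sketch of the same behavior (an illustration of the idea, not the actual image's code):

```python
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

START = time.monotonic()
HEALTHY_WINDOW = 10  # seconds of 200 responses before flipping to 500

class Healthz(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/healthz":
            self.send_response(404)
            self.end_headers()
            return
        # Mimic the demo image: healthy for the first 10s, then 500 forever,
        # so the kubelet's httpGet probe eventually sees failures.
        status = 200 if time.monotonic() - START < HEALTHY_WINDOW else 500
        self.send_response(status)
        self.end_headers()
        self.wfile.write(b"ok" if status == 200 else b"error")

    def log_message(self, *args):  # keep the demo quiet
        pass

if __name__ == "__main__":
    HTTPServer(("", 8080), Healthz).serve_forever()
```

Any HTTP status of 200-399 counts as a probe success; anything else, including the 500 above, counts as a failure.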

imkunyoung@master-1:~/yaml/yaml/pods/probe$ kubectl get pods
NAME                  READY   STATUS             RESTARTS      AGE
liveness-exec         0/1     CrashLoopBackOff   7 (60s ago)   12m
liveness-http         1/1     Running            4 (34s ago)   107s
static-web-master-1   1/1     Running            0             15m




imkunyoung@master-1:~/yaml/yaml/pods/probe$ kubectl describe pod liveness-http
Name:             liveness-http
Namespace:        default
Priority:         0
Service Account:  default
Node:             worker-3/10.138.0.5
Start Time:       Tue, 01 Aug 2023 01:50:59 +0000
Labels:           test=liveness
Annotations:      <none>
Status:           Running
IP:               10.0.1.246
IPs:
  IP:  10.0.1.246
Containers:
  liveness:
    Container ID:  containerd://9816736d566ba73789828b858454e77a1b996af0d7af5cf2a581b22a205512ea
    Image:         registry.k8s.io/liveness
    Image ID:      sha256:554cfcf2aa85635c0b1ae9506f36f50118419766221651e70dfdc94631317b4d
    Port:          <none>
    Host Port:     <none>
    Args:
      /server
    State:          Running
      Started:      Tue, 01 Aug 2023 01:51:54 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Tue, 01 Aug 2023 01:51:36 +0000
      Finished:     Tue, 01 Aug 2023 01:51:54 +0000
    Ready:          True
    Restart Count:  3
    Liveness:       http-get http://:8080/healthz delay=3s timeout=1s period=3s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jjj8q (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-jjj8q:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  64s                default-scheduler  Successfully assigned default/liveness-http to worker-3
  Normal   Pulled     63s                kubelet            Successfully pulled image "registry.k8s.io/liveness" in 380.197977ms (380.221457ms including waiting)
  Normal   Pulled     45s                kubelet            Successfully pulled image "registry.k8s.io/liveness" in 148.690977ms (148.706358ms including waiting)
  Normal   Created    27s (x3 over 62s)  kubelet            Created container liveness
  Normal   Started    27s (x3 over 62s)  kubelet            Started container liveness
  Normal   Pulled     27s                kubelet            Successfully pulled image "registry.k8s.io/liveness" in 142.810947ms (142.822867ms including waiting)
  Normal   Pulling    9s (x4 over 63s)   kubelet            Pulling image "registry.k8s.io/liveness"
  Warning  Unhealthy  9s (x9 over 51s)   kubelet            Liveness probe failed: HTTP probe failed with statuscode: 500
  Normal   Killing    9s (x3 over 45s)   kubelet            Container liveness failed liveness probe, will be restarted




tcp-liveness-readiness.yaml

apiVersion: v1
kind: Pod
metadata:
  name: goproxy
  labels:
    app: goproxy
spec:
  containers:
  - name: goproxy
    image: registry.k8s.io/goproxy:0.1
    ports:
    - containerPort: 8080
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
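A tcpSocket probe simply attempts a TCP connection to the container's port: the probe passes if the connection can be established and fails otherwise, with no application data exchanged. A rough Python equivalent of that check (a sketch of the semantics, not the kubelet's code):

```python
import socket

def tcp_probe(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within
    `timeout`, mirroring the pass/fail semantics of a tcpSocket probe
    (timeoutSeconds=1 is the Kubernetes default)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe a local listener the way the kubelet would probe port 8080.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
open_port = listener.getsockname()[1]
print(tcp_probe("127.0.0.1", open_port))  # True: the port accepts connections
listener.close()
print(tcp_probe("127.0.0.1", open_port))  # False: nothing listening anymore
```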

kubectl apply -f https://k8s.io/examples/pods/probe/tcp-liveness-readiness.yaml

imkunyoung@master-1:~/yaml/yaml/pods/probe$ kubectl describe pod goproxy
Name:             goproxy
Namespace:        default
Priority:         0
Service Account:  default
Node:             worker-4/10.138.0.6
Start Time:       Tue, 01 Aug 2023 01:54:13 +0000
Labels:           app=goproxy
Annotations:      <none>
Status:           Running
IP:               10.0.2.3
IPs:
  IP:  10.0.2.3
Containers:
  goproxy:
    Container ID:   containerd://09b7965990314d41b1ba9e44df893d86d5c822b7d79aaf67f1db5134d66dfbe6
    Image:          registry.k8s.io/goproxy:0.1
    Image ID:       registry.k8s.io/goproxy@sha256:5334c7ad43048e3538775cb09aaf184f5e8acf4b0ea60e3bc8f1d93c209865a5
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 01 Aug 2023 01:54:14 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       tcp-socket :8080 delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:      tcp-socket :8080 delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kfjxl (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-kfjxl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  71s   default-scheduler  Successfully assigned default/goproxy to worker-4
  Normal  Pulled     70s   kubelet            Container image "registry.k8s.io/goproxy:0.1" already present on machine
  Normal  Created    70s   kubelet            Created container goproxy
  Normal  Started    69s   kubelet            Started container goproxy









Liveness, Readiness, and Startup Probes

Liveness Probe

  • Checks whether the container is still alive and restarts it when it is not
  • Detects containers stuck in a deadlock and restarts them automatically
  • Keeps availability high even when application bugs cause the container to hang

Readiness Probe

  • Checks whether a Pod is ready before it starts receiving normal traffic
  • Pods that are not ready are excluded from Service load balancing

Startup Probe

  • Confirms when the application has finished starting, improving availability for slow-starting containers
  • Suspends liveness and readiness checks until it succeeds, so the container is not killed during a long startup
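The lab above does not exercise a startup probe, so for completeness, one can be declared alongside a liveness probe in the same way. A hypothetical sketch reusing the demo image from this lab (the pod name and thresholds are illustrative, not taken from the lab):

```yaml
# Hypothetical example: protect a slow-starting app. Liveness checks are
# suspended until the startup probe succeeds once; if it instead fails
# failureThreshold * periodSeconds = 300 seconds in a row, the container
# is killed and restarted.
apiVersion: v1
kind: Pod
metadata:
  name: startup-demo              # placeholder name
spec:
  containers:
  - name: app
    image: registry.k8s.io/liveness   # demo image from this lab
    args:
    - /server
    startupProbe:
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 3
```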

