[K8S] Managing Nodes

HYEOB KIM · July 17, 2022

Disabling and Enabling Scheduling on a Node

  • Pods, which contain containers, run on Nodes.

  • Nodes are managed by the Master Node.

  • How to disable (cordon) and enable (uncordon) scheduling on a specific Node:

% kubectl cordon <node-name>
% kubectl uncordon <node-name>
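
Under the hood, cordon simply toggles the Node's spec.unschedulable field. The patch below is a rough equivalent (a sketch for understanding only; the dedicated commands remain the idiomatic way):

# Roughly what cordon/uncordon do: set or clear spec.unschedulable on the Node.
% kubectl patch node <node-name> -p '{"spec":{"unschedulable":true}}'
% kubectl patch node <node-name> -p '{"spec":{"unschedulable":false}}'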

Hands-on

Cordoning a Node prevents new Pods from being scheduled onto it, without affecting the Pods already running on it.

# List the nodes; all are currently in Ready state.
% kubectl get node
NAME          STATUS   ROLES                  AGE    VERSION
k8s-master    Ready    control-plane,master   173d   v1.22.4
k8s-worker1   Ready    <none>                 173d   v1.22.4
k8s-worker2   Ready    <none>                 173d   v1.22.4

# Disable scheduling on k8s-worker2.
% kubectl cordon k8s-worker2
node/k8s-worker2 cordoned

# Checking the nodes again, SchedulingDisabled has been added to k8s-worker2's status.
% kubectl get node
NAME          STATUS                     ROLES                  AGE    VERSION
k8s-master    Ready                      control-plane,master   173d   v1.22.4
k8s-worker1   Ready                      <none>                 173d   v1.22.4
k8s-worker2   Ready,SchedulingDisabled   <none>                 173d   v1.22.4

With scheduling on k8s-worker2 disabled, let's create a Deployment with replicas=4. We first write the manifest (the sample below starts with replicas: 3) and then edit it to set replicas to 4. Because k8s-worker2 is cordoned, no Pods are created there; all four Pods land on k8s-worker1.

% cat > nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

% vim nginx-deployment.yaml

# Edit the file so it looks like this (replicas set to 4)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2

% kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created

% kubectl get pod -o wide
NAME                               READY   STATUS    RESTARTS       AGE    IP            NODE          NOMINATED NODE   READINESS GATES
eshop-cart-app                     1/1     Running   4 (112d ago)   115d   10.244.1.19   k8s-worker1   <none>           <none>
front-end-8dc556958-fvlpx          1/1     Running   3 (112d ago)   136d   10.244.2.43   k8s-worker2   <none>           <none>
front-end-8dc556958-vcr4s          1/1     Running   4 (112d ago)   136d   10.244.2.44   k8s-worker2   <none>           <none>
nginx-79488c9578-qwnfk             1/1     Running   2 (112d ago)   113d   10.244.1.18   k8s-worker1   <none>           <none>
nginx-79488c9578-xpsvp             1/1     Running   2 (112d ago)   113d   10.244.2.45   k8s-worker2   <none>           <none>
nginx-deployment-877f48f6d-6ff44   1/1     Running   0              5s     10.244.1.25   k8s-worker1   <none>           <none>
nginx-deployment-877f48f6d-fr64s   1/1     Running   0              5s     10.244.1.24   k8s-worker1   <none>           <none>
nginx-deployment-877f48f6d-s2fbg   1/1     Running   0              5s     10.244.1.22   k8s-worker1   <none>           <none>
nginx-deployment-877f48f6d-tq8x6   1/1     Running   0              5s     10.244.1.23   k8s-worker1   <none>           <none>

Now let's re-enable scheduling on k8s-worker2.

% kubectl uncordon k8s-worker2
node/k8s-worker2 uncordoned

% kubectl get node
NAME          STATUS   ROLES                  AGE    VERSION
k8s-master    Ready    control-plane,master   173d   v1.22.4
k8s-worker1   Ready    <none>                 173d   v1.22.4
k8s-worker2   Ready    <none>                 173d   v1.22.4

However, the Pods that are already running are not rebalanced or migrated to the newly available node; uncordon only makes the node a candidate for future scheduling decisions.

% kubectl get pod -o wide
NAME                               READY   STATUS    RESTARTS       AGE    IP            NODE          NOMINATED NODE   READINESS GATES
eshop-cart-app                     1/1     Running   4 (112d ago)   115d   10.244.1.19   k8s-worker1   <none>           <none>
front-end-8dc556958-fvlpx          1/1     Running   3 (112d ago)   136d   10.244.2.43   k8s-worker2   <none>           <none>
front-end-8dc556958-vcr4s          1/1     Running   4 (112d ago)   136d   10.244.2.44   k8s-worker2   <none>           <none>
nginx-79488c9578-qwnfk             1/1     Running   2 (112d ago)   113d   10.244.1.18   k8s-worker1   <none>           <none>
nginx-79488c9578-xpsvp             1/1     Running   2 (112d ago)   113d   10.244.2.45   k8s-worker2   <none>           <none>
nginx-deployment-877f48f6d-6ff44   1/1     Running   0              73s    10.244.1.25   k8s-worker1   <none>           <none>
nginx-deployment-877f48f6d-fr64s   1/1     Running   0              73s    10.244.1.24   k8s-worker1   <none>           <none>
nginx-deployment-877f48f6d-s2fbg   1/1     Running   0              73s    10.244.1.22   k8s-worker1   <none>           <none>
nginx-deployment-877f48f6d-tq8x6   1/1     Running   0              73s    10.244.1.23   k8s-worker1   <none>           <none>

If we delete two Pods, the Deployment immediately recreates two replacements, and the scheduler now places them on k8s-worker2.

% kubectl delete pod nginx-deployment-877f48f6d-s2fbg nginx-deployment-877f48f6d-tq8x6
pod "nginx-deployment-877f48f6d-s2fbg" deleted
pod "nginx-deployment-877f48f6d-tq8x6" deleted 

% kubectl get pod -o wide
NAME                               READY   STATUS    RESTARTS       AGE    IP            NODE          NOMINATED NODE   READINESS GATES
eshop-cart-app                     1/1     Running   4 (112d ago)   115d   10.244.1.19   k8s-worker1   <none>           <none>
front-end-8dc556958-fvlpx          1/1     Running   3 (112d ago)   136d   10.244.2.43   k8s-worker2   <none>           <none>
front-end-8dc556958-vcr4s          1/1     Running   4 (112d ago)   136d   10.244.2.44   k8s-worker2   <none>           <none>
nginx-79488c9578-qwnfk             1/1     Running   2 (112d ago)   113d   10.244.1.18   k8s-worker1   <none>           <none>
nginx-79488c9578-xpsvp             1/1     Running   2 (112d ago)   113d   10.244.2.45   k8s-worker2   <none>           <none>
nginx-deployment-877f48f6d-4hxxn   1/1     Running   0              4s     10.244.2.52   k8s-worker2   <none>           <none>
nginx-deployment-877f48f6d-6ff44   1/1     Running   0              101s   10.244.1.25   k8s-worker1   <none>           <none>
nginx-deployment-877f48f6d-fr64s   1/1     Running   0              101s   10.244.1.24   k8s-worker1   <none>           <none>
nginx-deployment-877f48f6d-rzwv4   1/1     Running   0              4s     10.244.2.53   k8s-worker2   <none>           <none>
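
Deleting Pods by hand works, but if you want to recreate all of a Deployment's Pods at once, a rolling restart has the same effect (a sketch; where the replacements land is still entirely up to the scheduler, so the spread may not be even):

# Recreate every Pod of the Deployment via a rolling restart.
% kubectl rollout restart deployment/nginx-deployment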

Draining and Deleting Pods

  • How to evict (drain) and delete the Pods running on a specific Node:
% kubectl drain <node-name> --ignore-daemonsets --force
# --ignore-daemonsets : leave DaemonSet-managed Pods in place.
# --force             : also delete Pods that are not managed by an RC, RS, Job, DaemonSet, or StatefulSet (defaults to false).

CNI and kube-proxy Pods run as DaemonSets; a DaemonSet guarantees one copy of an application on each worker node. With the --ignore-daemonsets option, Pods that run as part of a DaemonSet are not evicted.
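
You can check which system Pods are DaemonSet-managed, for example:

# DaemonSet-managed system Pods, e.g. kube-proxy and the CNI plugin
% kubectl get daemonset -n kube-system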

If there are Pods not under any controller's management and you want to evict them as well, use the --force option; without it, drain refuses to delete them, since nothing would recreate them.
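
To preview which Pods a forced drain would delete permanently, you can list each Pod's owner; a sketch (Pods whose OWNER column shows <none> have no controller):

# Pods with OWNER <none> are unmanaged and would be lost by `drain --force`.
% kubectl get pod -o custom-columns='NAME:.metadata.name,OWNER:.metadata.ownerReferences[0].kind'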

Hands-on

Continuing from the previous exercise, let's drain k8s-worker2.

% kubectl get pod -o wide
NAME                               READY   STATUS    RESTARTS       AGE    IP            NODE          NOMINATED NODE   READINESS GATES
eshop-cart-app                     1/1     Running   4 (112d ago)   115d   10.244.1.19   k8s-worker1   <none>           <none>
front-end-8dc556958-fvlpx          1/1     Running   3 (112d ago)   136d   10.244.2.43   k8s-worker2   <none>           <none>
front-end-8dc556958-vcr4s          1/1     Running   4 (112d ago)   136d   10.244.2.44   k8s-worker2   <none>           <none>
nginx-79488c9578-qwnfk             1/1     Running   2 (112d ago)   113d   10.244.1.18   k8s-worker1   <none>           <none>
nginx-79488c9578-xpsvp             1/1     Running   2 (112d ago)   113d   10.244.2.45   k8s-worker2   <none>           <none>
nginx-deployment-877f48f6d-4hxxn   1/1     Running   0              4s     10.244.2.52   k8s-worker2   <none>           <none>
nginx-deployment-877f48f6d-6ff44   1/1     Running   0              101s   10.244.1.25   k8s-worker1   <none>           <none>
nginx-deployment-877f48f6d-fr64s   1/1     Running   0              101s   10.244.1.24   k8s-worker1   <none>           <none>
nginx-deployment-877f48f6d-rzwv4   1/1     Running   0              4s     10.244.2.53   k8s-worker2   <none>           <none>

# Running the command without --ignore-daemonsets produces the following error message.
% kubectl drain k8s-worker2
node/k8s-worker2 cordoned
error: unable to drain node "k8s-worker2" due to error:cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/kube-flannel-ds-dvg5d, kube-system/kube-proxy-vd6zl, continuing command...
There are pending nodes to be drained:
 k8s-worker2
cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/kube-flannel-ds-dvg5d, kube-system/kube-proxy-vd6zl

% kubectl drain k8s-worker2 --ignore-daemonsets --force
node/k8s-worker2 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-dvg5d, kube-system/kube-proxy-vd6zl
evicting pod ingress-nginx/ingress-nginx-controller-778574f59b-2q48q
evicting pod default/front-end-8dc556958-fvlpx
evicting pod default/front-end-8dc556958-vcr4s
evicting pod default/nginx-79488c9578-xpsvp
evicting pod default/nginx-deployment-877f48f6d-4hxxn
evicting pod default/nginx-deployment-877f48f6d-rzwv4
evicting pod devops/eshop-order-5f95d86b84-l9mm6
evicting pod devops/eshop-order-5f95d86b84-rpbz4
evicting pod ingress-nginx/appjs-rc-qh7s5
evicting pod ingress-nginx/appjs-rc-xfq72
evicting pod ingress-nginx/appjs-rc-zl77b
evicting pod ingress-nginx/ingress-nginx-admission-create--1-n8btr
pod/ingress-nginx-admission-create--1-n8btr evicted
I0717 13:20:44.157913    1529 request.go:665] Waited for 1.178046036s due to client-side throttling, not priority and fairness, request: GET:https://k8s-master:6443/api/v1/namespaces/ingress-nginx/pods/appjs-rc-xfq72
pod/nginx-deployment-877f48f6d-4hxxn evicted
pod/nginx-deployment-877f48f6d-rzwv4 evicted
pod/front-end-8dc556958-fvlpx evicted
pod/eshop-order-5f95d86b84-l9mm6 evicted
pod/front-end-8dc556958-vcr4s evicted
pod/eshop-order-5f95d86b84-rpbz4 evicted
pod/nginx-79488c9578-xpsvp evicted
pod/ingress-nginx-controller-778574f59b-2q48q evicted
pod/appjs-rc-xfq72 evicted
pod/appjs-rc-zl77b evicted
pod/appjs-rc-qh7s5 evicted
node/k8s-worker2 drained

Listing the Nodes shows that k8s-worker2 has SchedulingDisabled added to its status, just as with cordon. Listing the Pods shows that all the evicted Pods have been rescheduled and recreated on k8s-worker1.

% kubectl get node
NAME          STATUS                     ROLES                  AGE    VERSION
k8s-master    Ready                      control-plane,master   173d   v1.22.4
k8s-worker1   Ready                      <none>                 173d   v1.22.4
k8s-worker2   Ready,SchedulingDisabled   <none>                 173d   v1.22.4

% kubectl get pod -o wide
NAME                               READY   STATUS    RESTARTS       AGE     IP            NODE          NOMINATED NODE   READINESS GATES
eshop-cart-app                     1/1     Running   4 (112d ago)   115d    10.244.1.19   k8s-worker1   <none>           <none>
front-end-8dc556958-6z2wx          1/1     Running   0              56s     10.244.1.32   k8s-worker1   <none>           <none>
front-end-8dc556958-mkxmz          1/1     Running   0              56s     10.244.1.30   k8s-worker1   <none>           <none>
nginx-79488c9578-dc9n8             1/1     Running   0              56s     10.244.1.36   k8s-worker1   <none>           <none>
nginx-79488c9578-qwnfk             1/1     Running   2 (112d ago)   113d    10.244.1.18   k8s-worker1   <none>           <none>
nginx-deployment-877f48f6d-6ff44   1/1     Running   0              4m21s   10.244.1.25   k8s-worker1   <none>           <none>
nginx-deployment-877f48f6d-ds25j   1/1     Running   0              56s     10.244.1.27   k8s-worker1   <none>           <none>
nginx-deployment-877f48f6d-fr64s   1/1     Running   0              4m21s   10.244.1.24   k8s-worker1   <none>           <none>
nginx-deployment-877f48f6d-pd5xf   1/1     Running   0              56s     10.244.1.34   k8s-worker1   <none>           <none>
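
To confirm what is still left on the drained node, you can filter Pods by node name; after a drain, only DaemonSet-managed Pods should remain:

# List Pods in all namespaces that are scheduled on k8s-worker2.
% kubectl get pod -A -o wide --field-selector spec.nodeName=k8s-worker2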

Finally, re-enable scheduling on k8s-worker2.

% kubectl uncordon k8s-worker2
node/k8s-worker2 uncordoned

CKA Question Type

  • Working cluster: k8s

Mark k8s-worker2 as unschedulable, and reschedule all Pods running on that node onto the other Node (k8s-worker1).

  1. Drain the k8s-worker2 node
% kubectl drain k8s-worker2 --ignore-daemonsets --force
  2. Verify the Pods
% kubectl get pod -o wide

Node Taint & Toleration

  • If a taint is set on a worker Node, only Pods that carry a matching toleration are scheduled onto it.
  • A Pod with a toleration can be placed on any Node, including Nodes with the matching taint; the toleration permits placement there but does not force it.
  • For example, if you look up the Master Node's taint and give a Deployment a toleration identical to that taint, Pods can be placed on the Master Node as well.
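
Taints themselves are set and removed with kubectl taint; for example (key1=value1:NoSchedule is an arbitrary example taint):

# Add an example taint to a node ...
% kubectl taint nodes k8s-worker1 key1=value1:NoSchedule
# ... and remove it again with a trailing "-".
% kubectl taint nodes k8s-worker1 key1=value1:NoSchedule-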

In YAML, a toleration in a Pod spec looks like this:

tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"

How to check a Node's taints:

% kubectl describe node <node-name> | grep -i taint
Taints:             key=value:effect
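
Alternatively, the taints can be read directly from the Node object (a jsonpath sketch; it prints nothing if the Node has no taints):

# Print the Node's taints array as stored in .spec.taints.
% kubectl get node <node-name> -o jsonpath='{.spec.taints}'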

Hands-on

See the official Kubernetes documentation on taints and tolerations.

If you look up the Master Node's taint and then give a Deployment a toleration identical to that taint, Pods will be placed on the Master Node as well.

% kubectl config use-context hk8s

% kubectl describe node hk8s-m | grep -i taint
Taints:             node-role.kubernetes.io/master:NoSchedule

% kubectl create deployment testdep --image=nginx --replicas=5 --dry-run=client -o yaml > testdep.yaml

% vim testdep.yaml

# Edit the file so it looks like this
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testdep
spec:
  replicas: 5
  selector:
    matchLabels:
      app: testdep
  template:
    metadata:
      labels:
        app: testdep
    spec:
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Equal"
        effect: "NoSchedule"
      containers:
      - image: nginx
        name: nginx

% kubectl apply -f testdep.yaml
deployment.apps/testdep created

% kubectl get pod -o wide | grep testdep
testdep-996c54476-r46mp   1/1     Running            0                52s    192.168.47.226   hk8s-m    <none>           <none>
testdep-996c54476-smhhr   1/1     Running            0                52s    192.168.47.227   hk8s-m    <none>           <none>
testdep-996c54476-tntvv   1/1     Running            0                52s    192.168.75.100   hk8s-w1   <none>           <none>
testdep-996c54476-vls75   1/1     Running            0                52s    192.168.75.99    hk8s-w1   <none>           <none>
testdep-996c54476-xlmc5   1/1     Running            0                52s    192.168.75.98    hk8s-w1   <none>           <none>

CKA Question Type

  • Working cluster: hk8s

Find the nodes in Ready state (excluding nodes tainted with NoSchedule) and write their count to /var/CKA2022/notaint_ready_node.

% kubectl get node
NAME      STATUS     ROLES                  AGE    VERSION
hk8s-m    Ready      control-plane,master   173d   v1.22.4
hk8s-w1   Ready      <none>                 173d   v1.22.4
hk8s-w2   Ready      <none>                 164d   v1.22.4

Three nodes are Ready: hk8s-m, hk8s-w1, and hk8s-w2. Among these, let's find the ones with a taint whose effect is NoSchedule.

% kubectl describe node hk8s-m | grep -i -e taint -e noschedule
Taints:             node-role.kubernetes.io/master:NoSchedule

% kubectl describe node hk8s-w1 | grep -i -e taint -e noschedule
Taints:             <none>

% kubectl describe node hk8s-w2 | grep -i -e taint -e noschedule
Taints:             node.kubernetes.io/unreachable:NoExecute
                    node.kubernetes.io/unreachable:NoSchedule

Two nodes, hk8s-m and hk8s-w2, have a taint with effect NoSchedule. That leaves only hk8s-w1, so the answer is 1.

% echo "1" > /var/CKA2022/notaint_ready_node
% cat /var/CKA2022/notaint_ready_node
1
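
This kind of check can also be scripted; below is a jsonpath sketch that prints each node's name followed by the effects of its taints, so a Ready node with an empty second column is one to count:

# One line per node: name, then the effects of its taints (if any).
% kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].effect}{"\n"}{end}'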