kubernetes CKA study (36) - CKA Exam Registration and killer.sh Walkthrough

이동명 · January 12, 2024

Overview

I registered for the CKA exam, which comes with killer.sh (two CKA exam simulator sessions provided by the Linux Foundation).

In this post I'll walk through the first killer.sh session.

killer.sh is somewhat harder than the real CKA exam, and since it lets you try out the actual exam environment in advance, it's well worth working through.


Question 1 | Contexts

All context names

You have access to multiple clusters from your main terminal through kubectl contexts. Write all those context names into /opt/course/1/contexts.

-> k config get-contexts -o name > /opt/course/1/contexts

Current context

Next write a command to display the current context into /opt/course/1/context_default_kubectl.sh, the command should use kubectl.

-> echo 'kubectl config current-context' > /opt/course/1/context_default_kubectl.sh

Current context without using kubectl (honestly, not sure you really need to know this one)

Finally write a second command doing the same thing into /opt/course/1/context_default_no_kubectl.sh, but without the use of kubectl.

-> echo 'cat ~/.kube/config | grep current | sed -e "s/current-context: //"' > /opt/course/1/context_default_no_kubectl.sh


Question 2 | Schedule Pod on Controlplane Nodes

Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be named pod1 and the container should be named pod1-container. This Pod should only be scheduled on controlplane nodes. Do not add new labels to any nodes.

(Create a Pod and place it on the controlplane using taints & tolerations)

Check the taint on the controlplane node

k describe node cluster1-controlplane1 | grep Taint

Then add a matching toleration, plus a nodeSelector so the Pod can only land on controlplane nodes.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: httpd:2.4.41-alpine
    name:  pod1-container
    resources: {}
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
  nodeSelector:                                 # restrict scheduling to controlplane nodes
    node-role.kubernetes.io/control-plane: ""
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

One thing to watch out for: the toleration is required; with only a nodeSelector the Pod will never be scheduled onto the controlplane, because the NoSchedule taint rejects it.
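
To confirm the Pod landed on the controlplane:

k get pod pod1 -o wide   # the NODE column should show cluster1-controlplane1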


Question 3 | Scale down StatefulSet

There are two Pods named o3db-* in Namespace project-c13. C13 management asked you to scale the Pods down to one replica to save resources.

k -n project-c13 scale sts o3db --replicas 1
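
The o3db-* Pods are managed by a StatefulSet, which a quick check before and after confirms:

k -n project-c13 get sts                # shows the o3db StatefulSet
k -n project-c13 get pod | grep o3db    # only one Pod should remain after scaling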


Question 4 | Pod Ready if Service is reachable

Do the following in Namespace default. Create a single Pod named ready-if-service-ready of image nginx:1.16.1-alpine. Configure a LivenessProbe which simply executes command true. Also configure a ReadinessProbe which does check if the url http://service-am-i-ready:80 is reachable, you can use wget -T2 -O- http://service-am-i-ready:80 for this. Start the Pod and confirm it isn't ready because of the ReadinessProbe.

(Create the Pod as described and define the livenessProbe and readinessProbe)

->

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ready-if-service-ready
  name: ready-if-service-ready
spec:
  containers:
  - image: nginx:1.16.1-alpine
    name: ready-if-service-ready
    resources: {}
    livenessProbe:                                      
      exec:
        command:
        - 'true'
    readinessProbe:
      exec:
        command:
        - sh
        - -c
        - 'wget -T2 -O- http://service-am-i-ready:80'   
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Create a second Pod named am-i-ready of image nginx:1.16.1-alpine with label id: cross-server-ready. The already existing Service service-am-i-ready should now have that second Pod as endpoint.

(Create a Pod with the label that the service-am-i-ready selector is looking for)

-> k run am-i-ready --image=nginx:1.16.1-alpine --labels="id=cross-server-ready"

Now the first Pod should be in ready state, confirm that.
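
Once the second Pod becomes an endpoint of the Service, the readinessProbe of the first Pod starts succeeding:

k get pod ready-if-service-ready   # READY should now show 1/1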


Question 5 | Kubectl sorting

There are various Pods in all namespaces. Write a command into /opt/course/5/find_pods.sh which lists all Pods sorted by their AGE (metadata.creationTimestamp).

(List all Pods sorted by metadata.creationTimestamp)

echo 'kubectl get pods -A --sort-by=metadata.creationTimestamp' > /opt/course/5/find_pods.sh

Write a second command into /opt/course/5/find_pods_uid.sh which lists all Pods sorted by field metadata.uid. Use kubectl sorting for both commands.

(Same again, just with a different sort field)

echo 'kubectl get pod -A --sort-by=.metadata.uid' > /opt/course/5/find_pods_uid.sh
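
Both scripts can simply be executed to check that they work:

sh /opt/course/5/find_pods.sh
sh /opt/course/5/find_pods_uid.sh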


Question 6 | Storage, PV, PVC, Pod volume

Create a new PersistentVolume named safari-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.

(Create a PV without a storageClassName)

kind: PersistentVolume
apiVersion: v1
metadata:
  name: safari-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data"

Next create a new PersistentVolumeClaim in Namespace project-tiger named safari-pvc. It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should be bound to the PV correctly.

(Create a PVC that matches the PV)

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: safari-pvc
  namespace: project-tiger
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

Finally create a new Deployment safari in Namespace project-tiger which mounts that volume at /tmp/safari-data. The Pods of that Deployment should be of image httpd:2.4.41-alpine.

(Create a Deployment and mount the PVC)


apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: safari
  name: safari
  namespace: project-tiger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: safari
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: safari
    spec:
      volumes:                                      # add
      - name: data                                  # add
        persistentVolumeClaim:                      # add
          claimName: safari-pvc                     # add
      containers:
      - image: httpd:2.4.41-alpine
        name: container
        volumeMounts:                               # add
        - name: data                                # add
          mountPath: /tmp/safari-data               # add
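
Check that everything bound and is running:

k get pv safari-pv                        # STATUS should be Bound
k -n project-tiger get pvc safari-pvc     # STATUS should be Bound
k -n project-tiger get pod -l app=safari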

Question 7 | Node and Pod Resource Usage

The metrics-server has been installed in the cluster. Your colleague would like to know the kubectl commands to:

  1. show Nodes resource usage -> kubectl top node
  2. show Pods and their containers resource usage -> kubectl top pod --containers=true

Please write the commands into /opt/course/7/node.sh and /opt/course/7/pod.sh.

(Check resource usage)
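
Writing the commands into the requested files:

echo 'kubectl top node' > /opt/course/7/node.sh
echo 'kubectl top pod --containers=true' > /opt/course/7/pod.sh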


Question 8 | Get Controlplane Information

Ssh into the controlplane node with ssh cluster1-controlplane1. Check how the controlplane components kubelet, kube-apiserver, kube-scheduler, kube-controller-manager and etcd are started/installed on the controlplane node. Also find out the name of the DNS application and how it's started/installed on the controlplane node.

(Check the controlplane components and the DNS setup, then write them down in the format given)

kubelet is a process -> find /etc/systemd/system/ | grep kube

DNS is coredns -> kubectl -n kube-system get deploy

Everything else is a static-pod
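
The static Pod manifests can be confirmed directly on the controlplane node:

ssh cluster1-controlplane1
find /etc/kubernetes/manifests/        # kube-apiserver, kube-scheduler, kube-controller-manager, etcd
find /etc/systemd/system/ | grep kube  # only kubelet runs as a systemd service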

Write your findings into file /opt/course/8/controlplane-components.txt. The file should be structured like:

/opt/course/8/controlplane-components.txt
kubelet: [TYPE]
kube-apiserver: [TYPE]
kube-scheduler: [TYPE]
kube-controller-manager: [TYPE]
etcd: [TYPE]
dns: [TYPE][NAME]

Choices of [TYPE] are: not-installed, process, static-pod, pod

kubelet: process
kube-apiserver: static-pod
kube-scheduler: static-pod
kube-controller-manager: static-pod
etcd: static-pod
dns: pod coredns


Question 9 | Kill Scheduler, Manual Scheduling

Ssh into the controlplane node with ssh cluster2-controlplane1. Temporarily stop the kube-scheduler, this means in a way that you can start it again afterwards.

Create a single Pod named manual-schedule of image httpd:2.4-alpine, confirm it's created but not scheduled on any node.

Now you're the scheduler and have all its power, manually schedule that Pod on node cluster2-controlplane1. Make sure it's running.

Start the kube-scheduler again and confirm it's running correctly by creating a second Pod named manual-schedule2 of image httpd:2.4-alpine and check if it's running on cluster2-node1.

I should solve this one, but ssh asks for a password... I'll come back to it once that's sorted out. For reference, the usual approach is sketched below.
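
A sketch of the usual approach (untested here because of the ssh issue; the filename 9.yaml is just a scratch file):

ssh cluster2-controlplane1
cd /etc/kubernetes/manifests/
mv kube-scheduler.yaml ..                          # kubelet stops the scheduler static Pod

k run manual-schedule --image=httpd:2.4-alpine     # stays Pending because no scheduler is running

# dump the Pod, add nodeName: cluster2-controlplane1 under spec, then force-replace it
k get pod manual-schedule -o yaml > 9.yaml
vim 9.yaml
k replace -f 9.yaml --force

mv ../kube-scheduler.yaml .                        # bring the scheduler back
k run manual-schedule2 --image=httpd:2.4-alpine    # should get scheduled on cluster2-node1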


Question 10 | RBAC ServiceAccount Role RoleBinding

Create a new ServiceAccount processor in Namespace project-hamster. Create a Role and RoleBinding, both named processor as well. These should allow the new SA to only create Secrets and ConfigMaps in that Namespace.

(Create the SA, then a Role, then a RoleBinding)

Create the ServiceAccount

k -n project-hamster create sa processor

Create the Role

kubectl -n project-hamster create role processor --verb=create --resource=secret --resource=configmap

Create the RoleBinding

kubectl -n project-hamster create rolebinding processor --role processor --serviceaccount project-hamster:processor
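
Verify the permissions with kubectl auth can-i:

k -n project-hamster auth can-i create secret --as system:serviceaccount:project-hamster:processor     # yes
k -n project-hamster auth can-i create configmap --as system:serviceaccount:project-hamster:processor  # yes
k -n project-hamster auth can-i delete secret --as system:serviceaccount:project-hamster:processor     # no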


Question 11 | DaemonSet on all Nodes

Use Namespace project-tiger for the following. Create a DaemonSet named ds-important with image httpd:2.4-alpine and labels id=ds-important and uuid=18426a0b-5f59-4e10-923f-c0e078e82462. The Pods it creates should request 10 millicore cpu and 10 mebibyte memory. The Pods of that DaemonSet should run on all nodes, also controlplanes.

(Create a DaemonSet and make it run on every node, controlplanes included)

Create the DaemonSet and just add the toleration for the controlplane taint; a full manifest sketch follows after the snippet.

 tolerations:                                  # add
 - effect: NoSchedule                          # add
   key: node-role.kubernetes.io/control-plane  # add
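
Since there is no kubectl create daemonset, a common trick is to generate a Deployment template and change the kind. A minimal manifest along these lines (my own sketch of the required labels, resource requests and toleration):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-important
  namespace: project-tiger
  labels:
    id: ds-important
    uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
spec:
  selector:
    matchLabels:
      id: ds-important
      uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
  template:
    metadata:
      labels:
        id: ds-important
        uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
    spec:
      containers:
      - image: httpd:2.4-alpine
        name: ds-important
        resources:
          requests:
            cpu: 10m
            memory: 10Mi
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane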

Question 12 | Deployment on all Nodes

Use Namespace project-tiger for the following. Create a Deployment named deploy-important with label id=very-important (the Pods should also have this label) and 3 replicas. It should contain two containers, the first named container1 with image nginx:1.17.6-alpine and the second one named container2 with image google/pause.

(Build the two-container Deployment as described)

There should be only ever one Pod of that Deployment running on one worker node. We have two worker nodes: cluster1-node1 and cluster1-node2. Because the Deployment has three replicas the result should be that on both nodes one Pod is running. The third Pod won't be scheduled, unless a new worker node will be added.

In a way we kind of simulate the behaviour of a DaemonSet here, but using a Deployment and a fixed number of replicas.

(It's a Deployment, but like a DaemonSet it should end up with one Pod per worker node; with only two worker nodes, the third replica simply stays Pending.)

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    id: very-important                  # change
  name: deploy-important
  namespace: project-tiger              # important
spec:
  replicas: 3                           # change
  selector:
    matchLabels:
      id: very-important                # change
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        id: very-important              # change
    spec:
      containers:
      - image: nginx:1.17.6-alpine
        name: container1                # change
        resources: {}
      - image: google/pause             # add
        name: container2                # add
      affinity:                                             # add
        podAntiAffinity:                                    # add
          requiredDuringSchedulingIgnoredDuringExecution:   # add
          - labelSelector:                                  # add
              matchExpressions:                             # add
              - key: id                                     # add
                operator: In                                # add
                values:                                     # add
                - very-important                            # add
            topologyKey: kubernetes.io/hostname             # add
status: {}
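
Checking the result: two Pods should be Running, one per worker node, and the third Pending:

k -n project-tiger get pod -l id=very-important -o wide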

Question 13 | Multi Containers and Pod shared Volume

Create a Pod named multi-container-playground in Namespace default with three containers, named c1, c2 and c3. There should be a volume attached to that Pod and mounted into every container, but the volume shouldn't be persisted or shared with other Pods.

Container c1 should be of image nginx:1.17.6-alpine and have the name of the node where its Pod is running available as environment variable MY_NODE_NAME.

(Expose the node the Pod is running on as an environment variable)

Container c2 should be of image busybox:1.31.1 and write the output of the date command every second in the shared volume into file date.log. You can use while true; do date >> /your/vol/path/date.log; sleep 1; done for this.

Container c3 should be of image busybox:1.31.1 and constantly send the content of file date.log from the shared volume to stdout. You can use tail -f /your/vol/path/date.log for this.

(The rest is the usual sidecar pattern)

Check the logs of container c3 to confirm correct setup.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: multi-container-playground
  name: multi-container-playground
spec:
  containers:
  - image: nginx:1.17.6-alpine
    name: c1                                                                      # change
    resources: {}
    env:                                                                          # add
    - name: MY_NODE_NAME                                                          # add
      valueFrom:                                                                  # add
        fieldRef:                                                                 # add
          fieldPath: spec.nodeName                                                # add
    volumeMounts:                                                                 # add
    - name: vol                                                                   # add
      mountPath: /vol                                                             # add
  - image: busybox:1.31.1                                                         # add
    name: c2                                                                      # add
    command: ["sh", "-c", "while true; do date >> /vol/date.log; sleep 1; done"]  # add
    volumeMounts:                                                                 # add
    - name: vol                                                                   # add
      mountPath: /vol                                                             # add
  - image: busybox:1.31.1                                                         # add
    name: c3                                                                      # add
    command: ["sh", "-c", "tail -f /vol/date.log"]                                # add
    volumeMounts:                                                                 # add
    - name: vol                                                                   # add
      mountPath: /vol                                                             # add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:                                                                        # add
    - name: vol                                                                   # add
      emptyDir: {}                                                                # add
status: {}
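
Verify the environment variable and the c3 logs:

k exec multi-container-playground -c c1 -- env | grep MY_NODE_NAME
k logs multi-container-playground -c c3   # should stream the date lines written by c2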

Question 14 | Find out Cluster Information

You're asked to find out the following information about the cluster k8s-c1-H:

  1. How many controlplane nodes are available?

    k get node

  2. How many worker nodes are available?

    k get node

  3. What is the Service CIDR?

    ssh cluster1-controlplane1
    cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep range

  4. Which Networking (or CNI Plugin) is configured and where is its config file?

    find /etc/cni/net.d/

  5. Which suffix will static pods have that run on cluster1-node1?

    -cluster1-node1


Question 15 | Cluster Event Logging

Write a command into /opt/course/15/cluster_events.sh which shows the latest events in the whole cluster, ordered by time (metadata.creationTimestamp). Use kubectl for it.

(Show the latest events across the whole cluster, sorted by time)

echo 'kubectl get events -A --sort-by=.metadata.creationTimestamp' > /opt/course/15/cluster_events.sh

Now delete the kube-proxy Pod running on node cluster2-node1 and write the events this caused into /opt/course/15/pod_kill.log.

Finally kill the containerd container of the kube-proxy Pod on node cluster2-node1 and write the events into /opt/course/15/container_kill.log.

Do you notice differences in the events both actions caused?
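
A sketch of the remaining steps (the Pod name and container ID are placeholders you look up on the fly):

k -n kube-system get pod -o wide | grep proxy        # find the kube-proxy Pod running on cluster2-node1
k -n kube-system delete pod <kube-proxy-pod-name>
sh /opt/course/15/cluster_events.sh                  # copy the kube-proxy related events into /opt/course/15/pod_kill.log

ssh cluster2-node1
crictl ps | grep kube-proxy
crictl stop <container-id> && crictl rm <container-id>

Then run the events script again and save the relevant lines into /opt/course/15/container_kill.log. Roughly, deleting the Pod produces a full new-Pod lifecycle (Scheduled, Pulled, Created, Started), while killing only the container shows the kubelet recreating just the container.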


Question 18 | Fix Kubelet

Use context: kubectl config use-context k8s-c3-CCC

There seems to be an issue with the kubelet not running on cluster3-node1. Fix it and confirm that cluster has node cluster3-node1 available in Ready state afterwards. You should be able to schedule a Pod on cluster3-node1 afterwards.

Write the reason of the issue into /opt/course/18/reason.txt.

(kubelet is broken; what's the reason?)

Check which node has the issue

k get node

SSH into the problematic node

ssh cluster3-node1

Check the kubelet service status

service kubelet status

Try starting kubelet

service kubelet start

It throws an error:

Apr 30 22:03:10 cluster3-node1 systemd[5989]: kubelet.service: Failed at step EXEC spawning /usr/local/bin/kubelet: No such file or directory
Apr 30 22:03:10 cluster3-node1 systemd[1]: kubelet.service: Main process exited, code=exited, status=203/EXEC
Apr 30 22:03:10 cluster3-node1 systemd[1]: kubelet.service: Failed with result 'exit-code'.

The error complains about a missing path, so check it:

/usr/local/bin/kubelet

-bash: /usr/local/bin/kubelet: No such file or directory

No such file there, so find where the kubelet binary actually is:

whereis kubelet

It turns out to be here:

kubelet: /usr/bin/kubelet

Fix the path in the kubelet service drop-in:

vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
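
The ExecStart line in that drop-in points at the wrong binary path; it should end up referencing /usr/bin/kubelet, roughly like this (the exact flags depend on the kubeadm setup):

# before: ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS ...
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS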

Reload and restart

systemctl daemon-reload && systemctl restart kubelet

Write down the reason and save it

echo 'wrong path to kubelet binary specified in service config' > /opt/course/18/reason.txt


Question 20 | Update Kubernetes Version and join cluster

Your coworker said node cluster3-node2 is running an older Kubernetes version and is not even part of the cluster. Update Kubernetes on that node to the exact version that's running on cluster3-controlplane1. Then add this node to the cluster. Use kubeadm for this.

(Upgrade a node, then join it to the cluster)

SSH into the node to upgrade

ssh cluster3-node2

Check the current version

kubeadm version

Upgrade (drain is skipped here because the node is not part of the cluster yet; if it were, you would drain it first to move its Pods off)

apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet='1.28.2-00' kubectl='1.28.2-00' && \
apt-mark hold kubelet kubectl

Restart kubelet

sudo systemctl daemon-reload
sudo systemctl restart kubelet

Confirm the upgraded version

kubelet --version

SSH into the controlplane node

ssh cluster3-controlplane1

Generate the join command

kubeadm token create --print-join-command

Paste the printed command on the newly upgraded node and you're done.
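
The output looks roughly like this (IP, token and hash are placeholders); run it with sudo on cluster3-node2, then confirm the node joins:

sudo kubeadm join <controlplane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

k get node   # cluster3-node2 should show up and eventually become Ready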


Question 24 | NetworkPolicy

There was a security incident where an intruder was able to access the whole cluster from a single hacked backend Pod.

To prevent this create a NetworkPolicy called np-backend in Namespace project-snake. It should allow the backend-* Pods only to:

  • connect to db1-* Pods on port 1111

  • connect to db2-* Pods on port 2222

Use the app label of Pods in your policy.

After implementation, connections from backend-* Pods to vault-* Pods on port 3333 should, for example, no longer work.

(Allow egress only to specific Pods and ports)

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress                    # policy is only about Egress
  egress:
    -                           # first rule
      to:                           # first condition "to"
      - podSelector:
          matchLabels:
            app: db1
      ports:                        # second condition "port"
      - protocol: TCP
        port: 1111
    -                           # second rule
      to:                           # first condition "to"
      - podSelector:
          matchLabels:
            app: db2
      ports:                        # second condition "port"
      - protocol: TCP
        port: 2222
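
A quick check (Pod names and IPs are placeholders taken from the get pod output):

k -n project-snake get pod -o wide
k -n project-snake exec <backend-pod> -- curl -s -m 2 <db1-pod-ip>:1111    # should still respond
k -n project-snake exec <backend-pod> -- curl -s -m 2 <vault-pod-ip>:3333  # should now time out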

Question 25 | Etcd Snapshot Save and Restore

Make a backup of etcd running on cluster3-controlplane1 and save it on the controlplane node at /tmp/etcd-backup.db.

Then create any kind of Pod in the cluster.

Finally restore the backup, confirm the cluster is still working and that the created Pod is no longer with us.

(etcd backup and restore)

Gather the required cert paths and take the backup

ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key

After the backup, create a throwaway Pod

kubectl run test --image=nginx

Go to the static Pod manifest directory

cd /etc/kubernetes/manifests/

Temporarily move everything out (this stops the apiserver, etcd and the other controlplane components)

mv * ..

Run the restore

ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db \
--data-dir /var/lib/etcd-backup \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key

In the etcd.yaml you moved out, change the etcd data hostPath to the restored directory, save it, and move the manifests back to /etc/kubernetes/manifests/; that's it.
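
The relevant part of etcd.yaml ends up roughly like this (a sketch; the volume layout may differ slightly between versions):

  volumes:
  - hostPath:
      path: /var/lib/etcd-backup    # was /var/lib/etcd
    name: etcd-data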

Run k get pod and the throwaway Pod created after the backup should be gone.

