Udemy Labs - Certified Kubernetes Application Developer - Lab: Lightning Lab - 1 Walkthrough

hyereen · January 30, 2025

Kubernetes


1 / 5
Weight: 20
Create a Persistent Volume called log-volume. It should make use of a storage class name manual. It should use RWX as the access mode and have a size of 1Gi. The volume should use the hostPath /opt/volume/nginx
Next, create a PVC called log-claim requesting a minimum of 200Mi of storage. This PVC should bind to log-volume.
Mount this in a pod called logger at the location /var/www/nginx. This pod should use the image nginx:alpine.
log-volume created with correct parameters?

Answer

controlplane ~ ➜  cat log-volume.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: log-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: manual
  hostPath:
    path: "/opt/volume/nginx"

controlplane ~ ➜  cat log-claim.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: log-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi


controlplane ~ ➜  cat logger.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: logger
spec:
  volumes:
    - name: logger-volume
      persistentVolumeClaim:
        claimName: log-claim
  containers:
    - name: logger
      image: nginx:alpine
      volumeMounts:
        - mountPath: "/var/www/nginx"
          name: logger-volume

controlplane ~ ➜  vi log-volume.yaml

controlplane ~ ➜  k apply -f log-volume.yaml 
persistentvolume/log-volume created

controlplane ~ ➜  vi log-claim.yaml

controlplane ~ ➜  k apply -f log-claim.yaml 
persistentvolumeclaim/log-claim created

controlplane ~ ➜  k get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
log-volume   1Gi        RWX            Retain           Bound    default/log-claim   manual         <unset>                          66s

controlplane ~ ➜  k get pvc
NAME        STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
log-claim   Bound    log-volume   1Gi        RWX            manual         <unset>                 7s

controlplane ~ ➜  vi logger.yaml

controlplane ~ ➜  k apply -f logger.yaml 
pod/logger created

controlplane ~ ➜  k get po
NAME           READY   STATUS    RESTARTS   AGE
logger         1/1     Running   0          54s
secure-pod     1/1     Running   0          36s
webapp-color   1/1     Running   0          11m

2 / 5
Weight: 20
We have deployed a new pod called secure-pod and a service called secure-service. Incoming or Outgoing connections to this pod are not working.
Troubleshoot why this is happening.
Make sure that incoming connection from the pod webapp-color are successful.
Important: Don't delete any current objects deployed.
Important: Don't Alter Existing Objects!
Connectivity working?

Answer
1. Check the existing network policies

controlplane ~ ✖ kubectl get networkpolicy --all-namespaces
NAMESPACE   NAME                  POD-SELECTOR     AGE
default     default-deny          <none>           14m
2. Add a network policy
  • Connections are blocked by the default-deny network policy, so add an appropriate network policy for secure-pod that lifts the block.
  • Create a network policy named test-network-policy that allows connections to secure-pod.
  • The policy permits only TCP port 80 traffic from the webapp-color pod to secure-pod -> it must be port 80!
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: secure-pod  # selects the pod this network policy applies to
  policyTypes:
  - Ingress  # govern only ingress (incoming) connections
  ingress:
  - from:
    - podSelector:
        matchLabels:
          name: webapp-color  # allow connections only from webapp-color
    ports:
    - protocol: TCP
      port: 80  # allow only port 80
controlplane ~ ✖ kubectl get networkpolicy --all-namespaces
NAMESPACE   NAME                  POD-SELECTOR     AGE
default     default-deny          <none>           14m
default     test-network-policy   run=secure-pod   2m9s
3. Verify connectivity
  • nc: short for Netcat, a utility for testing network connections, scanning ports, and transferring data -> widely used for debugging TCP/IP networks
  • -v (verbose mode): prints extra information while the command runs -> shows details about the connection state and each attempt
  • -z (zero I/O mode): port-scan mode -> Netcat sends and receives no actual data, it only checks whether the specified port is open -> used purely to attempt a connection and confirm the port is reachable
  • -w 5 (timeout): sets the timeout -> here, wait at most 5 seconds for the connection; if it cannot connect in time, it times out
controlplane ~ ➜  kubectl exec -it webapp-color -- sh
/opt # nc -v -z -w 5 secure-service 80
secure-service (172.20.62.40:80) open
/opt # kubectl get networkpolicy --all-namespaces
sh: kubectl: not found
/opt # ^C
/opt # exit
command terminated with exit code 130

3 / 5
Weight: 20
Create a pod called time-check in the dvl1987 namespace. This pod should run a container called time-check that uses the busybox image.
Create a config map called time-config with the data TIME_FREQ=10 in the same namespace.
The time-check container should run the command: while true; do date; sleep $TIME_FREQ;done and write the result to the location /opt/time/time-check.log.
The path /opt/time on the pod should mount a volume that lasts the lifetime of this pod.
Pod time-check configured correctly?

Answer
1. Create the namespace

kubectl create namespace dvl1987
2. Create the ConfigMap
  • Watch the namespace!
apiVersion: v1
kind: ConfigMap
metadata:
  name: time-config
  namespace: dvl1987
data:
  TIME_FREQ: "10"
3. Create the time-check pod
  • Watch the namespace
  • Note spec.volumes[].emptyDir: {}!
apiVersion: v1
kind: Pod
metadata:
  name: time-check
  namespace: dvl1987
spec:
  volumes:
  - name: log-volume
    emptyDir: {}  # exists for the lifetime of the pod and disappears when the pod is deleted
  containers:
  - name: time-check
    image: busybox
    env:
    - name: TIME_FREQ # this name and
      valueFrom:
        configMapKeyRef:
          name: time-config  
          key: TIME_FREQ # this key must match the name defined in time-config
    volumeMounts:
    - name: log-volume
      mountPath: /opt/time  # mount the volume at /opt/time
    command:
    - "/bin/sh"
    - "-c"
    - "while true; do date; sleep $TIME_FREQ; done > /opt/time/time-check.log" # $TIME_FREQ here must match the env var name defined above
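
The redirection at the end applies to the whole while loop, so every `date` line lands in the log file. A quick local sanity check of the same shell pattern (bounded to 3 iterations instead of forever, and writing to a hypothetical /tmp path rather than the pod's volume):

```shell
# Bounded stand-in for the container command: run the date/sleep loop
# three times, redirecting the entire loop's output to one file.
TIME_FREQ=1
i=0
while [ $i -lt 3 ]; do date; sleep $TIME_FREQ; i=$((i+1)); done > /tmp/time-check.log
wc -l < /tmp/time-check.log
```

The log ends up with one line per iteration, which is what the real pod accumulates in /opt/time/time-check.log.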

4 / 5
Weight: 20
Create a new deployment called nginx-deploy, with one single container called nginx, image nginx:1.16 and 4 replicas.
The deployment should use RollingUpdate strategy with maxSurge=1, and maxUnavailable=2.
Next upgrade the deployment to version 1.17.
Finally, once all pods are updated, undo the update and go back to the previous version.
Deployment created correctly?
Was the deployment created with nginx:1.16?
Was it upgraded to 1.17?
Deployment rolled back to 1.16?

Answer

controlplane ~ ➜  cat nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    app: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16
        ports:
        - containerPort: 80
controlplane ~ ➜  vi nginx-deploy.yaml

controlplane ~ ➜  k apply -f nginx-deploy.yaml 
deployment.apps/nginx-deploy created
controlplane ~ ➜  k set image deployments/nginx-deploy nginx=nginx:1.17
deployment.apps/nginx-deploy image updated

controlplane ~ ➜  k get pod
NAME                            READY   STATUS              RESTARTS   AGE
logger                          1/1     Running             0          31m
nginx-deploy-678c6b9b69-bpfrm   0/1     ContainerCreating   0          3s
nginx-deploy-678c6b9b69-j9jcj   0/1     ContainerCreating   0          3s
nginx-deploy-678c6b9b69-jbk8s   0/1     ContainerCreating   0          3s
nginx-deploy-fb4cbd588-pl56m    1/1     Running             0          3m14s
nginx-deploy-fb4cbd588-wtgwj    1/1     Running             0          3m14s
secure-pod                      1/1     Running             0          31m
webapp-color                    1/1     Running             0          42m

controlplane ~ ➜  k get pod
NAME                            READY   STATUS    RESTARTS   AGE
logger                          1/1     Running   0          32m
nginx-deploy-678c6b9b69-bpfrm   1/1     Running   0          14s
nginx-deploy-678c6b9b69-j9jcj   1/1     Running   0          14s
nginx-deploy-678c6b9b69-jbk8s   1/1     Running   0          14s
nginx-deploy-678c6b9b69-zdvc9   1/1     Running   0          4s
secure-pod                      1/1     Running   0          31m
webapp-color                    1/1     Running   0          42m

controlplane ~ ➜  k describe deployments.apps/nginx-deploy 
Name:                   nginx-deploy
Namespace:              default
CreationTimestamp:      Thu, 30 Jan 2025 06:23:32 +0000
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=nginx
Replicas:               4 desired | 4 updated | 4 total | 4 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  2 max unavailable, 1 max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:         nginx:1.17
    Port:          80/TCP
    Host Port:     0/TCP
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  nginx-deploy-fb4cbd588 (0/0 replicas created)
NewReplicaSet:   nginx-deploy-678c6b9b69 (4/4 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  3m36s  deployment-controller  Scaled up replica set nginx-deploy-fb4cbd588 to 4
  Normal  ScalingReplicaSet  25s    deployment-controller  Scaled up replica set nginx-deploy-678c6b9b69 to 1
  Normal  ScalingReplicaSet  25s    deployment-controller  Scaled down replica set nginx-deploy-fb4cbd588 to 2 from 4
  Normal  ScalingReplicaSet  25s    deployment-controller  Scaled up replica set nginx-deploy-678c6b9b69 to 3 from 1
  Normal  ScalingReplicaSet  15s    deployment-controller  Scaled down replica set nginx-deploy-fb4cbd588 to 1 from 2
  Normal  ScalingReplicaSet  15s    deployment-controller  Scaled up replica set nginx-deploy-678c6b9b69 to 4 from 3
  Normal  ScalingReplicaSet  15s    deployment-controller  Scaled down replica set nginx-deploy-fb4cbd588 to 0 from 1
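
The event sequence above matches the configured bounds: with 4 replicas, maxSurge=1 allows at most 5 pods in total during the rollout, and maxUnavailable=2 requires at least 2 pods to remain available. A small sketch of that arithmetic:

```python
# Rolling-update bounds implied by this deployment's strategy settings.
replicas = 4
max_surge = 1        # extra pods allowed above the desired count
max_unavailable = 2  # pods allowed to be unavailable during the update

max_total = replicas + max_surge            # upper bound on total pods
min_available = replicas - max_unavailable  # lower bound on available pods
print(max_total, min_available)  # 5 2
```

That is why the controller could immediately scale the old ReplicaSet down to 2 (keeping 2 available) while surging the new one up, instead of replacing pods one at a time.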

controlplane ~ ➜  k rollout undo deployment/nginx-deploy
deployment.apps/nginx-deploy rolled back

controlplane ~ ➜  k describe deployments.apps/nginx-deploy 
Name:                   nginx-deploy
Namespace:              default
CreationTimestamp:      Thu, 30 Jan 2025 06:23:32 +0000
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 3
Selector:               app=nginx
Replicas:               4 desired | 3 updated | 5 total | 2 available | 3 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  2 max unavailable, 1 max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:         nginx:1.16
    Port:          80/TCP
    Host Port:     0/TCP
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:  nginx-deploy-678c6b9b69 (2/2 replicas created)
NewReplicaSet:   nginx-deploy-fb4cbd588 (3/3 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  4m10s  deployment-controller  Scaled up replica set nginx-deploy-fb4cbd588 to 4
  Normal  ScalingReplicaSet  59s    deployment-controller  Scaled up replica set nginx-deploy-678c6b9b69 to 1
  Normal  ScalingReplicaSet  59s    deployment-controller  Scaled down replica set nginx-deploy-fb4cbd588 to 2 from 4
  Normal  ScalingReplicaSet  59s    deployment-controller  Scaled up replica set nginx-deploy-678c6b9b69 to 3 from 1
  Normal  ScalingReplicaSet  49s    deployment-controller  Scaled down replica set nginx-deploy-fb4cbd588 to 1 from 2
  Normal  ScalingReplicaSet  49s    deployment-controller  Scaled up replica set nginx-deploy-678c6b9b69 to 4 from 3
  Normal  ScalingReplicaSet  49s    deployment-controller  Scaled down replica set nginx-deploy-fb4cbd588 to 0 from 1
  Normal  ScalingReplicaSet  3s     deployment-controller  Scaled up replica set nginx-deploy-fb4cbd588 to 1 from 0
  Normal  ScalingReplicaSet  3s     deployment-controller  Scaled down replica set nginx-deploy-678c6b9b69 to 2 from 4
  Normal  ScalingReplicaSet  3s     deployment-controller  (combined from similar events): Scaled up replica set nginx-deploy-fb4cbd588 to 3 from 1

5 / 5
Weight: 20
Create a redis deployment with the following parameters:
Name of the deployment should be redis using the redis:alpine image. It should have exactly 1 replica.
The container should request for .2 CPU. It should use the label app=redis.
It should mount exactly 2 volumes.
a. An Empty directory volume called data at path /redis-master-data.
b. A configmap volume called redis-config at path /redis-master.
c. The container should expose the port 6379.
The configmap has already been created.
Deployment created correctly?

Answer

controlplane ~ ➜  cat redis.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:alpine
        ports:
        - containerPort: 6379
        volumeMounts:
          - mountPath: /redis-master-data
            name: data
          - mountPath: /redis-master
            name: redis-config
        resources:
          requests:
            cpu: "200m" # note: .2 CPU = 200 millicores
      volumes:
      - name: data
        emptyDir: {} # do not specify a size!
      - name: redis-config
        configMap:
          name: redis-config
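
The ".2 CPU" in the task equals 200 millicores; to my understanding Kubernetes accepts either the decimal form ("0.2") or the milli form ("200m") for resources.requests.cpu. The conversion, for reference:

```python
# Convert a decimal CPU request to its equivalent millicore string.
cpu_request = 0.2
millicores = int(cpu_request * 1000)
print(f"{millicores}m")  # 200m
```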
