1

Create a new ServiceAccount with the name pvviewer. Grant this ServiceAccount access to list all PersistentVolumes in the cluster by creating an appropriate ClusterRole called pvviewer-role and a ClusterRoleBinding called pvviewer-role-binding.
Next, create a pod called pvviewer with the image redis and the ServiceAccount pvviewer in the default namespace.


ServiceAccount: pvviewer
ClusterRole: pvviewer-role
ClusterRoleBinding: pvviewer-role-binding
Pod: pvviewer
Pod configured to use ServiceAccount pvviewer?
$ k create sa pvviewer
$ k create clusterrole pvviewer-role --verb=list --resource=pv
$ k create clusterrolebinding pvviewer-role-binding --clusterrole=pvviewer-role --serviceaccount=default:pvviewer
$ k run pvviewer --image=redis --dry-run=client -o yaml > pvviewer-pod.yaml
$ vi pvviewer-pod.yaml
pvviewer-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pvviewer
  name: pvviewer
spec:
  serviceAccountName: pvviewer
  containers:
  - image: redis
    name: pvviewer
    
$ k apply -f pvviewer-pod.yaml 
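
To double-check the RBAC wiring, kubectl can impersonate the ServiceAccount (a quick sanity check, not part of the graded task):

$ k auth can-i list persistentvolumes --as=system:serviceaccount:default:pvviewer
yes
$ k get pod pvviewer -o jsonpath='{.spec.serviceAccountName}'
pvviewer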

2

List the InternalIP of all nodes of the cluster. Save the result to a file /root/CKA/node_ips.

Answer should be in the format: InternalIP of controlplane<space>InternalIP of node01 (in a single line)
$ k get node -o json
$ k get node -o jsonpath='{$.items[*].status.addresses[?(@.type=="InternalIP")].address}' > /root/CKA/node_ips
$ cat /root/CKA/node_ips
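
The filter ?(@.type=="InternalIP") keeps only the address entry of that type from each node's status.addresses array, and items[*] walks every node, so the result is the two IPs on one line separated by a space. The values can be cross-checked against the INTERNAL-IP column of:

$ k get node -o wide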

3

Create a pod called multi-pod with two containers.
Container 1: name: alpha, image: nginx
Container 2: name: beta, image: busybox, command: sleep 4800

Environment Variables:
Container 1:
name: alpha

Container 2:
name: beta

Pod Name: multi-pod
Container 1: alpha
Container 2: beta
Container beta commands set correctly?
Container 1 Environment Value Set
Container 2 Environment Value Set
$ k run multi-pod --image=nginx --dry-run=client -o yaml > multi-pod.yaml
$ vi multi-pod.yaml
multi-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: multi-pod
  name: multi-pod
spec:
  containers:
  - image: nginx
    name: alpha
    env:
    - name: name
      value: alpha
  - image: busybox
    name: beta
    command: ["sleep","4800"]
    env:
    - name: name
      value: beta
      
$ k apply -f multi-pod.yaml 
$ k describe pod multi-pod
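
To verify the env vars in both containers (printenv is available in the nginx and busybox images):

$ k exec multi-pod -c alpha -- printenv name
alpha
$ k exec multi-pod -c beta -- printenv name
beta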

4

Create a Pod called non-root-pod, image: redis:alpine

runAsUser: 1000
fsGroup: 2000

Pod non-root-pod fsGroup configured
Pod non-root-pod runAsUser configured
$ k run non-root-pod --image=redis:alpine --dry-run=client -o yaml > non-root-pod.yaml
$ vi non-root-pod.yaml
non-root-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: non-root-pod
  name: non-root-pod
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - image: redis:alpine
    name: non-root-pod
    
$ k apply -f non-root-pod.yaml
$ k describe pod non-root-pod 
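
A quick check that the securityContext took effect (the exact group list can vary by image):

$ k exec non-root-pod -- id
uid=1000 gid=0(root) groups=2000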

5

We have deployed a new pod called np-test-1 and a service called np-test-service. Incoming connections to this service are not working. Troubleshoot and fix it.
Create a NetworkPolicy named ingress-to-nptest that allows incoming connections to the service over port 80.


Important: Don't delete any current objects deployed.
Important: Don't Alter Existing Objects!
NetworkPolicy: Applied to All sources (Incoming traffic from all pods)?
NetworkPolicy: Correct Port?
NetworkPolicy: Applied to correct Pod?
$ k describe pod np-test-1
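
The describe is for grabbing the pod's labels (run=np-test-1) for the podSelector. To see why traffic is blocked in the first place, list the existing policies; in this lab a deny-all policy is typically already in place:

$ k get netpol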
$ vi ingress-to-nptest.yaml
ingress-to-nptest.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-to-nptest
spec:
  podSelector:
    matchLabels:
      run: np-test-1
  ingress:
  - ports:
    - protocol: TCP
      port: 80
  policyTypes:
  - Ingress
  
$ k apply -f ingress-to-nptest.yaml
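
To test the policy from an ad-hoc pod (busybox wget takes -T for the timeout; --rm deletes the pod on exit):

$ k run test-np --rm -it --image=busybox -- wget -qO- -T 2 np-test-service:80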

6

Taint the worker node node01 to be Unschedulable. Once done, create a pod called dev-redis, image redis:alpine, to ensure workloads are not scheduled to this worker node. Finally, create a new pod called prod-redis and image: redis:alpine with toleration to be scheduled on node01.

key: env_type, value: production, operator: Equal and effect: NoSchedule

Key = env_type
Value = production
Effect = NoSchedule
pod 'dev-redis' (no tolerations) is not scheduled on node01?
Create a pod 'prod-redis' to run on node01
$ k taint node node01 env_type=production:NoSchedule
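
Confirm the taint landed:

$ k describe node node01 | grep -i taint
Taints:             env_type=production:NoSchedule
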
$ k run dev-redis --image=redis:alpine
$ k run prod-redis --image=redis:alpine --dry-run=client -o yaml > prod-redis.yaml
$ vi prod-redis.yaml
prod-redis.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: prod-redis
  name: prod-redis
spec:
  containers:
  - image: redis:alpine
    name: prod-redis
  tolerations:
  - key: env_type
    operator: Equal
    value: production
    effect: NoSchedule
    
$ k apply -f prod-redis.yaml
$ k get pod -o wide
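
dev-redis (no toleration) should land anywhere but node01, or stay Pending if no other node accepts it, while prod-redis is scheduled on node01 thanks to the toleration.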

7

Create a pod called hr-pod in the hr namespace belonging to the production environment and frontend tier.
image: redis:alpine

Use appropriate labels and create all the required objects if they do not exist in the system already.
$ k create ns hr
$ k run hr-pod -n hr --image=redis:alpine -l environment=production,tier=frontend
$ k describe pod -n hr hr-pod
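
The labels can also be checked directly:

$ k get pod hr-pod -n hr --show-labels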

8

A kubeconfig file called super.kubeconfig has been created under /root/CKA. There is something wrong with the configuration. Troubleshoot and fix it.
$ cd /root/CKA
$ cat super.kubeconfig
$ k config view
$ vi super.kubeconfig
super.kubeconfig

The cluster's server field did not point at the real API server endpoint. kube-apiserver serves on port 6443, so correct the line to:

server: https://controlplane:6443
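
Verify the repaired file by pointing kubectl at it explicitly:

$ k get nodes --kubeconfig=/root/CKA/super.kubeconfig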

9

We have created a new deployment called nginx-deploy. Scale the deployment to 3 replicas. Have the replicas increased? Troubleshoot the issue and fix it.
$ k scale deployment nginx-deploy --replicas=3
$ k get deploy
$ k get all -A

pod/kube-contro1ler-manager-controlplane   0/1     ErrImagePull
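
The replica count never moves because Deployments are reconciled by kube-controller-manager, and that pod is itself failing. The pod name hints at the cause: contro1ler is spelled with a digit one, so no such image exists.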

$ cd /etc/kubernetes/manifests
$ grep -n contro1ler kube-controller-manager.yaml

6:    component: kube-contro1ler-manager
8:  name: kube-contro1ler-manager
13:    - kube-contro1ler-manager
31:    image: registry.k8s.io/kube-contro1ler-manager:v1.29.0
43:    name: kube-contro1ler-manager

$ vi kube-controller-manager.yaml
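
Editing in vi works; an equivalent one-liner for the same substitution (digit 1 back to letter l) is:

$ sed -i 's/contro1ler/controller/g' kube-controller-manager.yaml

kubelet picks up the corrected static-pod manifest automatically.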
$ k get pod -n kube-system
$ k get deploy
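
Once kube-controller-manager is Running again, nginx-deploy should report 3/3 ready replicas.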