1

Take a backup of the etcd cluster and save it to /opt/etcd-backup.db.
First, look up the client endpoint and certificate paths in the etcd pod spec:
$ k describe pod -n kube-system etcd-controlplane
$ ETCDCTL_API=3 etcdctl --endpoints=https://192.13.82.6:2379 \
> --cacert=/etc/kubernetes/pki/etcd/ca.crt \
> --cert=/etc/kubernetes/pki/etcd/server.crt \
> --key=/etc/kubernetes/pki/etcd/server.key \
> snapshot save /opt/etcd-backup.db
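
To sanity-check the result, etcdctl can inspect the snapshot file offline (optional verification, not required by the task):
$ ETCDCTL_API=3 etcdctl snapshot status /opt/etcd-backup.db -w table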

2

Create a Pod called redis-storage with image: redis:alpine with a Volume of type emptyDir that lasts for the life of the Pod.

Requirements:
Pod named 'redis-storage' created
Pod 'redis-storage' uses Volume type of emptyDir
Pod 'redis-storage' uses volumeMount with mountPath = /data/redis
$ k run redis-storage --image=redis:alpine --dry-run=client -o yaml > redis-pod.yaml
$ vi redis-pod.yaml

redis-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: redis-storage
  name: redis-storage
spec:
  containers:
  - image: redis:alpine
    name: redis-storage
    volumeMounts:
    - mountPath: /data/redis
      name: redis-volume
  volumes:
  - name: redis-volume
    emptyDir:
      sizeLimit: 500Mi
      
$ k apply -f redis-pod.yaml
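
A quick check that the pod is running and the emptyDir is mounted at /data/redis (optional verification):
$ k get pod redis-storage
$ k describe pod redis-storage | grep -A3 -i mounts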

3

Create a new pod called super-user-pod with image busybox:1.28. Allow the pod to be able to set system_time.

The container should sleep for 4800 seconds.

Pod: super-user-pod
Container Image: busybox:1.28
Is SYS_TIME capability set for the container?
$ k run super-user-pod --image=busybox:1.28 --dry-run=client -o yaml > super-user-pod.yaml
$ vi super-user-pod.yaml
super-user-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: super-user-pod
  name: super-user-pod
spec:
  containers:
  - image: busybox:1.28
    name: super-user-pod
    command: ["sleep","4800"]
    securityContext:
      capabilities:
        add: ["SYS_TIME"] 
        
$ k apply -f super-user-pod.yaml
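
To confirm the capability actually landed in the spec, read the field back (optional; the jsonpath just echoes what was set above):
$ k get pod super-user-pod
$ k get pod super-user-pod -o jsonpath='{.spec.containers[0].securityContext.capabilities.add}'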

4

A pod definition file is created at /root/CKA/use-pv.yaml. Make use of this manifest file and mount the persistent volume called pv-1. Ensure the pod is running and the PV is bound.

mountPath: /data
persistentVolumeClaim Name: my-pvc
persistentVolume Claim configured correctly
pod using the correct mountPath
pod using the persistent volume claim?
$ k get pv
$ k get pvc

$ cd CKA
$ vi use-pv.yaml
use-pv.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: use-pv
  name: use-pv
spec:
  containers:
  - image: nginx
    name: use-pv
    volumeMounts:
      - mountPath: "/data"
        name: usepd
  volumes:
    - name: usepd
      persistentVolumeClaim:
        claimName: my-pvc
        
$ k describe pv pv-1
Match the PVC's accessModes and requested storage to what pv-1 offers, then write the claim:
$ vi my-pvc.yaml
my-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Mi
      
$ k apply -f .
$ k get pod
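
Both sides of the binding should now report Bound:
$ k get pvc my-pvc
$ k get pv pv-1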

5

Create a new deployment called nginx-deploy, with image nginx:1.16 and 1 replica. Next upgrade the deployment to version 1.17 using rolling update.

Deployment: nginx-deploy. Image: nginx:1.16
Image: nginx:1.16
Task: Upgrade the version of the deployment to 1.17
Task: Record the changes for the image upgrade
$ k create deploy nginx-deploy --image=nginx:1.16 --replicas=1
$ k set image deploy/nginx-deploy nginx=nginx:1.17 --record
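
--record is deprecated in newer kubectl releases, but it still satisfies the recording requirement here. To watch the rollout finish and confirm the change was recorded (optional verification):
$ k rollout status deploy/nginx-deploy
$ k rollout history deploy/nginx-deploy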

6

Create a new user called john. Grant him access to the cluster. John should have permission to create, list, get, update and delete pods in the development namespace. The private key exists at /root/CKA/john.key and the CSR at /root/CKA/john.csr.

Important Note: As of kubernetes 1.19, the CertificateSigningRequest object expects a signerName.

CSR: john-developer, Status: Approved
Role Name: developer, namespace: development, Resource: Pods
Access: User 'john' has appropriate permissions
Base64-encode the CSR; the single-line output goes into the request field below:
$ cat john.csr | base64 -w 0
$ vi john-developer.yaml
john-developer.yaml

apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: john-developer
spec:
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZEQ0NBVHdDQVFBd0R6RU5NQXNHQTFVRUF3d0VhbTlvYmpDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRApnZ0VQQURDQ0FRb0NnZ0VCQUxnVmVWTkJQaXkzNVQ2K0haY2F4RFMrYnM4WTNsOUhwaGhWRThia1JQTEh2R1I1CjFLSlJMQWxzenp0S3dDSUtsRzIwSFgxeW5qTjAxNUxpazRya2xCQUd0ZDBjNHcyM0tLWGFDOXhpZTZwWUh4QlkKWnlVMFlWRm01encyQmxnOTlJcjNCWUNTckFZVnVrR25wMElqa29FeTNMWFBML2R1MHZCQ1Zvb0RKK09sSlUzYwo4ZTBCc1RpKzBNZjZmRGVUOEcvanVKZkVQemdyNXJ3ZzF3d3VUcWVEVTE2T1NVeUFnMEREVFgrZG4rSnd2ZUdjCjNFeDc4dXMrZTN1RFk4YllVN1JWaVRKMDAxdnFuK05nQVRBejZ0dXJma1NqVmVKYVBqMkE4ZjRlTWMxRkpZMzEKU0djMUE1ZFJLb3VneFNYY2VPWWNEQmt0WmlsWWpQbEVTelRxb3NNQ0F3RUFBYUFBTUEwR0NTcUdTSWIzRFFFQgpDd1VBQTRJQkFRQjZoZlNOMVFvalltdVd1ZEN2eVJiVjZJTmJhYmRTM0tHVWFMUDlhcTFwNVJIbWNOV1RNOU5jClVMc09XcFNaRElvb1RiNDNsNHVLckNLamJJWnNiT2o3elQrb2YrRkVQL1lCN0hxRUpqRUVVR29ESE1KVWd6U0IKQmlpRmE3VXJGTWVJcm5IS0JYMXN0MHlqR0J4YVErdFp3bkhEYXRYRzVacmU2MkpLMXhubklqeXFWNEFnMElUNApEeDcwWGpPRnhOMDJVRytZZmZwOFJkV1AyWjJvOE81M3NHL0lacE5icmx5Y2dQSENKR3hTSTFTMSsvZHJBZUNCCjdEOVgyRjV4cmJnTHhXelhOdURuNlFzM0xDQWN0Nms4dzBhV2VnUy9TSElHQnRrL254TUhkVERtSWVXZmJIeloKdmpRSjFKejB1T3BVUStYMmRIZjBmT3hJN1UydVZndXcKLS0tLS1FTkQgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCg==
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400
  usages:
  - client auth
  
$ k apply -f john-developer.yaml
$ k certificate approve john-developer
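
The CSR should now show Approved,Issued:
$ k get csr john-developer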

$ k create role developer -n development --verb=create,list,get,update,delete --resource=pods
$ k create rolebinding developer-binding-john --role=developer --user=john -n development

$ k auth can-i get pod --as=john --namespace=development
$ k auth can-i get node --as=john --namespace=development
The first should answer yes and the second no, since the role only covers pods in the development namespace.

7

Create an nginx pod called nginx-resolver using image nginx, expose it internally with a service called nginx-resolver-service. Test that you are able to look up the service and pod names from within the cluster. Use the image busybox:1.28 for the DNS lookup. Record the results in /root/CKA/nginx.svc and /root/CKA/nginx.pod.

Pod: nginx-resolver created
Service DNS Resolution recorded correctly
Pod DNS resolution recorded correctly
$ k run nginx-resolver --image=nginx
$ k expose pod nginx-resolver --name=nginx-resolver-service --port=80 --target-port=80
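
A pod's DNS name is its IP with the dots replaced by dashes (so 10.244.192.4 in the default namespace becomes 10-244-192-4.default.pod); grab the IP first:
$ k get pod nginx-resolver -o wide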

$ k run busybox --image=busybox:1.28 --restart=Never --rm -i -- nslookup nginx-resolver-service > /root/CKA/nginx.svc
$ cat nginx.svc
$ k run busybox --image=busybox:1.28 --restart=Never --rm -i -- nslookup 10-244-192-4.default.pod > /root/CKA/nginx.pod
$ cat nginx.pod

8

Create a static pod on node01 called nginx-critical with image nginx and make sure that it is recreated/restarted automatically in case of a failure.

Use /etc/kubernetes/manifests as the Static Pod path for example.
$ k run nginx-critical --image=nginx --dry-run=client -o yaml > nginx-critical.yaml
$ scp nginx-critical.yaml node01:/etc/kubernetes/manifests/
Static pods surface in the API as mirror pods named <pod>-<node>, so this one appears as nginx-critical-node01:
$ k get pod
$ k describe pod nginx-critical-node01 | grep -i node
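
If the mirror pod doesn't appear, confirm the kubelet on node01 is actually watching that directory (on kubeadm clusters staticPodPath is normally /etc/kubernetes/manifests):
$ ssh node01
$ grep staticPodPath /var/lib/kubelet/config.yaml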