1
Upgrade the current version of Kubernetes from 1.28.0 to exactly 1.29.0 using the kubeadm utility. Make sure that the upgrade is carried out one node at a time, starting with the controlplane node. To minimize downtime, the deployment gold-nginx should be rescheduled onto an alternate node before each node is upgraded.
Upgrade the controlplane node first, and drain node01 before upgrading it. The gold-nginx pods should subsequently run on the controlplane node.
Cluster Upgraded?
pods 'gold-nginx' running on controlplane?
$ k get pod -o wide
$ k get node -o wide
$ cd /etc/apt/sources.list.d
$ vi kubernetes.list
/etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /
$ apt update
$ apt-cache madison kubeadm
$ k drain controlplane --ignore-daemonsets
$ apt-get install kubeadm=1.29.0-1.1
$ kubeadm upgrade plan v1.29.0
$ kubeadm upgrade apply v1.29.0
$ apt-get install kubelet=1.29.0-1.1 kubectl=1.29.0-1.1
$ systemctl daemon-reload
$ systemctl restart kubelet
$ k uncordon controlplane
$ k get node -o wide
$ k drain node01 --ignore-daemonsets
$ k get pod -o wide
$ ssh node01
$ cd /etc/apt/sources.list.d
$ vi kubernetes.list
/etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /
$ apt update
$ apt-cache madison kubeadm
$ apt-get install kubeadm=1.29.0-1.1
$ kubeadm upgrade node
$ apt-get install kubelet=1.29.0-1.1
$ systemctl daemon-reload
$ systemctl restart kubelet
$ logout
$ k get node -o wide
$ k uncordon node01
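A note on why the jump from 1.28.0 to 1.29.0 works in a single pass: kubeadm only supports upgrading one minor version at a time. A minimal Python sketch of that skew rule, using the version strings from this task:

```python
# Sketch (not part of the lab): kubeadm supports upgrading one minor
# version at a time, so 1.28.x -> 1.29.x is fine but 1.28.x -> 1.30.x
# would need an intermediate upgrade.
def minor(version: str) -> int:
    """Extract the minor component of an 'X.Y.Z' version string."""
    return int(version.split(".")[1])

def skew_ok(current: str, target: str) -> bool:
    """True if target is at most one minor version ahead of current."""
    return 0 <= minor(target) - minor(current) <= 1

print(skew_ok("1.28.0", "1.29.0"))  # True
print(skew_ok("1.28.0", "1.30.0"))  # False
```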
2
Print the names of all deployments in the admin2406 namespace in the following format:
DEPLOYMENT CONTAINER_IMAGE READY_REPLICAS NAMESPACE
<deployment name> <container image used> <ready replica count> <Namespace>
The data should be sorted in increasing order of the deployment name.
Example:
DEPLOYMENT CONTAINER_IMAGE READY_REPLICAS NAMESPACE
deploy0 nginx:alpine 1 admin2406
Write the result to the file /opt/admin2406_data.
$ k get deploy -n admin2406 -o json
$ k get deploy -n admin2406 -o custom-columns=DEPLOYMENT:.metadata.name,\
> CONTAINER_IMAGE:.spec.template.spec.containers[].image,\
> READY_REPLICAS:.status.readyReplicas,\
> NAMESPACE:.metadata.namespace \
> --sort-by=.metadata.name > /opt/admin2406_data
$ cat /opt/admin2406_data
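What the custom-columns query above extracts can be sketched in Python: for each deployment object, pull the name, first container image, ready-replica count, and namespace, then sort the rows by name. The two sample objects below are made up for illustration, not read from the cluster:

```python
# Sketch: mimic the custom-columns + --sort-by=.metadata.name extraction
# on hypothetical deployment objects shaped like 'kubectl get deploy -o json'.
deployments = [
    {"metadata": {"name": "deploy2", "namespace": "admin2406"},
     "spec": {"template": {"spec": {"containers": [{"image": "redis:alpine"}]}}},
     "status": {"readyReplicas": 2}},
    {"metadata": {"name": "deploy0", "namespace": "admin2406"},
     "spec": {"template": {"spec": {"containers": [{"image": "nginx:alpine"}]}}},
     "status": {"readyReplicas": 1}},
]

rows = sorted(
    (d["metadata"]["name"],
     d["spec"]["template"]["spec"]["containers"][0]["image"],
     d["status"].get("readyReplicas", 0),  # absent when no replicas are ready
     d["metadata"]["namespace"])
    for d in deployments
)
print("DEPLOYMENT CONTAINER_IMAGE READY_REPLICAS NAMESPACE")
for row in rows:
    print(*row)
```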
3
A kubeconfig file called admin.kubeconfig has been created in /root/CKA. There is something wrong with the configuration. Troubleshoot and fix it.
Fix /root/CKA/admin.kubeconfig
$ k config view
$ cd CKA
$ vi admin.kubeconfig
/root/CKA/admin.kubeconfig
server: https://controlplane:6443
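The defect in this lab sits in the cluster's server URL; the corrected line above points at controlplane:6443, the default API server port. A small sketch of the check, where 4380 is a made-up example of a wrong port, not the lab's actual value:

```python
# Sketch: validate that a kubeconfig server URL targets the expected
# API server port (6443 by default).
from urllib.parse import urlparse

def server_port_ok(server_url: str, expected_port: int = 6443) -> bool:
    """Check that the kubeconfig server URL points at the expected port."""
    return urlparse(server_url).port == expected_port

print(server_port_ok("https://controlplane:4380"))  # False: hypothetical broken config
print(server_port_ok("https://controlplane:6443"))  # True: corrected config
```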
4
Create a new deployment called nginx-deploy, with image nginx:1.16 and 1 replica. Next upgrade the deployment to version 1.17 using rolling update.
Image: nginx:1.16
Task: Upgrade the version of the deployment to 1.17
$ k create deploy nginx-deploy --image=nginx:1.16 --replicas=1
$ k edit deploy nginx-deploy
containers:
- image: nginx:1.17
5
A new deployment called alpha-mysql has been deployed in the alpha namespace. However, the pods are not running. Troubleshoot and fix the issue. The deployment should make use of the persistent volume alpha-pv to be mounted at /var/lib/mysql and should use the environment variable MYSQL_ALLOW_EMPTY_PASSWORD=1 to make use of an empty root password.
Important: Do not alter the persistent volume.
$ k describe pod -n alpha alpha-mysql-5b944d484-tqnwb
persistentvolumeclaim "mysql-alpha-pvc" not found.
$ k get pvc -n alpha
$ vi pvc.yaml
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-alpha-pvc
namespace: alpha
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: slow
$ k get pod -n alpha
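Why the PVC above binds to alpha-pv: a claim binds to a volume whose storage class matches, whose access modes cover the request, and whose capacity is sufficient. A simplified sketch of that matching; the alpha-pv field values here are assumptions mirroring the PVC spec, not read from the cluster:

```python
# Sketch: simplified PV/PVC binding check (real binding also considers
# selectors, volume mode, and reserved claims).
pv = {"name": "alpha-pv", "capacity_gi": 1,
      "access_modes": {"ReadWriteOnce"}, "storage_class": "slow"}
pvc = {"name": "mysql-alpha-pvc", "request_gi": 1,
       "access_modes": {"ReadWriteOnce"}, "storage_class": "slow"}

def binds(pv: dict, pvc: dict) -> bool:
    """True if the claim's class, access modes, and size fit the volume."""
    return (pv["storage_class"] == pvc["storage_class"]
            and pvc["access_modes"] <= pv["access_modes"]
            and pv["capacity_gi"] >= pvc["request_gi"])

print(binds(pv, pvc))  # True
```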
6
Take the backup of ETCD at the location /opt/etcd-backup.db on the controlplane node.
$ k describe pod -n kube-system etcd-controlplane
$ ETCDCTL_API=3 etcdctl --endpoints=https://192.13.92.6:2379 \
> --cacert=/etc/kubernetes/pki/etcd/ca.crt \
> --cert=/etc/kubernetes/pki/etcd/server.crt \
> --key=/etc/kubernetes/pki/etcd/server.key \
> snapshot save /opt/etcd-backup.db
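The etcdctl invocation above is easy to get wrong under exam pressure; this sketch just assembles the same argument list (endpoint plus the TLS trio) without executing anything, with the endpoint and output path taken from the transcript:

```python
# Sketch: build the argv for 'etcdctl snapshot save' (run with
# ETCDCTL_API=3 in the environment) from the standard etcd PKI paths.
def etcd_backup_cmd(endpoint: str, out_path: str) -> list[str]:
    """Assemble the etcdctl snapshot command with the etcd TLS trio."""
    pki = "/etc/kubernetes/pki/etcd"
    return ["etcdctl",
            f"--endpoints={endpoint}",
            f"--cacert={pki}/ca.crt",
            f"--cert={pki}/server.crt",
            f"--key={pki}/server.key",
            "snapshot", "save", out_path]

print(" ".join(etcd_backup_cmd("https://192.13.92.6:2379", "/opt/etcd-backup.db")))
```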
7
Create a pod called secret-1401 in the admin1401 namespace using the busybox image. The container within the pod should be called secret-admin and should sleep for 4800 seconds.
The container should mount a read-only secret volume called secret-volume at the path /etc/secret-volume. The secret being mounted has already been created for you and is called dotfile-secret.
$ k run secret-1401 -n admin1401 --image=busybox --dry-run=client -o yaml > pod.yaml
$ vi pod.yaml
pod.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: secret-1401
name: secret-1401
namespace: admin1401
spec:
volumes:
- name: secret-volume
secret:
secretName: dotfile-secret
containers:
- image: busybox
name: secret-admin
command: ["sleep","4800"]
volumeMounts:
- name: secret-volume
readOnly: true
mountPath: "/etc/secret-volume"
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
$ k apply -f pod.yaml
$ k get pod -n admin1401