Manual Scheduling
1
A pod definition file nginx.yaml is given. Create a pod using the file.
Only create the POD for now. We will inspect its status next.
$ k apply -f nginx.yaml
2
What is the status of the created POD?
$ k get pods
3
Manually schedule the pod on node01.
$ k edit pod nginx
spec:
  nodeName: node01
  containers:
  - name: nginx
    image: nginx
Since nodeName cannot be changed on a running pod, the edit is rejected and saved to a temp file; delete the pod and recreate it from that file.
$ k delete pod nginx
$ k apply -f /tmp/kubectl-edit-1949610224.yaml
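The delete-and-apply cycle can also be collapsed into one step: kubectl replace --force deletes the existing pod and recreates it from the edited file.
$ k replace --force -f /tmp/kubectl-edit-1949610224.yaml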
4
Now schedule the same pod on the controlplane node.
$ k edit pod nginx
spec:
  nodeName: controlplane
  containers:
  - name: nginx
    image: nginx
$ k delete pod nginx
$ k apply -f /tmp/kubectl-edit-2384681651.yaml
Labels and Selectors
1
We have deployed a number of PODs. They are labelled with tier, env and bu. How many PODs exist in the dev environment (env)?
$ k get pods -l env=dev
2
How many PODs are in the finance business unit (bu)?
$ k get pods -l bu=finance
3
How many objects are in the prod environment including PODs, ReplicaSets and any other objects?
$ k get all -l env=prod
4
Identify the POD that is part of the prod environment, the finance BU, and the frontend tier.
$ k get pod -l env=prod,bu=finance,tier=frontend
5
A ReplicaSet definition file is given replicaset-definition-1.yaml. Attempt to create the replicaset; you will encounter an issue with the file. Try to fix it.
replicaset-definition-1.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-1
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: front-end
  template:
    metadata:
      labels:
        tier: front-end
    spec:
      containers:
      - name: nginx
        image: nginx
$ k apply -f replicaset-definition-1.yaml
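If the apply fails with an apiVersion error (a common cause in this exercise), confirm which API group ReplicaSet is served under; it is apps/v1, not v1:
$ kubectl explain replicaset | grep -i version
$ kubectl api-resources | grep -i replicaset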
Taints and Tolerations
1
How many nodes exist on the system?
Including the controlplane node.
$ k get nodes
2
Do any taints exist on node01 node?
$ k describe node node01 | grep -i taints
3
Create a taint on node01 with key of spray, value of mortein and effect of NoSchedule
$ k taint node node01 spray=mortein:NoSchedule
4
Create a new pod with the nginx image and pod name as mosquito
$ k run mosquito --image=nginx
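The mosquito pod should stay in Pending, since it has no toleration for the taint on node01; the scheduler's reason appears in the pod events:
$ k get pod mosquito
$ k describe pod mosquito | grep -iA 3 events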
5
Create another pod named bee with the nginx image, which has a toleration set to the taint mortein
bee.yaml
apiVersion: v1
kind: Pod
metadata:
  name: bee
spec:
  containers:
  - image: nginx
    name: bee
  tolerations:
  - key: spray
    value: mortein
    effect: NoSchedule
    operator: Equal
$ k apply -f bee.yaml
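If only the key matters, the toleration can also be written with operator: Exists, which matches any value of the spray taint (equivalent for this exercise):
tolerations:
- key: spray
  operator: Exists
  effect: NoSchedule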
6
Remove the taint on controlplane, which currently has the taint effect of NoSchedule.
$ k describe node controlplane | grep -i taints
Taints: node-role.kubernetes.io/control-plane:NoSchedule
$ k taint node controlplane node-role.kubernetes.io/control-plane:NoSchedule-
Node Affinity
1
How many Labels exist on node node01?
$ k describe node node01 | grep -iA 5 labels
2
Apply a label color=blue to node node01
$ k label node node01 color=blue
3
Create a new deployment named blue with the nginx image and 3 replicas.
$ k create deploy blue --image=nginx --replicas=3
4
Which nodes can the pods for the blue deployment be placed on?
$ k get pod -o wide
5
Set Node Affinity to the deployment to place the pods on node01 only.
$ k edit deploy blue
Add under spec.template.spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: color
                operator: In
                values:
                - blue
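Since this exercise matches a single label with equality, the shorter nodeSelector field (also under spec.template.spec) achieves the same placement:
      nodeSelector:
        color: blue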
6
Create a new deployment named red with the nginx image and 2 replicas, and ensure it gets placed on the controlplane node only.
Use the label key - node-role.kubernetes.io/control-plane - which is already set on the controlplane node.
red.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: red
spec:
  replicas: 2
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
$ k apply -f red.yaml
Resource Limits
1
A pod called rabbit is deployed. Identify the CPU requirements set on the Pod
$ k describe pod rabbit | grep -iA 5 request
2
Another pod called elephant has been deployed in the default namespace. It fails to get to a running state. Inspect this pod and identify the Reason why it is not running.
$ k describe pod elephant | grep -i reason
3
The elephant pod runs a process that consumes 15Mi of memory. Increase the limit of the elephant pod to 20Mi.
$ k edit pod elephant
    resources:
      limits:
        memory: 20Mi
Since resource limits cannot be changed on a running pod, the edit is saved to a temp file; delete the pod and recreate it from that file.
$ k delete pod elephant
$ k apply -f /tmp/kubectl-edit-1713705659.yaml
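For reference, a complete resources block pairs requests with limits, and a request must not exceed its limit (the values below are illustrative):
    resources:
      requests:
        memory: 15Mi
      limits:
        memory: 20Mi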
DaemonSets
1
How many DaemonSets are created in the cluster in all namespaces?
$ k get ds -A
2
On how many nodes are the pods scheduled by the DaemonSet kube-proxy?
$ k describe ds kube-proxy -n kube-system
3
What is the image used by the POD deployed by the kube-flannel-ds DaemonSet?
$ k describe ds -n kube-flannel kube-flannel-ds | grep -i image
4
Deploy a DaemonSet for FluentD Logging.
Name: elasticsearch
Namespace: kube-system
Image: registry.k8s.io/fluentd-elasticsearch:1.20
$ kubectl create deployment elasticsearch --image=registry.k8s.io/fluentd-elasticsearch:1.20 -n kube-system --dry-run=client -o yaml > fluentd.yaml
Edit the generated file: change kind to DaemonSet and remove the replicas, strategy and status fields.
fluentd.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - image: registry.k8s.io/fluentd-elasticsearch:1.20
        name: fluentd-elasticsearch
$ k apply -f fluentd.yaml
Static PODs
1
How many static pods exist in this cluster in all namespaces?
$ k get pods -A
// Static pods are the ones whose names carry no random suffix; they end with the node name instead (e.g. kube-apiserver-controlplane).
2
What is the path of the directory holding the static pod definition files?
$ ps -aux | grep -i kubelet
--config=/var/lib/kubelet/config.yaml
$ grep -i staticpod /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifests
3
What is the docker image used to deploy the kube-api server as a static pod?
$ grep -i image /etc/kubernetes/manifests/kube-apiserver.yaml
4
Create a static pod named static-busybox that uses the busybox image and the command sleep 1000
static-busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-busybox
spec:
  containers:
  - image: busybox
    name: busybox
    command: ["sleep", "1000"]
A static pod is created by placing its manifest in the staticPodPath, not by kubectl apply:
$ cp static-busybox.yaml /etc/kubernetes/manifests/
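The manifest can also be generated straight into the static pod directory in one command:
$ kubectl run static-busybox --image=busybox --dry-run=client -o yaml --command -- sleep 1000 > /etc/kubernetes/manifests/static-busybox.yaml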
5
We just created a new static pod named static-greenbox. Find it and delete it.
$ k get pods -o wide
static-greenbox-node01 1/1 Running 0 21s 10.244.1.4 node01 <none> <none>
$ ssh node01
$ cat /var/lib/kubelet/config.yaml | grep -i staticpod
staticPodPath: /etc/just-to-mess-with-you
$ cd /etc/just-to-mess-with-you
$ rm greenbox.yaml
Multiple Schedulers
1
What is the image used to deploy the kubernetes scheduler?
$ k describe pod -n kube-system kube-scheduler-controlplane | grep -i image
2
Let's create a configmap that the new scheduler will employ using the concept of ConfigMap as a volume.
A ConfigMap definition file, my-scheduler-configmap.yaml, is provided at /root/; it creates a ConfigMap named my-scheduler-config from the content of /root/my-scheduler-config.yaml.
$ k apply -f my-scheduler-configmap.yaml
3
Deploy an additional scheduler to the cluster following the given specification.
Use the manifest file provided at /root/my-scheduler.yaml. Use the same image as used by the default kubernetes scheduler.
my-scheduler.yaml
image: registry.k8s.io/kube-scheduler:v1.27.0
$ k apply -f my-scheduler.yaml
4
A POD definition file is given. Use it to create a POD with the new custom scheduler.
File is located at /root/nginx-pod.yaml
nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  schedulerName: my-scheduler
  containers:
  - image: nginx
    name: nginx
$ k apply -f nginx-pod.yaml
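To confirm the pod was placed by the custom scheduler rather than the default one, check the source of its Scheduled event:
$ k get events -o wide | grep -i my-scheduler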