1

You have access to multiple clusters from your main terminal through kubectl contexts. Write all those context names into /opt/course/1/contexts.

Next write a command to display the current context into /opt/course/1/context_default_kubectl.sh; the command should use kubectl.

Finally write a second command doing the same thing into /opt/course/1/context_default_no_kubectl.sh, but without the use of kubectl.
$ k config get-contexts -o name > /opt/course/1/contexts
$ echo "kubectl config current-context" > /opt/course/1/context_default_kubectl.sh
$ echo "cat ~/.kube/config | grep -i current" > /opt/course/1/context_default_no_kubectl.sh

2

Use context: kubectl config use-context k8s-c1-H

Create a single Pod of image httpd:2.4.41-alpine in Namespace default. 
The Pod should be named pod1 and the container should be named pod1-container. 
This Pod should only be scheduled on controlplane nodes.
Do not add new labels to any nodes.
$ kubectl config use-context k8s-c1-H

$ k get node
$ k describe node cluster1-controlplane1 | grep -iA 5 taint
$ k run pod1 --image=httpd:2.4.41-alpine --dry-run=client -o yaml > pod1.yaml
$ vi pod1.yaml
pod1.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: httpd:2.4.41-alpine
    name: pod1-container                       
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  tolerations:                                 
  - effect: NoSchedule                         
    key: node-role.kubernetes.io/control-plane
  nodeSelector:                                
    node-role.kubernetes.io/control-plane: ""

$ k apply -f pod1.yaml
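
The wide output confirms the Pod was scheduled onto the controlplane node:

$ k get pod pod1 -o wide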

3

Use context: kubectl config use-context k8s-c1-H

There are two Pods named o3db-* in Namespace project-c13.
C13 management asked you to scale the Pods down to one replica to save resources.
$ kubectl config use-context k8s-c1-H

$ k get pod -n project-c13
$ k get deploy -n project-c13
$ k get statefulset -n project-c13

$ k scale statefulset -n project-c13 o3db --replicas=1
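
Afterwards the StatefulSet should report 1/1 ready and only one o3db Pod should remain:

$ k get statefulset -n project-c13 o3db
$ k get pod -n project-c13 | grep o3db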

4

Use context: kubectl config use-context k8s-c1-H

Do the following in Namespace default. 
Create a single Pod named ready-if-service-ready of image nginx:1.16.1-alpine. 
Configure a LivenessProbe which simply executes command true. 
Also configure a ReadinessProbe which checks whether the URL http://service-am-i-ready:80 is reachable; you can use wget -T2 -O- http://service-am-i-ready:80 for this.
Start the Pod and confirm it isn't ready because of the ReadinessProbe.

Create a second Pod named am-i-ready of image nginx:1.16.1-alpine with label id: cross-server-ready.
The already existing Service service-am-i-ready should now have that second Pod as an endpoint.

Now the first Pod should be in ready state, confirm that.
$ kubectl config use-context k8s-c1-H

$ k run ready-if-service-ready --image=nginx:1.16.1-alpine --dry-run=client -o yaml > ready-if-service-ready-pod.yaml
$ vi ready-if-service-ready-pod.yaml
ready-if-service-ready-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ready-if-service-ready
  name: ready-if-service-ready
spec:
  containers:
  - image: nginx:1.16.1-alpine
    name: ready-if-service-ready
    resources: {}
    livenessProbe:                                      
      exec:
        command:
        - 'true'
    readinessProbe:
      exec:
        command:
        - sh
        - -c
        - 'wget -T2 -O- http://service-am-i-ready:80' 

$ k apply -f ready-if-service-ready-pod.yaml

$ k run am-i-ready --image=nginx:1.16.1-alpine --labels="id=cross-server-ready"
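
Once the second Pod is running, the Service picks it up as an endpoint and the ReadinessProbe of the first Pod starts to succeed; both can be confirmed with:

$ k describe svc service-am-i-ready | grep -i endpoints
$ k get pod ready-if-service-ready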

5

Use context: kubectl config use-context k8s-c1-H

There are various Pods in all namespaces. 

Write a command into /opt/course/5/find_pods.sh which lists all Pods sorted by their AGE (metadata.creationTimestamp).

Write a second command into /opt/course/5/find_pods_uid.sh which lists all Pods sorted by field metadata.uid. Use kubectl sorting for both commands.
$ kubectl config use-context k8s-c1-H

$ echo "kubectl get pod -A --sort-by=.metadata.creationTimestamp" > /opt/course/5/find_pods.sh
$ echo "kubectl get pod -A --sort-by=.metadata.uid" > /opt/course/5/find_pods_uid.sh

6

Use context: kubectl config use-context k8s-c1-H

Create a new PersistentVolume named safari-pv. 
It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.

Next create a new PersistentVolumeClaim in Namespace project-tiger named safari-pvc.
It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName.
The PVC should be bound to the PV correctly.

Finally create a new Deployment safari in Namespace project-tiger which mounts that volume at /tmp/safari-data.
The Pods of that Deployment should be of image httpd:2.4.41-alpine.
$ kubectl config use-context k8s-c1-H

$ vi safari-pv.yaml
safari-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: safari-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data"

$ k apply -f safari-pv.yaml

$ vi safari-pvc.yaml
safari-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: safari-pvc
  namespace: project-tiger
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

$ k apply -f safari-pvc.yaml

$ k create deploy safari -n project-tiger --image=httpd:2.4.41-alpine --dry-run=client -o yaml > safari.yaml
$ vi safari.yaml
safari.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: safari
  name: safari
  namespace: project-tiger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: safari
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: safari
    spec:                  
      containers:
      - image: httpd:2.4.41-alpine
        name: container
        volumeMounts:                               
        - name: safari-storage
          mountPath: /tmp/safari-data
      volumes:                                      
      - name: safari-storage
        persistentVolumeClaim:                      
          claimName: safari-pvc     
          
$ k apply -f safari.yaml
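
The PVC should now show STATUS Bound and the Deployment Pod should mount the volume at /tmp/safari-data:

$ k get pvc -n project-tiger safari-pvc
$ k get deploy -n project-tiger safari
$ k describe pod -n project-tiger -l app=safari | grep -A2 Mounts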

7

Use context: kubectl config use-context k8s-c1-H

The metrics-server has been installed in the cluster.
Your colleague would like to know the kubectl commands to:

show Nodes resource usage
show Pods and their containers resource usage

Please write the commands into /opt/course/7/node.sh and /opt/course/7/pod.sh.
$ kubectl config use-context k8s-c1-H

$ echo "kubectl top node" > /opt/course/7/node.sh
$ echo "kubectl top pod --containers=true" > /opt/course/7/pod.sh

8

Use context: kubectl config use-context k8s-c1-H

Ssh into the controlplane node with ssh cluster1-controlplane1. 
Check how the controlplane components kubelet, kube-apiserver, kube-scheduler, kube-controller-manager and etcd are started/installed on the controlplane node. 
Also find out the name of the DNS application and how it's started/installed on the controlplane node.

Write your findings into file /opt/course/8/controlplane-components.txt. 

The file should be structured like:

# /opt/course/8/controlplane-components.txt
kubelet: [TYPE]
kube-apiserver: [TYPE]
kube-scheduler: [TYPE]
kube-controller-manager: [TYPE]
etcd: [TYPE]
dns: [TYPE] [NAME]

Choices of [TYPE] are: not-installed, process, static-pod, pod
$ kubectl config use-context k8s-c1-H

$ ssh cluster1-controlplane1

$ find /usr/lib/systemd | grep kube
$ cd /etc/kubernetes/manifests && ls
$ k get pod -n kube-system
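
If it's unclear whether kubelet runs as a plain process or as a Pod, its systemd unit can be checked directly on the node (assuming the usual kubeadm setup):

$ systemctl status kubelet
$ ps aux | grep kubelet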

# /opt/course/8/controlplane-components.txt
kubelet: process
kube-apiserver: static-pod
kube-scheduler: static-pod
kube-controller-manager: static-pod
etcd: static-pod
dns: pod coredns

9

Use context: kubectl config use-context k8s-c2-AC

Ssh into the controlplane node with ssh cluster2-controlplane1.
Temporarily stop the kube-scheduler, in a way that allows you to start it again afterwards.

Create a single Pod named manual-schedule of image httpd:2.4-alpine, confirm it's created but not scheduled on any node.

Now you're the scheduler and have all its power, manually schedule that Pod on node cluster2-controlplane1.
Make sure it's running.

Start the kube-scheduler again and confirm it's running correctly by creating a second Pod named manual-schedule2 of image httpd:2.4-alpine and check if it's running on cluster2-node1.
$ kubectl config use-context k8s-c2-AC

$ ssh cluster2-controlplane1
$ cd /etc/kubernetes/manifests
$ mv kube-scheduler.yaml ..
$ k get pod -n kube-system | grep schedule
$ logout

$ k run manual-schedule --image=httpd:2.4-alpine
$ k get pod manual-schedule -o yaml > manual-schedule-pod.yaml
$ vi manual-schedule-pod.yaml
manual-schedule-pod.yaml

spec:
  nodeName: cluster2-controlplane1       # add nodeName under spec of the full exported manifest

$ k delete pod manual-schedule --force
$ k apply -f manual-schedule-pod.yaml
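
A wide listing confirms the Pod now runs on the controlplane node even though no scheduler is active:

$ k get pod manual-schedule -o wide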

$ ssh cluster2-controlplane1
$ cd /etc/kubernetes/manifests
$ mv ../kube-scheduler.yaml .
$ logout

$ k run manual-schedule2 --image=httpd:2.4-alpine
$ k get pod -o wide

10

Use context: kubectl config use-context k8s-c1-H

Create a new ServiceAccount processor in Namespace project-hamster. 
Create a Role and RoleBinding, both named processor as well. 
These should allow the new SA to only create Secrets and ConfigMaps in that Namespace.
$ kubectl config use-context k8s-c1-H

$ k create sa processor -n project-hamster
$ k create role processor -n project-hamster --verb=create --resource=secret,cm
$ k create rolebinding processor -n project-hamster --role processor --serviceaccount project-hamster:processor

$ k -n project-hamster auth can-i create secret --as system:serviceaccount:project-hamster:processor
$ k -n project-hamster auth can-i create configmap --as system:serviceaccount:project-hamster:processor
$ k -n project-hamster auth can-i create pod --as system:serviceaccount:project-hamster:processor

11

Use context: kubectl config use-context k8s-c1-H

Use Namespace project-tiger for the following. 
Create a DaemonSet named ds-important with image httpd:2.4-alpine and labels id=ds-important and uuid=18426a0b-5f59-4e10-923f-c0e078e82462. 
The Pods it creates should request 10 millicore cpu and 10 mebibyte memory. 
The Pods of that DaemonSet should run on all nodes, also controlplanes.
$ kubectl config use-context k8s-c1-H

$ k create deployment ds-important -n project-tiger --image=httpd:2.4-alpine --dry-run=client -o yaml > ds-important.yaml
$ vi ds-important.yaml
ds-important.yaml

apiVersion: apps/v1
kind: DaemonSet                                     
metadata:
  creationTimestamp: null
  labels:                                           
    id: ds-important                                
    uuid: 18426a0b-5f59-4e10-923f-c0e078e82462      
  name: ds-important
  namespace: project-tiger                          
spec:
  selector:
    matchLabels:
      id: ds-important                              
      uuid: 18426a0b-5f59-4e10-923f-c0e078e82462    
  template:
    metadata:
      creationTimestamp: null
      labels:
        id: ds-important                            
        uuid: 18426a0b-5f59-4e10-923f-c0e078e82462  
    spec:
      containers:
      - image: httpd:2.4-alpine
        name: ds-important
        resources:
          requests:                                 
            cpu: 10m                                
            memory: 10Mi                            
      tolerations:                                  
      - effect: NoSchedule                          
        key: node-role.kubernetes.io/control-plane
        
$ k apply -f ds-important.yaml
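
The DaemonSet should report as many desired/ready Pods as there are nodes, controlplane included:

$ k get ds -n project-tiger ds-important
$ k get pod -n project-tiger -l id=ds-important -o wide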

12

Use context: kubectl config use-context k8s-c1-H

Use Namespace project-tiger for the following. 
Create a Deployment named deploy-important with label id=very-important (the Pods should also have this label) and 3 replicas. 
It should contain two containers, the first named container1 with image nginx:1.17.6-alpine and the second one named container2 with image google/pause.

Only one Pod of that Deployment should ever run on a single worker node. We have two worker nodes: cluster1-node1 and cluster1-node2.
Because the Deployment has three replicas, the result should be that one Pod is running on each of the two nodes.
The third Pod won't be scheduled unless a new worker node is added.
Use topologyKey: kubernetes.io/hostname for this.

In a way we kind of simulate the behaviour of a DaemonSet here, but using a Deployment and a fixed number of replicas.
$ kubectl config use-context k8s-c1-H

$ k create deployment deploy-important -n project-tiger --image=nginx:1.17.6-alpine --dry-run=client -o yaml > deploy-important.yaml
$ vi deploy-important.yaml
deploy-important.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    id: very-important                  
  name: deploy-important
  namespace: project-tiger              
spec:
  replicas: 3                           
  selector:
    matchLabels:
      id: very-important                
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        id: very-important              
    spec:
      containers:
      - image: nginx:1.17.6-alpine
        name: container1                
      - image: google/pause             
        name: container2                
      affinity:                                           
        podAntiAffinity:                                 
          requiredDuringSchedulingIgnoredDuringExecution: 
          - labelSelector:                               
              matchExpressions:                           
              - key: id                                   
                operator: In                             
                values:                                   
                - very-important                         
            topologyKey: kubernetes.io/hostname    
            
$ k apply -f deploy-important.yaml
$ k get deploy -n project-tiger
$ k get pod -n project-tiger -o wide
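
Two Pods should be Running (one per worker node) and one Pending; its events name the podAntiAffinity rule as the reason it can't be scheduled:

$ k describe pod -n project-tiger -l id=very-important | grep -A6 Events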

13

Use context: kubectl config use-context k8s-c1-H

Create a Pod named multi-container-playground in Namespace default with three containers, named c1, c2 and c3. 
There should be a volume attached to that Pod and mounted into every container, but the volume shouldn't be persisted or shared with other Pods.

Container c1 should be of image nginx:1.17.6-alpine and have the name of the node where its Pod is running available as environment variable MY_NODE_NAME.

Container c2 should be of image busybox:1.31.1 and write the output of the date command every second in the shared volume into file date.log. 
You can use while true; do date >> /your/vol/path/date.log; sleep 1; done for this.

Container c3 should be of image busybox:1.31.1 and constantly send the content of file date.log from the shared volume to stdout. 
You can use tail -f /your/vol/path/date.log for this.

Check the logs of container c3 to confirm correct setup.
$ kubectl config use-context k8s-c1-H

$ k run multi-container-playground --image=nginx:1.17.6-alpine --dry-run=client -o yaml > multi-container-playground.yaml
$ vi multi-container-playground.yaml
multi-container-playground.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: multi-container-playground
  name: multi-container-playground
spec:
  containers:
  - image: nginx:1.17.6-alpine
    name: c1                                             
    env:                                                 
    - name: MY_NODE_NAME                                 
      valueFrom:                                         
        fieldRef:                                         
          fieldPath: spec.nodeName                       
    volumeMounts:                                         
    - name: vol                                           
      mountPath: /vol                                     
  - image: busybox:1.31.1                                 
    name: c2                                             
    command: ["sh", "-c", "while true; do date >> /vol/date.log; sleep 1; done"]  # add
    volumeMounts:                                         
    - name: vol                                           
      mountPath: /vol                                     
  - image: busybox:1.31.1                                 
    name: c3                                             
    command: ["sh", "-c", "tail -f /vol/date.log"]       
    volumeMounts:                                         
    - name: vol                                           
      mountPath: /vol                                     
  volumes:                                               
    - name: vol                                           
      emptyDir: {}                                       

$ k apply -f multi-container-playground.yaml
$ k get pod multi-container-playground
$ k exec multi-container-playground -c c1 -- env | grep MY
$ k logs multi-container-playground -c c3

14

Use context: kubectl config use-context k8s-c1-H

You're asked to find out the following information about the cluster k8s-c1-H:

How many controlplane nodes are available?
How many worker nodes are available?
What is the Service CIDR?
Which Networking (or CNI Plugin) is configured and where is its config file?
Which suffix will static pods have that run on cluster1-node1?
Write your answers into file /opt/course/14/cluster-info, structured like this:

# /opt/course/14/cluster-info
1: [ANSWER]
2: [ANSWER]
3: [ANSWER]
4: [ANSWER]
5: [ANSWER]
$ kubectl config use-context k8s-c1-H

$ k get node

$ ssh cluster1-controlplane1
$ cd /etc/kubernetes/manifests
$ cat kube-apiserver.yaml | grep range

$ cd /etc/cni/net.d
$ cat 10-weave.conflist
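
For the static Pod suffix, the existing static Pods already show the pattern: their names carry the node name appended, e.g. kube-apiserver-cluster1-controlplane1, so static Pods on cluster1-node1 end in -cluster1-node1:

$ k get pod -n kube-system | grep controlplane1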

# /opt/course/14/cluster-info
1: 1
2: 2
3: 10.96.0.0/12
4: Weave, /etc/cni/net.d/10-weave.conflist
5: -cluster1-node1

15

Use context: kubectl config use-context k8s-c2-AC

Write a command into /opt/course/15/cluster_events.sh which shows the latest events in the whole cluster, ordered by time (metadata.creationTimestamp). 
Use kubectl for it.

Now delete the kube-proxy Pod running on node cluster2-node1 and write the events this caused into /opt/course/15/pod_kill.log.

Finally kill the containerd container of the kube-proxy Pod on node cluster2-node1 and write the events into /opt/course/15/container_kill.log.

Do you notice differences in the events both actions caused?
$ kubectl config use-context k8s-c2-AC

$ echo "kubectl get events -A --sort-by=.metadata.creationTimestamp" > /opt/course/15/cluster_events.sh

$ k delete pod -n kube-system kube-proxy-z43af
$ sh /opt/course/15/cluster_events.sh > /opt/course/15/pod_kill.log

$ ssh cluster2-node1
$ crictl ps | grep kube-proxy
$ crictl rm 1e020b43c4423
$ crictl ps | grep kube-proxy
$ sh /opt/course/15/cluster_events.sh > /opt/course/15/container_kill.log

Comparing the two logs: deleting the Pod causes the DaemonSet to create and schedule a completely new kube-proxy Pod (Scheduled, Pulled, Created, Started events), whereas killing only the container makes the kubelet restart that container inside the existing Pod, without any scheduling events.

16

Use context: kubectl config use-context k8s-c1-H

Write the names of all namespaced Kubernetes resources (like Pod, Secret, ConfigMap...) into /opt/course/16/resources.txt.

Find the project-* Namespace with the highest number of Roles defined in it and write its name and amount of Roles into /opt/course/16/crowded-namespace.txt.
$ kubectl config use-context k8s-c1-H

$ k api-resources --namespaced -o name > /opt/course/16/resources.txt

$ k get ns
$ k get role -n project-c13 --no-headers | wc -l
$ k get role -n project-c14 --no-headers | wc -l
$ k get role -n project-hamster --no-headers | wc -l
$ k get role -n project-snake --no-headers | wc -l
$ k get role -n project-tiger --no-headers | wc -l
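
As a convenience, a small loop over all project-* Namespaces produces the same counts as the per-Namespace commands above (just a sketch):

$ for ns in $(k get ns -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep '^project-'); do echo "$ns: $(k get role -n $ns --no-headers | wc -l)"; done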

# /opt/course/16/crowded-namespace.txt
project-c14 300 