Kubespray - k8s Installation Guide

문정환 · September 25, 2023

https://github.com/kubernetes-sigs/kubespray/tree/master/docs

https://github.com/kubernetes-sigs/kubespray


## Inventory Configuration
cp -rpf inventory/sample/ inventory/mycluster
Then edit inventory/mycluster/inventory.ini to list your nodes.
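A minimal sketch of that file, assuming a hypothetical three-node layout (hostnames and IPs are placeholders; the group names follow the Kubespray sample inventory):

```bash
# hypothetical hosts; adjust names and IPs to your environment
cat > inventory/mycluster/inventory.ini <<'EOF'
[all]
node1 ansible_host=10.0.0.1 ip=10.0.0.1
node2 ansible_host=10.0.0.2 ip=10.0.0.2
node3 ansible_host=10.0.0.3 ip=10.0.0.3

[kube_control_plane]
node1

[etcd]
node1

[kube_node]
node2
node3

[k8s_cluster:children]
kube_control_plane
kube_node
EOF
```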

## Variable Configuration
Cluster-wide variables live under inventory/mycluster/group_vars; adjust them before running the playbook.
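As one example override, here is the same CNI switch that the bootstrap script below performs with sed (this guide uses flannel instead of the calico default):

```bash
# switch the CNI plugin from the calico default to flannel
sed -i 's/^kube_network_plugin: .*/kube_network_plugin: flannel/' \
  inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
```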

## Running the Playbook
ansible all -m ping -i inventory/mycluster/inventory.ini
ansible-playbook -i inventory/mycluster/inventory.ini cluster.yml -b
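Once the playbook completes, a quick sanity check on the first control plane node (Kubespray writes the admin kubeconfig to /etc/kubernetes/admin.conf, readable only by root):

```bash
# run on the master node after the playbook finishes
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes -o wide
```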

## Summary

OS: Ubuntu 20.04, 22.04
k8s: 1.24.6
CNI: flannel
CRI: docker (latest)
kubespray: release-2.20

## How to use this repository

1. Install k8s and set up one master node using bootstrap.sh:

#!/bin/bash
#-----------------------------------
#
# do not run this script as root
#
#-----------------------------------

IP=                 # set to the master node's IP address before running
CURRENT_DIR=$PWD

sudo docker login   # log in to a container registry (e.g. Docker Hub)

# prerequisite
cd ~

# disable firewall
sudo systemctl stop ufw
sudo systemctl disable ufw

# install basic packages
sudo apt update
sudo apt install -y python3-pip

cat <<EOF | sudo tee -a /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sudo sysctl --system

# ssh configuration
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub ${USER}@${IP}

# k8s installation via kubespray
git clone -b release-2.20 https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip install -r requirements.txt

echo "export PATH=${HOME}/.local/bin:${PATH}" | sudo tee ${HOME}/.bashrc > /dev/null
export PATH=${HOME}/.local/bin:${PATH}
source ${HOME}/.bashrc

cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(${IP})
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

# use docker container runtime
sed -i "s/docker_version: '20.10'/docker_version: 'latest'/g" roles/container-engine/docker/defaults/main.yml
sed -i "s/docker_containerd_version: 1.6.4/docker_containerd_version: latest/g" roles/download/defaults/main.yml
sed -i "s/container_manager: containerd/container_manager: docker/g" inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
sed -i "s/# container_manager: containerd/container_manager: docker/g" inventory/mycluster/group_vars/all/etcd.yml
sed -i "s/host_architecture }}]/host_architecture }} signed-by=\/etc\/apt\/keyrings\/docker.gpg]/g" roles/container-engine/docker/vars/ubuntu.yml
sed -i "s/# docker_cgroup_driver: systemd/docker_cgroup_driver: systemd/g" inventory/mycluster/group_vars/all/docker.yml
sed -i "s/etcd_deployment_type: host/etcd_deployment_type: docker/g" inventory/mycluster/group_vars/all/etcd.yml
sed -i "s/# docker_storage_options: -s overlay2/docker_storage_options: -s overlay2/g" inventory/mycluster/group_vars/all/docker.yml

# download docker gpg
sudo mkdir -m 0755 -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# change network plugin as flannel
sed -i "s/kube_network_plugin: calico/kube_network_plugin: flannel/g" roles/kubespray-defaults/defaults/main.yaml
sed -i "s/kube_network_plugin: calico/kube_network_plugin: flannel/g" inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml

# enable dashboard / disable dashboard login / change dashboard service as nodeport
sed -i "s/# dashboard_enabled: false/dashboard_enabled: true/g" inventory/mycluster/group_vars/k8s_cluster/addons.yml
sed -i "s/dashboard_skip_login: false/dashboard_skip_login: true/g" roles/kubernetes-apps/ansible/defaults/main.yml
sed -i'' -r -e "/targetPort: 8443/a\  type: NodePort" roles/kubernetes-apps/ansible/templates/dashboard.yml.j2

# enable helm
sed -i "s/helm_enabled: false/helm_enabled: true/g" inventory/mycluster/group_vars/k8s_cluster/addons.yml

# disable nodelocaldns
sed -i "s/enable_nodelocaldns: true/enable_nodelocaldns: false/g" inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml

# enable kubectl & kubeadm auto-completion
echo "source <(kubectl completion bash)" >> ${HOME}/.bashrc
echo "source <(kubeadm completion bash)" >> ${HOME}/.bashrc
echo "source <(kubectl completion bash)" | sudo tee -a /root/.bashrc
echo "source <(kubeadm completion bash)" | sudo tee -a /root/.bashrc
source ${HOME}/.bashrc

ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml -K
sleep 30
cd ~

# enable kubectl in admin account and root
mkdir -p ${HOME}/.kube
sudo cp -i /etc/kubernetes/admin.conf ${HOME}/.kube/config
sudo chown ${USER}:${USER} ${HOME}/.kube/config

# create sa and clusterrolebinding of dashboard to get cluster-admin token
kubectl apply -f ${CURRENT_DIR}/sa.yaml
kubectl apply -f ${CURRENT_DIR}/clusterrolebinding.yaml

2. Run 'add_node.sh' on each new node before adding it into the cluster:

#!/bin/bash
#-----------------------------------
#
# do not run this script as root
#
#-----------------------------------

# prerequisite
cd ~

# disable firewall
sudo systemctl stop ufw
sudo systemctl disable ufw

# install basic packages
sudo apt update
sudo apt install -y nfs-common

cat <<EOF | sudo tee -a /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sudo sysctl --system

# download docker gpg
sudo mkdir -m 0755 -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

3. From the master node, run ssh-copy-id <remote-user>@<new-node> so Ansible can reach the new node:

  • ssh-copy-id -i /home/tester/.ssh/id_rsa.pub tester@worker1

## How to access the dashboard

1. kubectl create token -n kube-system admin-user

2. Copy the token, then paste it into the dashboard login page in the browser.
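Since bootstrap.sh switches the dashboard Service to NodePort, the port to open in the browser can be looked up first (the service name below is assumed from Kubespray's dashboard addon, which deploys into kube-system):

```bash
# find the NodePort assigned to the dashboard
kubectl -n kube-system get svc kubernetes-dashboard
# then browse to https://<any-node-ip>:<node-port>/ and paste the token
```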

# clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

# sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

## How to add a worker

1. Add the new node to inventory/mycluster/hosts.yaml.

2. ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root facts.yml -K

3-1. (option 1) ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root scale.yml -K (can be limited to the new node; see the sketch below)

3-2. (option 2) ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml -K
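A sketch of option 1 restricted to the new node; --limit keeps Ansible from re-running tasks on the existing nodes (the node name "worker2" is a placeholder):

```bash
# hypothetical new node "worker2"
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root \
  scale.yml --limit=worker2 -K
```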

## How to add a control plane (not etcd)

1. Add the new node to inventory/mycluster/hosts.yaml.

2. ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml -K
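A quick check that the new control plane registered (the label below is the one kubeadm applies to control plane nodes on k8s 1.24):

```bash
# list only the control plane nodes
kubectl get nodes -l node-role.kubernetes.io/control-plane
```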

## How to add a control plane (also etcd)

1. Add the new node to inventory/mycluster/hosts.yaml (the etcd node count must be odd).

2. ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml --limit=etcd,kube_control_plane -e ignore_assert_errors=yes -K

3. ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root upgrade-cluster.yml --limit=etcd,kube_control_plane -e ignore_assert_errors=yes -K

4. On every control plane node, add the new etcd member to the --etcd-servers parameter in /etc/kubernetes/manifests/kube-apiserver.yaml (see the sketch below).
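A sketch of step 4's manual edit, with hypothetical member IPs; the flag is a comma-separated list of etcd client URLs:

```bash
# on each control plane node, check the current flag first
sudo grep -- --etcd-servers /etc/kubernetes/manifests/kube-apiserver.yaml
# expected form after the edit (IPs are placeholders):
#   - --etcd-servers=https://10.0.0.1:2379,https://10.0.0.2:2379,https://10.0.0.4:2379
# kubelet restarts the kube-apiserver static pod automatically once the file is saved
```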

## How to remove a worker or control plane (not etcd)

1. Make sure the node to be removed is still listed in inventory/mycluster/hosts.yaml.

2-1. [online] ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root remove-node.yml -e node=<NODE_NAME> -K

2-2. [offline] ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root remove-node.yml -e node=<NODE_NAME> -e reset_nodes=false -e allow_ungraceful_removal=true -K

3. Remove the deleted node from inventory/mycluster/hosts.yaml.
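A concrete sketch of the online removal in step 2-1, with a hypothetical node name:

```bash
# remove-node.yml drains the node, removes it from the cluster, and resets it
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root \
  remove-node.yml -e node=worker2 -K
```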

## How to remove and replace a control plane (also etcd) when one is down

1. Make sure the node to be removed is still listed in inventory/mycluster/hosts.yaml.

2-1. [online] ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root remove-node.yml -e node=<NODE_NAME> -K

2-2. [unofficial][offline] ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root remove-node.yml -e node=<NODE_NAME> -e reset_nodes=false -e allow_ungraceful_removal=true -K

3. Remove the deleted node from inventory/mycluster/hosts.yaml and add the replacement node (the etcd node count must be odd).

4. ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml -K

5. ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root upgrade-cluster.yml --limit=etcd,kube_control_plane -e ignore_assert_errors=yes -K

6. On every control plane node, add the new etcd server's IP to the --etcd-servers parameter in /etc/kubernetes/manifests/kube-apiserver.yaml (see the membership check below).
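Before step 6, it can help to confirm the surviving etcd membership. A sketch assuming Kubespray's usual certificate layout under /etc/ssl/etcd/ssl (paths and file names are assumptions; if etcdctl is not on the host, run it inside the etcd container via docker exec):

```bash
# run on a surviving etcd node; cert paths follow Kubespray's defaults
sudo etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/ssl/etcd/ssl/ca.pem \
  --cert=/etc/ssl/etcd/ssl/admin-$(hostname).pem \
  --key=/etc/ssl/etcd/ssl/admin-$(hostname)-key.pem
```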

## How to replace the first deployed control plane (etcd or not)

1. In the [kube_control_plane] group of inventory/mycluster/hosts.yaml, move the control plane to be removed to the bottom of the list.

2-1. [online] ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root remove-node.yml -e node=<NODE_NAME> -K

2-2. [unofficial][offline] ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root remove-node.yml -e node=<NODE_NAME> -e reset_nodes=false -e allow_ungraceful_removal=true -K

3. Using "kubectl edit cm -n kube-public cluster-info", change the removed control plane's IP in the [server] field to the IP of the incoming control plane; if the certificate changed, also update the [certificate-authority-data] field (see the sketch below).

4. Remove the deleted node from inventory/mycluster/hosts.yaml and add the new node (the etcd node count must be odd).

5. ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml --limit=etcd,kube_control_plane -K (regenerates the configuration files on all nodes)
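A sketch of step 3: cluster-info is a ConfigMap in kube-public that embeds a kubeconfig, and its server field is the API endpoint handed to joining nodes (the IP below is a placeholder):

```bash
kubectl -n kube-public edit cm cluster-info
# in the embedded kubeconfig, point "server:" at the new control plane, e.g.:
#   server: https://10.0.0.5:6443
```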

## How to delete the entire cluster

1. ansible-playbook -i inventory/mycluster/hosts.yaml -b --become-user=root reset.yml -K
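reset.yml pauses for an interactive confirmation; it can be pre-answered for non-interactive runs (variable name as used by Kubespray's reset playbook, worth verifying against your release):

```bash
# wipes kubernetes state from every node in the inventory
ansible-playbook -i inventory/mycluster/hosts.yaml -b --become-user=root \
  reset.yml -e reset_confirmation=yes -K
```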
