Lab environment
- 3 CentOS 7.4 virtual machines (2 vCPU, 4 GB memory, 100 GB disk each)
- 1 master & 2 worker configuration
Installation
Common steps on all nodes
$ sudo yum update -y
- Set SELinux to permissive
- Open /etc/sysconfig/selinux and change SELINUX=permissive
- This prevents communication problems between Kubernetes components
$ vi /etc/sysconfig/selinux
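If you would rather not edit the file by hand, the same change can be made non-interactively; a minimal sketch, assuming the file still contains the default SELINUX=enforcing line:
# switch to permissive for the current boot
$ sudo setenforce 0
# make the change persistent across reboots
$ sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/sysconfig/selinux
# verify
$ getenforce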
- iptables settings
- Create /etc/sysctl.d/k8s.conf with the bridge netfilter settings below and apply them
- Note) If the sysctl command cannot be found, open .bash_profile and append :/sbin/ to PATH
$ sudo bash -c 'cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF'
$ sudo sysctl --system
$ sudo lsmod | grep br_netfilter
# Note: if the sysctl command cannot be found, add /sbin/ to PATH
# PATH=$PATH:/sbin/
$ vi .bash_profile
$ source .bash_profile
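If the lsmod check above prints nothing, the br_netfilter module is not loaded yet. A minimal sketch to load it now and keep it loaded across reboots (the modules-load.d file name is an arbitrary choice):
$ sudo modprobe br_netfilter
$ sudo bash -c 'echo br_netfilter > /etc/modules-load.d/br_netfilter.conf'
$ sudo lsmod | grep br_netfilter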
# Disable swap (the kubelet requires swap to be off); blkid shows the swap device
$ sudo blkid
$ sudo swapoff /dev/mapper/centos-swap
$ sudo swapoff -a
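Note that swapoff only disables swap for the current boot. To keep it off after a reboot, comment out the swap entry in /etc/fstab; a minimal sketch, assuming the default centos-swap entry:
$ sudo sed -i '/centos-swap/ s/^/#/' /etc/fstab
$ grep swap /etc/fstab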
# Install Docker
$ sudo yum install -y docker
# Enable and start Docker
$ sudo systemctl enable docker && sudo systemctl start docker
# Check the Docker version
$ sudo docker version
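As an optional sanity check that the Docker daemon can actually run containers (this pulls the public hello-world image, so it needs outbound internet access):
$ sudo docker run --rm hello-world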
# Configure a yum repository to get the latest Kubernetes packages
# Write the repo definition file for the Kubernetes yum repository
$ sudo bash -c 'cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF'
# Install the latest versions (kubelet, kubeadm, kubectl)
$ sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Enable and start the kubelet service
$ sudo systemctl enable kubelet && sudo systemctl start kubelet
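To confirm the packages were installed and check which versions you got:
$ kubeadm version
$ kubectl version --client
$ kubelet --version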
Master node steps
- Firewall settings
- Open ports 6443 (API server) and 10250 (kubelet) in the firewall
$ sudo firewall-cmd --permanent --add-port=6443/tcp && sudo firewall-cmd --permanent --add-port=10250/tcp && sudo firewall-cmd --reload
>> If this fails, check the firewalld service status; if it is not running, start (or restart) it
$ systemctl status firewalld
$ systemctl start(or restart) firewalld
$ systemctl enable firewalld
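As a quick check that the rules took effect, firewalld can list the open ports in the active zone (6443/tcp and 10250/tcp should appear in the output):
$ sudo firewall-cmd --list-ports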
- Pull the kubeadm images, then run kubeadm init
- If any of the steps above were skipped, init will not complete properly. In that case, fix the issue, run sudo kubeadm reset, and then init again.
- (Important) Be sure to save the join command starting with kubeadm join ~ that is printed at the end of the output (if you lose it, see the sketch after the init output below)
$ sudo kubeadm config images pull
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.1:6443 --token kvdhws.yt91a4da9ec3756h \
--discovery-token-ca-cert-hash sha256:ddf9f69fd9ad837ad3cb6db297334a7ef5c3cb0be1d4bfa53773faea1ad5cba3
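If the join command was not saved, or the token has expired (the default token TTL is 24 hours), a new join command can be printed on the master at any time:
$ sudo kubeadm token create --print-join-command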
- To access the Kubernetes cluster locally, add the kubeconfig for the OS user you will use (run the three commands shown in the output above)
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
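At this point kubectl should be able to reach the API server as that user; a quick check:
$ kubectl cluster-info
$ kubectl get nodes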
Worker node steps
- Join the cluster with kubeadm join
- Copy and paste the kubeadm join command printed during the master node init and run it
$ sudo kubeadm join 10.0.0.1:6443 --token kvdhws.yt91a4da9ec3756h --discovery-token-ca-cert-hash sha256:ddf9f69fd9ad837ad3cb6db297334a7ef5c3cb0be1d4bfa53773faea1ad5cba3
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Verify that the node joined correctly
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
dev-psk-k8s001-ncl NotReady control-plane,master 6m33s v1.23.1
dev-psk-k8s002-ncl NotReady <none> 73s v1.23.1
$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-64897985d-m6dts 0/1 Pending 0 8m <none> <none> <none> <none>
kube-system coredns-64897985d-r2rdj 0/1 Pending 0 8m <none> <none> <none> <none>
kube-system etcd-dev-psk-k8s001-ncl 1/1 Running 0 8m13s 10.105.197.133 dev-psk-k8s001-ncl <none> <none>
kube-system kube-apiserver-dev-psk-k8s001-ncl 1/1 Running 0 8m13s 10.105.197.133 dev-psk-k8s001-ncl <none> <none>
kube-system kube-controller-manager-dev-psk-k8s001-ncl 1/1 Running 0 8m12s 10.105.197.133 dev-psk-k8s001-ncl <none> <none>
kube-system kube-proxy-6v64c 1/1 Running 0 2m56s 10.168.233.186 dev-psk-k8s002-ncl <none> <none>
kube-system kube-proxy-7p7st 1/1 Running 0 8m 10.105.197.133 dev-psk-k8s001-ncl <none> <none>
kube-system kube-scheduler-dev-psk-k8s001-ncl 1/1 Running 0 8m13s 10.105.197.133 dev-psk-k8s001-ncl <none> <none>
- When the node status is NotReady and the coredns-* pods are stuck in Pending
# Resolved by following https://nirsa.tistory.com/292
# Run the command below and comment out the loop directive (around line 24 of the Corefile)
$ kubectl edit cm coredns -n kube-system
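For reference, after the edit the end of the Corefile block might look like this (a sketch assuming the default kubeadm CoreDNS config; only the loop line is changed):
    cache 30
    #loop
    reload
    loadbalance
}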
# Apply a pod network (flannel) to the cluster
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
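After editing the configmap, the coredns pods need to be recreated to pick up the change; one way is to delete them and let the Deployment recreate them (k8s-app=kube-dns is the default label on the CoreDNS pods):
$ kubectl -n kube-system delete pod -l k8s-app=kube-dns
$ kubectl -n kube-system get pods -l k8s-app=kube-dns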
# The nodes now change to the Ready state as shown below
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
dev-k8s001-ncl Ready control-plane,master 19m v1.23.1
dev-k8s002-ncl Ready <none> 13m v1.23.1
# The coredns-* pods are also Running
$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-64897985d-m6dts 1/1 Running 0 19m 10.244.0.2 dev-psk-k8s001-ncl <none> <none>
kube-system coredns-64897985d-r2rdj 1/1 Running 0 19m 10.244.0.3 dev-psk-k8s001-ncl <none> <none>
kube-system etcd-dev-psk-k8s001-ncl 1/1 Running 0 19m 10.105.197.133 dev-psk-k8s001-ncl <none> <none>
kube-system kube-apiserver-dev-psk-k8s001-ncl 1/1 Running 0 19m 10.105.197.133 dev-psk-k8s001-ncl <none> <none>
kube-system kube-controller-manager-dev-psk-k8s001-ncl 1/1 Running 0 19m 10.105.197.133 dev-psk-k8s001-ncl <none> <none>
kube-system kube-flannel-ds-dzvj4 1/1 Running 0 29s 10.105.197.133 dev-psk-k8s001-ncl <none> <none>
kube-system kube-flannel-ds-qqvv5 1/1 Running 0 29s 10.168.233.186 dev-psk-k8s002-ncl <none> <none>
kube-system kube-proxy-6v64c 1/1 Running 0 14m 10.168.233.186 dev-psk-k8s002-ncl <none> <none>
kube-system kube-proxy-7p7st 1/1 Running 0 19m 10.105.197.133 dev-psk-k8s001-ncl <none> <none>
kube-system kube-scheduler-dev-psk-k8s001-ncl 1/1 Running 0 19m 10.105.197.133 dev-psk-k8s001-ncl <none> <none>
Join the remaining worker node in the same way, then verify from the master node
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
dev-k8s001-ncl Ready control-plane,master 21m v1.23.1
dev-k8s002-ncl Ready <none> 16m v1.23.1
dev-k8s003-ncl Ready <none> 34s v1.23.1
$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-64897985d-m6dts 1/1 Running 0 21m 10.244.0.2 dev-psk-k8s001-ncl <none> <none>
kube-system coredns-64897985d-r2rdj 1/1 Running 0 21m 10.244.0.3 dev-psk-k8s001-ncl <none> <none>
kube-system etcd-dev-psk-k8s001-ncl 1/1 Running 0 21m 10.105.197.133 dev-psk-k8s001-ncl <none> <none>
kube-system kube-apiserver-dev-psk-k8s001-ncl 1/1 Running 0 21m 10.105.197.133 dev-psk-k8s001-ncl <none> <none>
kube-system kube-controller-manager-dev-psk-k8s001-ncl 1/1 Running 0 21m 10.105.197.133 dev-psk-k8s001-ncl <none> <none>
kube-system kube-flannel-ds-dzvj4 1/1 Running 0 2m59s 10.105.197.133 dev-psk-k8s001-ncl <none> <none>
kube-system kube-flannel-ds-qqvv5 1/1 Running 0 2m59s 10.168.233.186 dev-psk-k8s002-ncl <none> <none>
kube-system kube-flannel-ds-t9ds4 1/1 Running 0 48s 10.106.218.64 dev-psk-k8s003-ncl <none> <none>
kube-system kube-proxy-6v64c 1/1 Running 0 16m 10.168.233.186 dev-psk-k8s002-ncl <none> <none>
kube-system kube-proxy-7p7st 1/1 Running 0 21m 10.105.197.133 dev-psk-k8s001-ncl <none> <none>
kube-system kube-proxy-cr9zd 1/1 Running 0 48s 10.106.218.64 dev-psk-k8s003-ncl <none> <none>
kube-system kube-scheduler-dev-psk-k8s001-ncl 1/1 Running 0 21m 10.105.197.133 dev-psk-k8s001-ncl <none> <none>