Creating a kubekey artifact file

cloud2000 · December 30, 2023

Creating the artifact file

  • Download the kubekey binary, then generate an artifact file for installing in an offline (air-gapped) environment.
  • For the Kubernetes-related images, refer to the images-list.txt file, which is included only with the major releases at https://github.com/kubesphere/ks-installer/releases.
  • For the KubeSphere-related images, refer to the same images-list.txt file at https://github.com/kubesphere/ks-installer/releases.
  • Each kubekey version has its own latest supported Kubernetes and KubeSphere versions, so be sure to check kubekey/version/components.json, kubekey/cmd/kk/pkg/version/kubesphere/version_enum.go, and kubekey/pkg/version/kubernetes/version_enum.go (see the sketch after this list).
  • The default versions are defined in kubekey/cmd/kk/apis/kubekey/v1alpha2/default.go.
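
A minimal way to check which versions a given kubekey tag supports is to inspect those files directly in the source. This is only a sketch; the paths are the ones listed above and the tag (v3.0.13) is the one used in this post, so adjust both if your layout or version differs.

// clone the kubekey source at the tag you plan to use and inspect the version files
$ git clone --depth 1 --branch v3.0.13 https://github.com/kubesphere/kubekey.git
$ grep -n 'v1\.' kubekey/pkg/version/kubernetes/version_enum.go | head
$ grep -n 'v3\.' kubekey/cmd/kk/pkg/version/kubesphere/version_enum.go | head
$ head -40 kubekey/version/components.json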

Reference)

// kubekey download
$ curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
$ sudo cp kk /usr/local/bin
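
// (optional) confirm the kk binary works and check its version
$ kk version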

// Create the artifact manifest (definition) file
// https://kubesphere.io/docs/v3.4/installing-on-linux/introduction/air-gapped-installation/
$ cat > artifact-3.0.13.yaml <<EOF
---
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: artifact-v3.0.13
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: ubuntu
    version: "20.04"
    osImage: Ubuntu 20.04.5 LTS
    repository:
      iso:
        localPath: ""
        #url: "https://github.com/kubesphere/kubekey/releases/download/v2.0.0/ubuntu-20.04-amd64-debs.iso"
        url: "https://github.com/kubesphere/kubekey/releases/download/v3.0.7/ubuntu-20.04-debs-amd64.iso"
  kubernetesDistributions:
  - type: kubernetes
    version: v1.24.9
  components:
    helm:
      #version: v3.6.3
      version: v3.9.0
    cni:
      version: v0.9.1
    etcd:
      version: v3.4.13
    calicoctl:
      version: v3.23.2
    ## For now, if your cluster container runtime is containerd, KubeKey will add a docker 20.10.8 container runtime in the below list.
    ## The reason is KubeKey creates a cluster with containerd by installing a docker first and making kubelet connect the socket file of containerd which docker contained.
    containerRuntimes:
    - type: docker
      version: 20.10.8
    crictl:
      #version: v1.22.0
      version: v1.24.0
    ##
    # docker-registry:
    #   version: "2"
    harbor:
      #version: v2.4.1
      version: v2.5.3
    docker-compose:
      version: v2.2.2
  images:
# https://github.com/kubesphere/kubekey/blob/v3.0.13/version/components.json
# https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/images-list.txt
#  - docker.io/kubesphere/kube-apiserver:v1.24.9
#  - docker.io/kubesphere/kube-apiserver:v1.25.10
#  - docker.io/kubesphere/kube-apiserver:v1.26.5
#  - docker.io/kubesphere/kube-apiserver:v1.27.2
#  - docker.io/kubesphere/kube-controller-manager:v1.24.9
#  - docker.io/kubesphere/kube-controller-manager:v1.25.10
#  - docker.io/kubesphere/kube-controller-manager:v1.26.5
#  - docker.io/kubesphere/kube-controller-manager:v1.27.2
#  - docker.io/kubesphere/kube-scheduler:v1.24.9
#  - docker.io/kubesphere/kube-scheduler:v1.25.10
#  - docker.io/kubesphere/kube-scheduler:v1.26.5
#  - docker.io/kubesphere/kube-scheduler:v1.27.2
#  - docker.io/kubesphere/kube-proxy:v1.24.9
#  - docker.io/kubesphere/kube-proxy:v1.25.10
#  - docker.io/kubesphere/kube-proxy:v1.26.5
#  - docker.io/kubesphere/kube-proxy:v1.27.2
#  - docker.io/kubesphere/pause:3.8
#  - docker.io/kubesphere/pause:3.7
#  - docker.io/kubesphere/pause:3.6
#  - docker.io/kubesphere/pause:3.5
#  - docker.io/kubesphere/pause:3.4.1
#  - docker.io/coredns/coredns:1.8.0
#  - docker.io/coredns/coredns:1.8.6
#  - docker.io/calico/cni:v3.23.2
#  - docker.io/calico/kube-controllers:v3.23.2
#  - docker.io/calico/node:v3.23.2
#  - docker.io/calico/pod2daemon-flexvol:v3.23.2
#  - docker.io/calico/typha:v3.23.2
#  - docker.io/kubesphere/flannel:v0.12.0
#  - docker.io/openebs/provisioner-localpv:3.3.0
#  - docker.io/openebs/linux-utils:3.3.0
#  - docker.io/library/haproxy:2.3
#  - docker.io/kubesphere/nfs-subdir-external-provisioner:v4.0.2
#  - docker.io/kubesphere/k8s-dns-node-cache:1.15.12
## https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/images-list.txt
###kubesphere-images
#  - docker.io/kubesphere/ks-installer:v3.4.1
#  - docker.io/kubesphere/ks-apiserver:v3.4.1
#  - docker.io/kubesphere/ks-console:v3.4.1
#  - docker.io/kubesphere/ks-controller-manager:v3.4.1
#  - docker.io/kubesphere/kubectl:v1.20.0
#  - docker.io/kubesphere/kubefed:v0.8.1
#  - docker.io/kubesphere/tower:v0.2.1
#  - docker.io/minio/minio:RELEASE.2019-08-07T01-59-21Z
#  - docker.io/minio/mc:RELEASE.2019-08-07T23-14-43Z
#  - docker.io/csiplugin/snapshot-controller:v4.0.0
#  - docker.io/kubesphere/nginx-ingress-controller:v1.3.1
#  - docker.io/mirrorgooglecontainers/defaultbackend-amd64:1.4
#  - docker.io/kubesphere/metrics-server:v0.4.2
#  - docker.io/library/redis:5.0.14-alpine
#  - docker.io/library/haproxy:2.0.25-alpine
#  - docker.io/library/alpine:3.14
#  - docker.io/osixia/openldap:1.3.0
#  - docker.io/kubesphere/netshoot:v1.0
###kubeedge-images
#  - docker.io/kubeedge/cloudcore:v1.13.0
#  - docker.io/kubesphere/iptables-manager:v1.13.0
#  - docker.io/kubesphere/edgeservice:v0.3.0
###gatekeeper-images
#  - docker.io/openpolicyagent/gatekeeper:v3.5.2
###openpitrix-images
#  - docker.io/kubesphere/openpitrix-jobs:v3.3.2
###kubesphere-devops-images
#  - docker.io/kubesphere/devops-apiserver:ks-v3.4.1
#  - docker.io/kubesphere/devops-controller:ks-v3.4.1
#  - docker.io/kubesphere/devops-tools:ks-v3.4.1
#  - docker.io/kubesphere/ks-jenkins:v3.4.0-2.319.3-1
#  - docker.io/jenkins/inbound-agent:4.10-2
#  - docker.io/kubesphere/builder-base:v3.2.2
#  - docker.io/kubesphere/builder-nodejs:v3.2.0
#  - docker.io/kubesphere/builder-maven:v3.2.0
#  - docker.io/kubesphere/builder-maven:v3.2.1-jdk11
#  - docker.io/kubesphere/builder-python:v3.2.0
#  - docker.io/kubesphere/builder-go:v3.2.0
#  - docker.io/kubesphere/builder-go:v3.2.2-1.16
#  - docker.io/kubesphere/builder-go:v3.2.2-1.17
#  - docker.io/kubesphere/builder-go:v3.2.2-1.18
#  - docker.io/kubesphere/builder-base:v3.2.2-podman
#  - docker.io/kubesphere/builder-nodejs:v3.2.0-podman
#  - docker.io/kubesphere/builder-maven:v3.2.0-podman
#  - docker.io/kubesphere/builder-maven:v3.2.1-jdk11-podman
#  - docker.io/kubesphere/builder-python:v3.2.0-podman
#  - docker.io/kubesphere/builder-go:v3.2.0-podman
#  - docker.io/kubesphere/builder-go:v3.2.2-1.16-podman
#  - docker.io/kubesphere/builder-go:v3.2.2-1.17-podman
#  - docker.io/kubesphere/builder-go:v3.2.2-1.18-podman
#  - docker.io/kubesphere/s2ioperator:v3.2.1
#  - docker.io/kubesphere/s2irun:v3.2.0
#  - docker.io/kubesphere/s2i-binary:v3.2.0
#  - docker.io/kubesphere/tomcat85-java11-centos7:v3.2.0
#  - docker.io/kubesphere/tomcat85-java11-runtime:v3.2.0
#  - docker.io/kubesphere/tomcat85-java8-centos7:v3.2.0
#  - docker.io/kubesphere/tomcat85-java8-runtime:v3.2.0
#  - docker.io/kubesphere/java-11-centos7:v3.2.0
#  - docker.io/kubesphere/java-8-centos7:v3.2.0
#  - docker.io/kubesphere/java-8-runtime:v3.2.0
#  - docker.io/kubesphere/java-11-runtime:v3.2.0
#  - docker.io/kubesphere/nodejs-8-centos7:v3.2.0
#  - docker.io/kubesphere/nodejs-6-centos7:v3.2.0
#  - docker.io/kubesphere/nodejs-4-centos7:v3.2.0
#  - docker.io/kubesphere/python-36-centos7:v3.2.0
#  - docker.io/kubesphere/python-35-centos7:v3.2.0
#  - docker.io/kubesphere/python-34-centos7:v3.2.0
#  - docker.io/kubesphere/python-27-centos7:v3.2.0
#  - quay.io/argoproj/argocd:v2.3.3
#  - quay.io/argoproj/argocd-applicationset:v0.4.1
#  - ghcr.io/dexidp/dex:v2.30.2
#  - docker.io/library/redis:6.2.6-alpine
###kubesphere-monitoring-images
#  - docker.io/jimmidyson/configmap-reload:v0.7.1
#  - docker.io/prom/prometheus:v2.39.1
#  - docker.io/kubesphere/prometheus-config-reloader:v0.55.1
#  - docker.io/kubesphere/prometheus-operator:v0.55.1
#  - docker.io/kubesphere/kube-rbac-proxy:v0.11.0
#  - docker.io/kubesphere/kube-state-metrics:v2.6.0
#  - docker.io/prom/node-exporter:v1.3.1
#  - docker.io/prom/alertmanager:v0.23.0
#  - docker.io/thanosio/thanos:v0.31.0
#  - docker.io/grafana/grafana:8.3.3
#  - docker.io/kubesphere/kube-rbac-proxy:v0.11.0
#  - docker.io/kubesphere/notification-manager-operator:v2.3.0
#  - docker.io/kubesphere/notification-manager:v2.3.0
#  - docker.io/kubesphere/notification-tenant-sidecar:v3.2.0
###kubesphere-logging-images
#  - docker.io/kubesphere/elasticsearch-curator:v5.7.6
#  - docker.io/kubesphere/opensearch-curator:v0.0.5
#  - docker.io/kubesphere/elasticsearch-oss:6.8.22
#  - docker.io/opensearchproject/opensearch:2.6.0
#  - docker.io/opensearchproject/opensearch-dashboards:2.6.0
#  - docker.io/kubesphere/fluentbit-operator:v0.14.0
#  - docker.io/library/docker:19.03
#  - docker.io/kubesphere/fluent-bit:v1.9.4
#  - docker.io/kubesphere/log-sidecar-injector:v1.2.0
#  - docker.io/elastic/filebeat:6.7.0
#  - docker.io/kubesphere/kube-events-operator:v0.6.0
#  - docker.io/kubesphere/kube-events-exporter:v0.6.0
#  - docker.io/kubesphere/kube-events-ruler:v0.6.0
#  - docker.io/kubesphere/kube-auditing-operator:v0.2.0
#  - docker.io/kubesphere/kube-auditing-webhook:v0.2.0
###istio-images
#  - docker.io/istio/pilot:1.14.6
#  - docker.io/istio/proxyv2:1.14.6
#  - docker.io/jaegertracing/jaeger-operator:1.29
#  - docker.io/jaegertracing/jaeger-agent:1.29
#  - docker.io/jaegertracing/jaeger-collector:1.29
#  - docker.io/jaegertracing/jaeger-query:1.29
#  - docker.io/jaegertracing/jaeger-es-index-cleaner:1.29
#  - docker.io/kubesphere/kiali-operator:v1.50.1
#  - docker.io/kubesphere/kiali:v1.50
###example-images
#  - docker.io/library/busybox:1.31.1
#  - docker.io/library/nginx:1.14-alpine
#  - docker.io/joosthofman/wget:1.0
#  - docker.io/nginxdemos/hello:plain-text
#  - docker.io/library/wordpress:4.8-apache
#  - docker.io/mirrorgooglecontainers/hpa-example:latest
#  - docker.io/fluent/fluentd:v1.4.2-2.0
#  - docker.io/library/perl:latest
#  - docker.io/kubesphere/examples-bookinfo-productpage-v1:1.16.2
#  - docker.io/kubesphere/examples-bookinfo-reviews-v1:1.16.2
#  - docker.io/kubesphere/examples-bookinfo-reviews-v2:1.16.2
#  - docker.io/kubesphere/examples-bookinfo-details-v1:1.16.2
#  - docker.io/kubesphere/examples-bookinfo-ratings-v1:1.16.3
##weave-scope-images
  - docker.io/weaveworks/scope:1.13.0
  registry:
    auths:
      "docker.io":
        username: "docker id"
        password: "docker passwd"
EOF

// Generate the artifact archive
$ kk artifact export -m artifact-3.0.13.yaml -o artifact.tar.gz
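// (optional) sanity-check the generated archive before moving it into the offline environment
$ ls -lh artifact.tar.gz
$ tar -tzf artifact.tar.gz | head
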
  • Installing a cluster from the artifact file
    Create a cluster configuration file (ut-cluster.yaml), then install it with the commands below.
// Install the registry first
$ kk init registry -f ut-cluster.yaml -a artifact.tar.gz 

// Install the cluster
$ kk create cluster -f ut-cluster.yaml -a artifact.tar.gz
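
// (optional) after installation completes, confirm that all nodes joined and are Ready
$ kubectl get nodes -o wide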

Troubleshooting

  • If you paste the contents of images-list.txt into the images section as-is and run kubekey artifact export, the error below occurs, so prefix each image name with docker.io (see the filter sketched after the error output).
$ ./kk-v3.0.13 artifact export -m artifact-v3.0.13.yaml -o artifact-v3.0.13.tar.gz

// Adding --skip-push-images skips the step that pushes images to Harbor.
$ ./kk-v3.0.13 create cluster --skip-push-images -f mgmt-cluster.yaml -a artifact-v3.0.13.tar.gz


 _   __      _          _   __
| | / /     | |        | | / /
| |/ / _   _| |__   ___| |/ /  ___ _   _
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

10:58:43 KST [CheckFileExist] Check output file if existed
10:58:43 KST success: [LocalHost]
10:58:43 KST [CopyImagesToLocalModule] Copy images to a local OCI path from registries
10:58:43 KST message: [LocalHost]
image kubesphere/kube-apiserver:v1.24.9 is invalid, image PATH need contain at least two slash-separated
10:58:43 KST failed: [LocalHost]
error: Pipeline[ArtifactExportPipeline] execute failed: Module[CopyImagesToLocalModule] exec failed:
failed: [LocalHost] [SaveImages] exec failed after 1 retries: image kubesphere/kube-apiserver:v1.24.9 is invalid, image PATH need contain at least two slash-separated
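
One way to add the missing prefix in bulk is a small filter like the one below. This is only a sketch: it assumes the plain images-list.txt layout (comment headers starting with #, Docker Hub images without a registry host, and bare names such as busybox:1.31.1 needing the library/ namespace), and it leaves entries that already carry a host such as quay.io or ghcr.io untouched.

// print each image as a YAML list item, adding docker.io/ only where no registry host is present
$ grep -vE '^[[:space:]]*#|^[[:space:]]*$' images-list.txt \
    | awk -F/ '{ if (NF == 1) print "  - docker.io/library/" $0; else if ($1 ~ /\./) print "  - " $0; else print "  - docker.io/" $0 }'
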
  • If the components.calicoctl entry is not set, an error occurs.

  • If the configured Kubernetes version is not supported, an exception is raised and the run aborts (see the version check sketched after the log below).

18:15:04 KST message: [LocalHost]
Failed to download kubeadm binary: curl -L -o /data/kubesphere/v3.0.13/kubekey/kube/v1.27.3/amd64/kubeadm https://storage.googleapis.com/kubernetes-release/release/v1.27.3/bin/linux/amd64/kubeadm error: No SHA256 found for kubeadm. v1.27.3 is not supported.
18:15:04 KST failed: [LocalHost]
error: Pipeline[CreateClusterPipeline] execute failed: Module[NodeBinariesModule] exec failed:
failed: [LocalHost] [DownloadBinaries] exec failed after 1 retries: Failed to download kubeadm binary: curl -L -o /data/kubesphere/v3.0.13/kubekey/kube/v1.27.3/amd64/kubeadm https://storage.googleapis.com/kubernetes-release/release/v1.27.3/bin/linux/amd64/kubeadm error: No SHA256 found for kubeadm. v1.27.3 is not supported.
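
Before picking a Kubernetes version, it may help to ask the kk binary itself what it supports. Recent kk releases expose this through a version flag (if your build does not have it, fall back to the source files listed at the top of this post):

// list the Kubernetes versions this kk binary can install
$ ./kk-v3.0.13 version --show-supported-k8s
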
  • If a node does not trust Harbor's private certificate, an x509 certificate error like the one below occurs.
E1231 08:59:00.492302    1615 remote_image.go:238] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/pause:3.8\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/pause:3.8\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/pause/manifests/3.8\": x509: certificate signed by unknown authority" image="dockerhub.kubekey.local/kubesphereio/pause:3.8"

This happens because the nodes that make up the cluster do not have Harbor's private CA certificate. To fix it, copy the Harbor CA file to the directory below on each node and run update-ca-certificates so that the CA is trusted system-wide.

// Copy the Harbor root CA file to every node in the cluster
$ scp -i /home/vagrant/infra/id_rsa /usr/local/share/ca-certificates/harbor-ca.crt root@192.168.0.61:/usr/local/share/ca-certificates/harbor-ca.crt
$ scp -i /home/vagrant/infra/id_rsa /usr/local/share/ca-certificates/harbor-ca.crt root@192.168.0.62:/usr/local/share/ca-certificates/harbor-ca.crt

// Apply the added certificate to the system trust store
$ sudo update-ca-certificates

// Verify that the certificate has been applied
$ ls -lrt /etc/ssl/certs
lrwxrwxrwx 1 root root     46 Dec 31 09:25  harbor-ca.pem -> /usr/local/share/ca-certificates/harbor-ca.crt
-rw-r--r-- 1 root root 209670 Dec 31 09:25  ca-certificates.crt

// If containerd was already running on the node, it must be restarted so that it picks up the newly trusted certificate
$ systemctl restart containerd
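
// (optional) verify the registry is now trusted: the TLS handshake should succeed without curl -k
// (an HTTP 401 from /v2/ only means authentication is required), and crictl should now get past
// the x509 error when pulling through containerd (an auth error here is a separate issue);
// the registry host and image path below are the ones used elsewhere in this post
$ curl -I https://dockerhub.kubekey.local/v2/
$ crictl pull dockerhub.kubekey.local/kubesphereio/pause:3.8
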
  • When I set the Kubernetes version to v1.26.5 during cluster installation, kubekey looked for the Calico images at v3.26.1 instead of the v3.23.2 in the manifest, so I had to download them separately and upload them to Harbor (a pull/tag/push sketch for one image follows the tag commands below).
docker tag docker.io/calico/cni:v3.26.1 dockerhub.kubekey.local/kubesphereio/cni:v3.26.1
docker tag docker.io/calico/kube-controllers:v3.26.1 dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.26.1
docker tag docker.io/calico/node:v3.26.1 dockerhub.kubekey.local/kubesphereio/node:v3.26.1
docker tag docker.io/calico/pod2daemon-flexvol:v3.26.1 dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.26.1
docker tag docker.io/calico/typha:v3.26.1 dockerhub.kubekey.local/kubesphereio/typha:v3.26.1
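
The tag commands above assume the images are already present locally and already pushed. For one image, the full sequence looks roughly like this (a sketch; it assumes the kubesphereio Harbor project used in this post and that you are logged in via docker login dockerhub.kubekey.local):

docker pull docker.io/calico/cni:v3.26.1
docker tag docker.io/calico/cni:v3.26.1 dockerhub.kubekey.local/kubesphereio/cni:v3.26.1
docker push dockerhub.kubekey.local/kubesphereio/cni:v3.26.1
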
  • On Kubernetes v1.25 and later, setting the TTLAfterFinished feature gate causes an error, because the gate was removed after graduating to GA (a config sketch follows the log below).
Dec 31 09:58:39 node-61 kubelet[6673]: W1231 09:58:39.577522    6673 feature_gate.go:241] Setting GA feature gate CSIStorageCapacity=true. It will be removed in a future release.
Dec 31 09:58:39 node-61 kubelet[6673]: W1231 09:58:39.577557    6673 feature_gate.go:241] Setting GA feature gate ExpandCSIVolumes=true. It will be removed in a future release.
Dec 31 09:58:39 node-61 kubelet[6673]: E1231 09:58:39.577613    6673 run.go:74] "command failed" err="failed to set feature gates from initial flags-based config: unrecognized feature gate: TTLAfterFinished"
Dec 31 09:58:39 node-61 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 31 09:58:39 node-61 systemd[1]: kubelet.service: Failed with result 'exit-code'.
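
In the kubekey cluster configuration this gate typically comes from a featureGates block under the kubernetes section. A sketch of the fix, assuming the gates implied by the kubelet log above appear in ut-cluster.yaml, is simply to drop the removed gate:

kubernetes:
  version: v1.26.5
  featureGates:
    CSIStorageCapacity: true
    ExpandCSIVolumes: true
    # TTLAfterFinished: true   # removed in Kubernetes v1.25+, so leave it out
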
  • An etcd cluster/data directory error occurred (a possible remediation is sketched after the log).
Dec 31 10:02:50 node-61 etcd[8275]: check file permission: directory "/var/lib/etcd" exist, but the permission is "drwxr-xr-x". The recommended permission is "-rwx------" to prevent possible unprivileged access to the data.
Dec 31 10:02:50 node-61 etcd[8275]: cannot fetch cluster info from peer urls: could not retrieve cluster information from the given URLs
Dec 31 10:02:50 node-61 systemd[1]: etcd.service: Main process exited, code=exited, status=1/FAILURE
Dec 31 10:02:50 node-61 systemd[1]: etcd.service: Failed with result 'exit-code'.
Dec 31 10:02:50 node-61 systemd[1]: Failed to start etcd.
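
The permission message is only a warning; the fatal part is that the member cannot fetch cluster information from its peers. One possible remediation, assuming the failure comes from a stale data directory left over from a previous install attempt, is to apply the recommended permissions and clear the old member data before re-running kubekey:

// apply the permission recommended in the warning
$ sudo chmod 700 /var/lib/etcd
// only if this node is being reinstalled and the old etcd data is disposable
$ sudo systemctl stop etcd && sudo rm -rf /var/lib/etcd/*
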
  • Normal kubekey log (successful run)


 _   __      _          _   __
| | / /     | |        | | / /
| |/ / _   _| |__   ___| |/ /  ___ _   _
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

10:12:57 KST [GreetingsModule] Greetings
10:12:58 KST message: [node-52]
Greetings, KubeKey!
10:12:59 KST message: [node-62]
Greetings, KubeKey!
10:13:00 KST message: [node-61]
Greetings, KubeKey!
10:13:00 KST success: [node-52]
10:13:00 KST success: [node-62]
10:13:00 KST success: [node-61]
10:13:00 KST [NodePreCheckModule] A pre-check on nodes
10:13:01 KST success: [node-61]
10:13:01 KST success: [node-62]
10:13:01 KST success: [node-52]
10:13:01 KST [ConfirmModule] Display confirmation form
+---------+------+------+---------+----------+-------+-------+---------+-----------+--------+---------+------------+------------+-------------+------------------+--------------+
| name    | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker  | containerd | nfs client | ceph client | glusterfs client | time         |
+---------+------+------+---------+----------+-------+-------+---------+-----------+--------+---------+------------+------------+-------------+------------------+--------------+
| node-62 | y    | y    | y       | y        | y     |       |         | y         | y      |         |            |            |             |                  | KST 10:13:01 |
| node-61 | y    | y    | y       | y        | y     |       |         | y         | y      |         |            |            |             |                  | KST 10:13:01 |
| node-52 | y    | y    | y       | y        | y     | y     | y       | y         | y      | 20.10.8 | v1.4.9     | y          |             |                  | KST 10:13:01 |
+---------+------+------+---------+----------+-------+-------+---------+-----------+--------+---------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
10:13:04 KST success: [LocalHost]
10:13:04 KST [UnArchiveArtifactModule] Check the KubeKey artifact md5 value
10:13:48 KST success: [LocalHost]
10:13:48 KST [UnArchiveArtifactModule] UnArchive the KubeKey artifact
10:13:48 KST skipped: [LocalHost]
10:13:48 KST [UnArchiveArtifactModule] Create the KubeKey artifact Md5 file
10:13:48 KST skipped: [LocalHost]
10:13:48 KST [NodeBinariesModule] Download installation binaries
10:13:48 KST message: [localhost]
downloading amd64 kubeadm v1.26.5 ...
10:13:50 KST message: [localhost]
kubeadm is existed
10:13:50 KST message: [localhost]
downloading amd64 kubelet v1.26.5 ...
10:13:51 KST message: [localhost]
kubelet is existed
10:13:51 KST message: [localhost]
downloading amd64 kubectl v1.26.5 ...
10:13:52 KST message: [localhost]
kubectl is existed
10:13:52 KST message: [localhost]
downloading amd64 helm v3.9.0 ...
10:13:53 KST message: [localhost]
helm is existed
10:13:53 KST message: [localhost]
downloading amd64 kubecni v1.2.0 ...
10:13:53 KST message: [localhost]
kubecni is existed
10:13:53 KST message: [localhost]
downloading amd64 crictl v1.24.0 ...
10:13:53 KST message: [localhost]
crictl is existed
10:13:53 KST message: [localhost]
downloading amd64 etcd v3.4.13 ...
10:13:54 KST message: [localhost]
etcd is existed
10:13:54 KST message: [localhost]
downloading amd64 containerd 1.6.4 ...
10:13:54 KST message: [localhost]
containerd is existed
10:13:54 KST message: [localhost]
downloading amd64 runc v1.1.1 ...
10:13:54 KST message: [localhost]
runc is existed
10:13:54 KST message: [localhost]
downloading amd64 calicoctl v3.26.1 ...
10:13:56 KST message: [localhost]
calicoctl is existed
10:13:56 KST success: [LocalHost]
10:13:56 KST [ConfigureOSModule] Get OS release
10:13:56 KST success: [node-61]
10:13:56 KST success: [node-62]
10:13:56 KST success: [node-52]
10:13:56 KST [ConfigureOSModule] Prepare to init OS
10:13:57 KST success: [node-62]
10:13:57 KST success: [node-61]
10:13:57 KST success: [node-52]
10:13:57 KST [ConfigureOSModule] Generate init os script
10:13:57 KST success: [node-61]
10:13:57 KST success: [node-62]
10:13:57 KST success: [node-52]
10:13:57 KST [ConfigureOSModule] Exec init os script
10:13:59 KST stdout: [node-61]
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.all.disable_ipv6 = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
net.core.netdev_max_backlog = 65535
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 1048576
net.ipv4.neigh.default.gc_thresh1 = 512
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_max_tw_buckets = 1048576
net.ipv4.tcp_max_orphans = 65535
net.ipv4.udp_rmem_min = 131072
net.ipv4.udp_wmem_min = 131072
net.ipv4.conf.all.arp_accept = 1
net.ipv4.conf.default.arp_accept = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
vm.max_map_count = 262144
vm.swappiness = 0
vm.overcommit_memory = 0
fs.inotify.max_user_instances = 524288
fs.inotify.max_user_watches = 524288
fs.pipe-max-size = 4194304
fs.aio-max-nr = 262144
kernel.pid_max = 65535
kernel.watchdog_thresh = 5
kernel.hung_task_timeout_secs = 5
10:13:59 KST stdout: [node-62]
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.all.disable_ipv6 = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
net.core.netdev_max_backlog = 65535
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 1048576
net.ipv4.neigh.default.gc_thresh1 = 512
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_max_tw_buckets = 1048576
net.ipv4.tcp_max_orphans = 65535
net.ipv4.udp_rmem_min = 131072
net.ipv4.udp_wmem_min = 131072
net.ipv4.conf.all.arp_accept = 1
net.ipv4.conf.default.arp_accept = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
vm.max_map_count = 262144
vm.swappiness = 0
vm.overcommit_memory = 0
fs.inotify.max_user_instances = 524288
fs.inotify.max_user_watches = 524288
fs.pipe-max-size = 4194304
fs.aio-max-nr = 262144
kernel.pid_max = 65535
kernel.watchdog_thresh = 5
kernel.hung_task_timeout_secs = 5
10:14:00 KST stdout: [node-52]
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.all.disable_ipv6 = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 0
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
net.core.netdev_max_backlog = 65535
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 1048576
net.ipv4.neigh.default.gc_thresh1 = 512
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_max_tw_buckets = 1048576
net.ipv4.tcp_max_orphans = 65535
net.ipv4.udp_rmem_min = 131072
net.ipv4.udp_wmem_min = 131072
net.ipv4.conf.all.arp_accept = 1
net.ipv4.conf.default.arp_accept = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
vm.overcommit_memory = 0
fs.inotify.max_user_watches = 524288
fs.pipe-max-size = 4194304
fs.aio-max-nr = 262144
kernel.watchdog_thresh = 5
kernel.hung_task_timeout_secs = 5
10:14:00 KST success: [node-61]
10:14:00 KST success: [node-62]
10:14:00 KST success: [node-52]
10:14:00 KST [ConfigureOSModule] configure the ntp server for each node
10:14:00 KST skipped: [node-52]
10:14:00 KST skipped: [node-61]
10:14:00 KST skipped: [node-62]
10:14:00 KST [KubernetesStatusModule] Get kubernetes cluster status
10:14:00 KST success: [node-61]
10:14:00 KST [InstallContainerModule] Sync containerd binaries
10:14:04 KST success: [node-62]
10:14:04 KST success: [node-61]
10:14:04 KST [InstallContainerModule] Sync crictl binaries
10:14:06 KST success: [node-61]
10:14:06 KST success: [node-62]
10:14:06 KST [InstallContainerModule] Generate containerd service
10:14:06 KST success: [node-62]
10:14:06 KST success: [node-61]
10:14:06 KST [InstallContainerModule] Generate containerd config
10:14:06 KST success: [node-62]
10:14:06 KST success: [node-61]
10:14:06 KST [InstallContainerModule] Generate crictl config
10:14:06 KST success: [node-61]
10:14:06 KST success: [node-62]
10:14:06 KST [InstallContainerModule] Enable containerd
10:14:08 KST success: [node-61]
10:14:08 KST success: [node-62]
10:14:08 KST [PullModule] Start to pull images on all nodes
10:14:08 KST message: [node-61]
downloading image: dockerhub.kubekey.local/kubesphereio/pause:3.8
10:14:08 KST message: [node-62]
downloading image: dockerhub.kubekey.local/kubesphereio/pause:3.8
10:14:09 KST message: [node-61]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.26.5
10:14:09 KST message: [node-62]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.26.5
10:14:13 KST message: [node-62]
downloading image: dockerhub.kubekey.local/kubesphereio/coredns:1.9.3
10:14:15 KST message: [node-61]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.26.5
10:14:15 KST message: [node-62]
downloading image: dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.15.12
10:14:20 KST message: [node-61]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.5
10:14:22 KST message: [node-62]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.26.1
10:14:23 KST message: [node-61]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.26.5
10:14:26 KST message: [node-62]
downloading image: dockerhub.kubekey.local/kubesphereio/cni:v3.26.1
10:14:26 KST message: [node-61]
downloading image: dockerhub.kubekey.local/kubesphereio/coredns:1.9.3
10:14:30 KST message: [node-61]
downloading image: dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.15.12
10:14:36 KST message: [node-61]
downloading image: dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.26.1
10:14:37 KST message: [node-62]
downloading image: dockerhub.kubekey.local/kubesphereio/node:v3.26.1
10:14:40 KST message: [node-61]
downloading image: dockerhub.kubekey.local/kubesphereio/cni:v3.26.1
10:14:49 KST message: [node-62]
downloading image: dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.26.1
10:14:51 KST message: [node-62]
downloading image: dockerhub.kubekey.local/kubesphereio/haproxy:2.3
10:14:51 KST message: [node-61]
downloading image: dockerhub.kubekey.local/kubesphereio/node:v3.26.1
10:15:05 KST message: [node-61]
downloading image: dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.26.1
10:15:07 KST success: [node-52]
10:15:07 KST success: [node-62]
10:15:07 KST success: [node-61]
10:15:07 KST [ETCDPreCheckModule] Get etcd status
10:15:07 KST success: [node-61]
10:15:07 KST [CertsModule] Fetch etcd certs
10:15:07 KST success: [node-61]
10:15:07 KST [CertsModule] Generate etcd Certs
[certs] Using existing ca certificate authority
[certs] Using existing admin-node-61 certificate and key on disk
[certs] Using existing member-node-61 certificate and key on disk
[certs] Using existing node-node-61 certificate and key on disk
10:15:07 KST success: [LocalHost]
10:15:07 KST [CertsModule] Synchronize certs file
10:15:08 KST success: [node-61]
10:15:08 KST [CertsModule] Synchronize certs file to master
10:15:08 KST skipped: [node-61]
10:15:08 KST [InstallETCDBinaryModule] Install etcd using binary
10:15:10 KST success: [node-61]
10:15:10 KST [InstallETCDBinaryModule] Generate etcd service
10:15:10 KST success: [node-61]
10:15:10 KST [InstallETCDBinaryModule] Generate access address
10:15:10 KST success: [node-61]
10:15:10 KST [ETCDConfigureModule] Health check on exist etcd
10:15:10 KST skipped: [node-61]
10:15:10 KST [ETCDConfigureModule] Generate etcd.env config on new etcd
10:15:10 KST success: [node-61]
10:15:10 KST [ETCDConfigureModule] Refresh etcd.env config on all etcd
10:15:10 KST success: [node-61]
10:15:10 KST [ETCDConfigureModule] Restart etcd
10:15:12 KST success: [node-61]
10:15:12 KST [ETCDConfigureModule] Health check on all etcd
10:15:12 KST success: [node-61]
10:15:12 KST [ETCDConfigureModule] Refresh etcd.env config to exist mode on all etcd
10:15:12 KST success: [node-61]
10:15:12 KST [ETCDConfigureModule] Health check on all etcd
10:15:12 KST success: [node-61]
10:15:12 KST [ETCDBackupModule] Backup etcd data regularly
10:15:12 KST success: [node-61]
10:15:12 KST [ETCDBackupModule] Generate backup ETCD service
10:15:12 KST success: [node-61]
10:15:12 KST [ETCDBackupModule] Generate backup ETCD timer
10:15:13 KST success: [node-61]
10:15:13 KST [ETCDBackupModule] Enable backup etcd service
10:15:13 KST success: [node-61]
10:15:13 KST [InstallKubeBinariesModule] Synchronize kubernetes binaries
10:15:30 KST success: [node-61]
10:15:30 KST success: [node-62]
10:15:30 KST [InstallKubeBinariesModule] Change kubelet mode
10:15:30 KST success: [node-62]
10:15:30 KST success: [node-61]
10:15:30 KST [InstallKubeBinariesModule] Generate kubelet service
10:15:30 KST success: [node-62]
10:15:30 KST success: [node-61]
10:15:30 KST [InstallKubeBinariesModule] Enable kubelet service
10:15:31 KST success: [node-61]
10:15:31 KST success: [node-62]
10:15:31 KST [InstallKubeBinariesModule] Generate kubelet env
10:15:32 KST success: [node-62]
10:15:32 KST success: [node-61]
10:15:32 KST [InitKubernetesModule] Generate kubeadm config
10:15:32 KST success: [node-61]
10:15:32 KST [InitKubernetesModule] Generate audit policy
10:15:32 KST skipped: [node-61]
10:15:32 KST [InitKubernetesModule] Generate audit webhook
10:15:32 KST skipped: [node-61]
10:15:32 KST [InitKubernetesModule] Init cluster using kubeadm
10:16:01 KST stdout: [node-61]
W1231 10:15:32.629110   12412 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1231 10:15:32.632789   12412 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1231 10:15:32.639143   12412 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.26.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost node-52 node-52.cluster.local node-61 node-61.cluster.local node-62 node-62.cluster.local] and IPs [10.233.0.1 192.168.0.61 127.0.0.1 192.168.0.62 192.168.0.52]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.507088 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node-61 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node-61 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: nymslc.tcatut45ltnh6v4d
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join lb.kubesphere.local:6443 --token nymslc.tcatut45ltnh6v4d \
	--discovery-token-ca-cert-hash sha256:e743a64234a522b7951a4cbefa0efbe16cc5c44142558c6731a4115dbe872752 \
	--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join lb.kubesphere.local:6443 --token nymslc.tcatut45ltnh6v4d \
	--discovery-token-ca-cert-hash sha256:e743a64234a522b7951a4cbefa0efbe16cc5c44142558c6731a4115dbe872752
10:16:01 KST success: [node-61]
10:16:01 KST [InitKubernetesModule] Copy admin.conf to ~/.kube/config
10:16:01 KST success: [node-61]
10:16:01 KST [InitKubernetesModule] Remove master taint
10:16:01 KST skipped: [node-61]
10:16:01 KST [ClusterDNSModule] Generate coredns service
10:16:02 KST success: [node-61]
10:16:02 KST [ClusterDNSModule] Override coredns service
10:16:02 KST stdout: [node-61]
service "kube-dns" deleted
10:16:06 KST stdout: [node-61]
service/coredns created
Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/system:coredns configured
10:16:06 KST success: [node-61]
10:16:06 KST [ClusterDNSModule] Generate nodelocaldns
10:16:06 KST success: [node-61]
10:16:06 KST [ClusterDNSModule] Deploy nodelocaldns
10:16:06 KST stdout: [node-61]
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
10:16:06 KST success: [node-61]
10:16:06 KST [ClusterDNSModule] Generate nodelocaldns configmap
10:16:07 KST success: [node-61]
10:16:07 KST [ClusterDNSModule] Apply nodelocaldns configmap
10:16:07 KST stdout: [node-61]
configmap/nodelocaldns created
10:16:07 KST success: [node-61]
10:16:07 KST [KubernetesStatusModule] Get kubernetes cluster status
10:16:08 KST stdout: [node-61]
v1.26.5
10:16:08 KST stdout: [node-61]
node-61   v1.26.5   [map[address:192.168.0.61 type:InternalIP] map[address:node-61 type:Hostname]]
10:16:08 KST stdout: [node-61]
W1231 10:16:08.428076   13189 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1231 10:16:08.432272   13189 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1231 10:16:08.435562   13189 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
cebf0fdcdd73d3f6b3416c924933f998d3b5936029bd064ec1bb08f84687cf11
10:16:08 KST stdout: [node-61]
secret/kubeadm-certs patched
10:16:08 KST stdout: [node-61]
secret/kubeadm-certs patched
10:16:09 KST stdout: [node-61]
secret/kubeadm-certs patched
10:16:09 KST stdout: [node-61]
7x9uiz.ytj58gxc9tttjvvv
10:16:09 KST success: [node-61]
10:16:09 KST [JoinNodesModule] Generate kubeadm config
10:16:09 KST skipped: [node-61]
10:16:09 KST success: [node-62]
10:16:09 KST [JoinNodesModule] Generate audit policy
10:16:09 KST skipped: [node-61]
10:16:09 KST [JoinNodesModule] Generate audit webhook
10:16:09 KST skipped: [node-61]
10:16:09 KST [JoinNodesModule] Join control-plane node
10:16:09 KST skipped: [node-61]
10:16:09 KST [JoinNodesModule] Join worker node
10:16:32 KST stdout: [node-62]
W1231 10:16:09.980914    8027 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1231 10:16:16.221352    8027 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
10:16:32 KST success: [node-62]
10:16:32 KST [JoinNodesModule] Copy admin.conf to ~/.kube/config
10:16:32 KST skipped: [node-61]
10:16:32 KST [JoinNodesModule] Remove master taint
10:16:32 KST skipped: [node-61]
10:16:32 KST [JoinNodesModule] Add worker label to all nodes
10:16:33 KST stdout: [node-61]
node/node-62 labeled
10:16:33 KST success: [node-61]
10:16:33 KST [InternalLoadbalancerModule] Generate haproxy.cfg
10:16:33 KST success: [node-62]
10:16:33 KST [InternalLoadbalancerModule] Calculate the MD5 value according to haproxy.cfg
10:16:33 KST success: [node-62]
10:16:33 KST [InternalLoadbalancerModule] Generate haproxy manifest
10:16:33 KST success: [node-62]
10:16:33 KST [InternalLoadbalancerModule] Update kubelet config
10:16:33 KST stdout: [node-61]
server: https://lb.kubesphere.local:6443
10:16:33 KST stdout: [node-62]
server: https://lb.kubesphere.local:6443
10:16:34 KST success: [node-62]
10:16:34 KST success: [node-61]
10:16:34 KST [InternalLoadbalancerModule] Update kube-proxy configmap
10:16:35 KST success: [node-61]
10:16:35 KST [InternalLoadbalancerModule] Update /etc/hosts
10:16:35 KST success: [node-62]
10:16:35 KST success: [node-61]
10:16:35 KST [DeployNetworkPluginModule] Generate calico
10:16:35 KST success: [node-61]
10:16:35 KST [DeployNetworkPluginModule] Deploy calico
10:16:37 KST stdout: [node-61]
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
10:16:37 KST success: [node-61]
10:16:37 KST [ConfigureKubernetesModule] Configure kubernetes
10:16:37 KST success: [node-61]
10:16:37 KST [ChownModule] Chown user $HOME/.kube dir
10:16:37 KST success: [node-62]
10:16:37 KST success: [node-61]
10:16:37 KST [AutoRenewCertsModule] Generate k8s certs renew script
10:16:37 KST success: [node-61]
10:16:37 KST [AutoRenewCertsModule] Generate k8s certs renew service
10:16:37 KST success: [node-61]
10:16:37 KST [AutoRenewCertsModule] Generate k8s certs renew timer
10:16:38 KST success: [node-61]
10:16:38 KST [AutoRenewCertsModule] Enable k8s certs renew service
10:16:39 KST success: [node-61]
10:16:39 KST [SaveKubeConfigModule] Save kube config as a configmap
10:16:39 KST success: [LocalHost]
10:16:39 KST [AddonsModule] Install addons
10:16:39 KST success: [LocalHost]
10:16:39 KST [DeployStorageClassModule] Generate OpenEBS manifest
10:16:39 KST success: [node-61]
10:16:39 KST [DeployStorageClassModule] Deploy OpenEBS as cluster default StorageClass
10:16:45 KST success: [node-61]
10:16:45 KST [DeployKubeSphereModule] Generate KubeSphere ks-installer crd manifests
10:16:45 KST success: [node-61]
10:16:45 KST [DeployKubeSphereModule] Apply ks-installer
10:16:46 KST stdout: [node-61]
namespace/kubesphere-system created
serviceaccount/ks-installer created
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io created
clusterrole.rbac.authorization.k8s.io/ks-installer created
clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
deployment.apps/ks-installer created
10:16:46 KST success: [node-61]
10:16:46 KST [DeployKubeSphereModule] Add config to ks-installer manifests
10:16:46 KST success: [node-61]
10:16:46 KST [DeployKubeSphereModule] Create the kubesphere namespace
10:16:47 KST success: [node-61]
10:16:47 KST [DeployKubeSphereModule] Setup ks-installer config
10:16:47 KST stdout: [node-61]
secret/kube-etcd-client-certs created
10:16:47 KST success: [node-61]
10:16:47 KST [DeployKubeSphereModule] Apply ks-installer
10:16:51 KST stdout: [node-61]
namespace/kubesphere-system unchanged
serviceaccount/ks-installer unchanged
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
deployment.apps/ks-installer unchanged
clusterconfiguration.installer.kubesphere.io/ks-installer created
10:16:51 KST success: [node-61]
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.0.61:30010
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2023-12-31 10:41:40
#####################################################
10:41:46 KST success: [node-61]
10:41:46 KST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.

Please check the result using the command:

	kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f