This week, as last week, we build a Kubernetes cluster on AWS with kops, using the VPC CNI.
We will verify pod-to-pod communication with packet dumps,
and check the limit on the number of pods per node under the VPC CNI.
We will handle PV and PVC with AWS EBS
and look at volume snapshots.
In addition, we will take a look at AWS EFS, FSx, and File Cache.
Before building the lab environment, let's analyze the one-click CloudFormation script (kops-oneclick-f1.yaml) that the instructor (Gasida) provided. (This kind of thing is fun.)
curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/K8S/kops-oneclick-f1.yaml
# It is an AWS CloudFormation template; being familiar with OpenStack Heat templates, I can roughly follow it. (Reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html )
AWSTemplateFormatVersion: '2010-09-09'
...
# The parameters include the keypair, IAM credentials, S3 bucket, node counts, VPC CIDR block, and the deployment target region.
...
## The Resources section creates each AWS resource; the Ref function lets one resource reference another.
Resources:
## Create the VPC and declare its network range.
MyVPC:
Type: AWS::EC2::VPC
Properties:
EnableDnsSupport: true
EnableDnsHostnames: true
CidrBlock: 10.0.0.0/16
Tags:
- Key: Name
Value: My-VPC
## Create the internet gateway.
MyIGW:
Type: AWS::EC2::InternetGateway
Properties:
Tags:
- Key: Name
Value: My-IGW
## Attach the internet gateway created above to my VPC.
MyIGWAttachment:
Type: AWS::EC2::VPCGatewayAttachment
Properties:
InternetGatewayId: !Ref MyIGW
VpcId: !Ref MyVPC
## Create a route table for "MyVPC".
MyPublicRT:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref MyVPC
Tags:
- Key: Name
Value: My-Public-RT
## Set the default route for destination 0.0.0.0/0,
## pointing at the internet gateway created above as the default gateway.
DefaultPublicRoute:
Type: AWS::EC2::Route
DependsOn: MyIGWAttachment
Properties:
RouteTableId: !Ref MyPublicRT
DestinationCidrBlock: 0.0.0.0/0
GatewayId: !Ref MyIGW
## Declare a subnet in "MyVPC"; the AZ is set to the first element of the array returned by the GetAZs function.
MyPublicSN:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref MyVPC
AvailabilityZone: !Select [ 0, !GetAZs '' ]
CidrBlock: 10.0.0.0/24
Tags:
- Key: Name
Value: My-Public-SN
## Associate the route table created above with the MyPublicSN subnet.
MyPublicSNRouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
RouteTableId: !Ref MyPublicRT
SubnetId: !Ref MyPublicSN
## EC2 security group: allow ports 22 and 80 only from SgIngressSshCidr, which is passed in as a parameter (SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32).
KOPSEC2SG:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: kops ec2 Security Group
VpcId: !Ref MyVPC
Tags:
- Key: Name
Value: KOPS-EC2-SG
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: '22'
ToPort: '22'
CidrIp: !Ref SgIngressSshCidr
- IpProtocol: tcp
FromPort: '80'
ToPort: '80'
CidrIp: !Ref SgIngressSshCidr
# EC2 instance: a t3.small instance whose AMI image and keypair come from parameters, with a network interface created in the subnet above.
KOPSEC2:
Type: AWS::EC2::Instance
Properties:
InstanceType: t3.small
ImageId: !Ref LatestAmiId
KeyName: !Ref KeyName
Tags:
- Key: Name
Value: kops-ec2
NetworkInterfaces:
- DeviceIndex: 0
SubnetId: !Ref MyPublicSN
GroupSet:
- !Ref KOPSEC2SG
AssociatePublicIpAddress: true
PrivateIpAddress: 10.0.0.10
## Script that runs inside the guest OS once the instance boots; executed by cloud-init.
UserData:
## Encode the userdata string in Base64.
Fn::Base64:
!Sub |
#!/bin/bash
## change the hostname
hostnamectl --static set-hostname kops-ec2
# Change Timezone
ln -sf /usr/share/zoneinfo/Asia/Seoul /etc/localtime
# Install Packages
cd /root
yum -y install tree jq git htop
## Download and install the latest stable kubectl.
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
## Download and install the latest kops.
curl -Lo kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops
mv kops /usr/local/bin/kops
## Download and install the AWS CLI.
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip >/dev/null 2>&1
sudo ./aws/install
export PATH=/usr/local/bin:$PATH
source ~/.bash_profile
## install aws bash auto-completion
complete -C '/usr/local/bin/aws_completer' aws
## generate an ssh rsa key (no passphrase)
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
echo 'alias vi=vim' >> /etc/profile
## switch straight to root when logging in as ec2-user
echo 'sudo su -' >> /home/ec2-user/.bashrc
## download and install helm3 and yh (yaml highlighter)
curl -s https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
wget https://github.com/andreazorzetto/yh/releases/download/v0.4.0/yh-linux-amd64.zip
unzip yh-linux-amd64.zip
mv yh /usr/local/bin/
## Use the KubernetesVersion parameter for the K8S version.
export KUBERNETES_VERSION=${KubernetesVersion}
echo "export KUBERNETES_VERSION=${KubernetesVersion}" >> /etc/profile
## Configure IAM credentials from the IAM user credential parameters.
export AWS_ACCESS_KEY_ID=${MyIamUserAccessKeyID}
export AWS_SECRET_ACCESS_KEY=${MyIamUserSecretAccessKey}
export AWS_DEFAULT_REGION=${AWS::Region}
export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
echo "export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID" >> /etc/profile
echo "export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY" >> /etc/profile
echo "export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION" >> /etc/profile
echo 'export AWS_PAGER=""' >>/etc/profile
echo "export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)" >> /etc/profile
## Use the ClusterBaseName parameter for CLUSTER_NAME.
export KOPS_CLUSTER_NAME=${ClusterBaseName}
echo "export KOPS_CLUSTER_NAME=$KOPS_CLUSTER_NAME" >> /etc/profile
## Set the S3 state store bucket name
export KOPS_STATE_STORE=s3://${S3StateStore}
echo "export KOPS_STATE_STORE=s3://${S3StateStore}" >> /etc/profile
## Clone the instructor's PKOS GitHub repo
git clone https://github.com/gasida/PKOS.git /root/pkos
## Download and install krew, the kubectl plugin manager
curl -LO https://github.com/kubernetes-sigs/krew/releases/download/v0.4.3/krew-linux_amd64.tar.gz
tar zxvf krew-linux_amd64.tar.gz
./krew-linux_amd64 install krew
export PATH="$PATH:/root/.krew/bin"
echo 'export PATH="$PATH:/root/.krew/bin"' >> /etc/profile
## Set up kubectl autocompletion and install kube-ps1 (context/ns prompt display)
echo 'source <(kubectl completion bash)' >> /etc/profile
echo 'alias k=kubectl' >> /etc/profile
echo 'complete -F __start_kubectl k' >> /etc/profile
git clone https://github.com/jonmosco/kube-ps1.git /root/kube-ps1
cat <<"EOT" >> /root/.bash_profile
source /root/kube-ps1/kube-ps1.sh
KUBE_PS1_SYMBOL_ENABLE=false
function get_cluster_short() {
echo "$1" | cut -d . -f1
}
KUBE_PS1_CLUSTER_FUNCTION=get_cluster_short
KUBE_PS1_SUFFIX=') '
PS1='$(kube_ps1)'$PS1
EOT
## Install krew plugins
## ctx (switch context), ns (switch namespace)
## get-all (covers more object kinds than `get all`)
## ktop (visual management tool, like k9s)
## df-pv (per-PV usage)
## mtail (tail logs of multiple pods)
## tree (show object tree structure)
kubectl krew install ctx ns get-all ktop # df-pv mtail tree
## Install Docker
amazon-linux-extras install docker -y
systemctl start docker && systemctl enable docker
## kops creates the EC2 instances and deploys the k8s cluster;
## first generate kops.yaml with a dry run.
kops create cluster --zones=${AvailabilityZone1},${AvailabilityZone2} --networking amazonvpc --cloud aws \
--master-size ${MasterNodeInstanceType} --node-size ${WorkerNodeInstanceType} --node-count=${WorkerNodeCount} \
--network-cidr ${VpcBlock} --ssh-public-key ~/.ssh/id_rsa.pub --kubernetes-version "${KubernetesVersion}" --dry-run \
--output yaml > kops.yaml
## Add the addon settings below.
cat <<EOT > addon.yaml
certManager:
enabled: true
awsLoadBalancerController:
enabled: true
externalDns:
provider: external-dns
metricsServer:
enabled: true
kubeProxy:
metricsBindAddress: 0.0.0.0
kubeDNS:
provider: CoreDNS
nodeLocalDNS:
enabled: true
memoryRequest: 5Mi
cpuRequest: 25m
EOT
sed -i -n -e '/aws$/r addon.yaml' -e '1,$p' kops.yaml
## Also add the max-pods-per-node setting
cat <<EOT > maxpod.yaml
maxPods: 100
EOT
sed -i -n -e '/anonymousAuth/r maxpod.yaml' -e '1,$p' kops.yaml
## Add the VPC CNI ENABLE_PREFIX_DELEGATION setting
sed -i 's/amazonvpc: {}/amazonvpc:/g' kops.yaml
cat <<EOT > awsvpc.yaml
env:
- name: ENABLE_PREFIX_DELEGATION
value: "true"
EOT
sed -i -n -e '/amazonvpc/r awsvpc.yaml' -e '1,$p' kops.yaml
## Create the cluster from the prepared kops.yaml.
cat kops.yaml | kops create -f -
kops update cluster --name $KOPS_CLUSTER_NAME --ssh-public-key ~/.ssh/id_rsa.pub --yes
## Use the kubeconfig of the k8s cluster created by kops
echo "kops export kubeconfig --admin" >> /etc/profile
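The `sed -n -e '/pattern/r FILE' -e '1,$p'` idiom used three times above (addon.yaml, maxpod.yaml, awsvpc.yaml) splices FILE in right after every line matching the pattern. A side-effect-free sketch with throwaway file names (base.yaml and extra.yaml are examples, not the real kops.yaml):

```shell
#!/bin/sh
# Demonstrate the sed append-after-match idiom from the userdata script.
cat > base.yaml <<'EOT'
networking:
  amazonvpc:
EOT
cat > extra.yaml <<'EOT'
    env:
    - name: ENABLE_PREFIX_DELEGATION
      value: "true"
EOT
# -n suppresses automatic printing; '1,$p' re-prints every line, and the
# 'r' command queues extra.yaml for output right after each /amazonvpc/ match.
sed -n -e '/amazonvpc/r extra.yaml' -e '1,$p' base.yaml > merged.yaml
cat merged.yaml
```

The original uses `sed -i` to edit kops.yaml in place; the redirect here keeps the demo side-effect free.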
## CloudFormation Outputs usable after stack creation: expose the created EC2 instance's public IP as an output value.
Outputs:
KOPSEC2IP:
Value: !GetAtt KOPSEC2.PublicIp
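For intuition, the !Sub + Fn::Base64 pipeline can be mimicked locally: !Sub replaces `${Param}` placeholders with parameter values, and Fn::Base64 encodes the result, which is what the instance stores as userdata. A sketch with made-up parameter values (example.net and example-bucket are placeholders, not the real stack inputs):

```shell
#!/bin/sh
# Emulate CloudFormation's !Sub followed by Fn::Base64 on a tiny userdata
# fragment. The parameter values below are made-up placeholders.
ClusterBaseName="example.net"
S3StateStore="example-bucket"
cat > userdata.tpl <<'EOT'
#!/bin/bash
export KOPS_CLUSTER_NAME=${ClusterBaseName}
export KOPS_STATE_STORE=s3://${S3StateStore}
EOT
# !Sub step: substitute only the declared parameters.
sed -e "s/\${ClusterBaseName}/$ClusterBaseName/" \
    -e "s/\${S3StateStore}/$S3StateStore/" userdata.tpl > userdata.sh
# Fn::Base64 step: encode the final script, then decode to verify round-trip.
base64 < userdata.sh > userdata.b64
base64 -d < userdata.b64
```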
Now let's deploy the lab environment. This lab uses the high-spec c5d instance type - in other words, it's expensive. Deploy it for the lab, delete it when not in use, and redeploy to check things when curious; let's do it that way. (Frugality pays.)
## Download the kops YAML file
curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/K8S/kops-oneclick-f1.yaml
## Deploy the CloudFormation stack: change the node instance types - MasterNodeInstanceType=t3.medium WorkerNodeInstanceType=c5d.large
# Save the command below as "kops-oneclick-f1.sh". (For reuse later.)
aws cloudformation deploy --template-file kops-oneclick-f1.yaml --stack-name mykops --parameter-overrides \
KeyName=spark SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32 \
MyIamUserAccessKeyID=AKI..57W MyIamUserSecretAccessKey='F4KdY..og2d' \
ClusterBaseName='sparkandassociates.net' S3StateStore='pkos2' \
MasterNodeInstanceType=t3.medium WorkerNodeInstanceType=c5d.large \
--region ap-northeast-2
# After the CloudFormation stack deploy completes, print the kOps EC2 IP (using the k/v under Outputs in the yaml)
aws cloudformation describe-stacks --stack-name mykops --query 'Stacks[*].Outputs[0].OutputValue' --output text
# Check that ssh access works.
ssh -i ./spark.pem ec2-user@$(aws cloudformation describe-stacks --stack-name mykops --query 'Stacks[*].Outputs[0].OutputValue' --output text)
## The post-script progress can be followed in the cloud-init-output.log (on the kops-ec2 node).
## If the k8s master and 2 worker nodes are not deployed after about 15 minutes, check the error log.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
I0314 14:42:18.254922 3105 create_cluster.go:878] Using SSH public key: /root/.ssh/id_rsa.pub
Error: cluster "sparkandassociates.net" already exists; use 'kops update cluster' to apply changes
Error: error parsing file "-": Object 'Kind' is missing in 'null'
--ssh-public-key on update is deprecated - please use `kops create secret --name sparkandassociates.net sshpublickey admin -i ~/.ssh/id_rsa.pub` instead
I0314 14:42:18.661378 3125 update_cluster.go:238] Using SSH public key: /root/.ssh/id_rsa.pub
Error: exactly one 'admin' SSH public key can be specified when running with AWS; please delete a key using `kops delete secret`
Cloud-init v. 19.3-46.amzn2 finished at Tue, 14 Mar 2023 05:42:25 +0000. Datasource DataSourceEc2. Up 695.61 seconds
It says the cluster "sparkandassociates.net" already exists - apparently leftovers from several reinstall attempts. Let's check.
Cluster state was left behind in the pkos2 S3 bucket.
# Delete the S3 bucket and create it again.
[root@san-1 pkos]# aws s3 mb s3://pkos2 --region ap-northeast-2
make_bucket: pkos2
[root@san-1 pkos]# aws s3 ls
2023-03-14 06:23:54 pkos2
Second issue:
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
/var/lib/cloud/instance/scripts/part-001: line 90: 3104 Segmentation fault kops create cluster --zones=ap-northeast-2a,ap-northeast-2c --networking amazonvpc --cloud aws --master-size t3.medium --node-size c5d.large --node-count=2 --network-cidr 172.30.0.0/16 --ssh-public-key ~/.ssh/id_rsa.pub --kubernetes-version "1.24.11" --dry-run --output yaml > kops.yaml
/var/lib/cloud/instance/scripts/part-001: line 127: 3115 Done cat kops.yaml
3116 Segmentation fault | kops create -f -
/var/lib/cloud/instance/scripts/part-001: line 128: 3117 Segmentation fault kops update cluster --name $KOPS_CLUSTER_NAME --ssh-public-key ~/.ssh/id_rsa.pub --yes
Cloud-init v. 19.3-46.amzn2 finished at Tue, 14 Mar 2023 06:41:53 +0000. Datasource DataSourceEc2. Up 999.85 seconds
The dry run failed.
## Check the script produced by cloud-init after variable substitution.
/var/lib/cloud/instance/scripts/part-001
[root@kops-ec2 ~]# cat kops.yaml
[root@kops-ec2 ~]#
After deleting the A record that had been created in Route 53, the cluster was created normally.
Check the cloud-init-output.log file:
kOps has set your kubectl context to sparkandassociates.net
W0315 13:37:59.908065 3042 update_cluster.go:347] Exported kubeconfig with no user authentication; use --admin, --user or --auth-plugin flags with `kops export kubeconfig`
Cluster is starting. It should be ready in a few minutes.
Suggestions:
* validate cluster: kops validate cluster --wait 10m
* list nodes: kubectl get nodes --show-labels
* ssh to a control-plane node: ssh -i ~/.ssh/id_rsa ubuntu@api.sparkandassociates.net
* the ubuntu user is specific to Ubuntu. If not using Ubuntu please use the appropriate user based on your OS.
* read about installing addons at: https://kops.sigs.k8s.io/addons.
Cloud-init v. 19.3-46.amzn2 finished at Wed, 15 Mar 2023 04:37:59 +0000. Datasource DataSourceEc2. Up 125.37 seconds
Progress can be monitored with kops validate cluster.
(sparkandassociates:N/A) [root@kops-ec2 ~]# kops validate cluster --wait 10m
Validating cluster sparkandassociates.net
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
control-plane-ap-northeast-2a ControlPlane t3.medium 1 1 ap-northeast-2a
nodes-ap-northeast-2a Node c5d.large 1 1 ap-northeast-2a
nodes-ap-northeast-2c Node c5d.large 1 1 ap-northeast-2c
NODE STATUS
NAME ROLE READY
VALIDATION ERRORS
KIND NAME MESSAGE
dns apiserver Validation Failed
The external-dns Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a control plane node to start, external-dns to launch, and DNS to propagate. The protokube container and external-dns deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
W0315 13:44:52.957150 4265 validate_cluster.go:232] (will retry): cluster not yet healthy
## Confirm creation completed.
(sparkandassociates:N/A) [root@kops-ec2 ~]# kops validate cluster --wait 10m
Validating cluster sparkandassociates.net
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
control-plane-ap-northeast-2a ControlPlane t3.medium 1 1 ap-northeast-2a
nodes-ap-northeast-2a Node c5d.large 1 1 ap-northeast-2a
nodes-ap-northeast-2c Node c5d.large 1 1 ap-northeast-2c
NODE STATUS
NAME ROLE READY
i-05538a0cedc2ceac8 node True
i-08a4f488be204357c control-plane True
i-0f94a3e2d9abe939f node True
Your cluster sparkandassociates.net is ready
# Check the metrics server: metrics are collected via cAdvisor at 15-second intervals
kubectl top node
(sparkandassociates:N/A) [root@kops-ec2 ~]# k top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
i-05538a0cedc2ceac8 32m 1% 1009Mi 28%
i-08a4f488be204357c 192m 9% 2027Mi 53%
i-0f94a3e2d9abe939f 24m 1% 965Mi 26%
# The default LimitRange policy forces a 100m (0.1 CPU) minimum request per container,
# which gets in the way when launching 100 test pods on one worker node, so delete it for the test.
(sparkandassociates:N/A) [root@kops-ec2 ~]# kubectl describe limitranges
Name: limits
Namespace: default
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Container cpu - - 100m - -
(sparkandassociates:N/A) [root@kops-ec2 ~]# kubectl delete limitranges limits
limitrange "limits" deleted
(sparkandassociates:N/A) [root@kops-ec2 ~]# kubectl get limitranges
No resources found in default namespace.
(sparkandassociates:N/A) [root@kops-ec2 ~]#
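For reference, a manifest equivalent to the deleted LimitRange (reconstructed from the `describe` output above; only the CPU default request is set) would look roughly like this:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
  namespace: default
spec:
  limits:
  - type: Container
    defaultRequest:        # the "Default Request" column in the describe output
      cpu: 100m
```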
Okay, the deployment is done. Next up is networking.
The diagram below captures the VPC CNI at a glance.
(Source: PKOS study materials)
Let's verify it ourselves.
# Check CNI info
(sparkandassociates:N/A) [root@kops-ec2 ~]# kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
amazon-k8s-cni-init:v1.12.2
amazon-k8s-cni:v1.12.2
# Check node IPs
aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output table
(sparkandassociates:N/A) [root@kops-ec2 ~]# aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output table | grep -v 172.31.
----------------------------------------------------------------------------------------------------------------
| DescribeInstances |
+---------------------------------------------------------------+----------------+-----------------+-----------+
| InstanceName | PrivateIPAdd | PublicIPAdd | Status |
+---------------------------------------------------------------+----------------+-----------------+-----------+
| nodes-ap-northeast-2c.sparkandassociates.net | 172.30.66.26 | 15.164.221.63 | running |
| kops-ec2 | 10.0.0.10 | 13.124.35.72 | running |
| control-plane-ap-northeast-2a.masters.sparkandassociates.net | 172.30.63.222 | 13.125.181.109 | running |
| nodes-ap-northeast-2a.sparkandassociates.net | 172.30.32.225 | 3.35.141.26 | running |
+---------------------------------------------------------------+----------------+-----------------+-----------+
# Check pod IPs
kubectl get pod -n kube-system -o=custom-columns=NAME:.metadata.name,IP:.status.podIP,STATUS:.status.phase
# Check pod names
kubectl get pod -A -o name
# Check pod count
kubectl get pod -A -o name | wc -l
kubectl ktop # pod info takes a moment to appear
Connect to the master node and check.
# Install tools
sudo apt install -y tree jq net-tools
# Check CNI info
ls /var/log/aws-routed-eni
cat /var/log/aws-routed-eni/plugin.log | jq
cat /var/log/aws-routed-eni/ipamd.log | jq
# Check network info: eniY is the host-side peer of the pod network namespace's veth pair
ip -br -c addr
ip -c addr
ip -c route
sudo iptables -t nat -S
sudo iptables -t nat -L -n -v
ubuntu@i-08a4f488be204357c:~$ ip -br -c addr
lo UNKNOWN 127.0.0.1/8 ::1/128
ens5 UP 172.30.63.222/19 fe80::d1:a1ff:febf:2b56/64
nodelocaldns DOWN 169.254.20.10/32
eni6d8cdfa2db1@if3 UP fe80::70f8:9fff:fe74:618d/64
eni95ee4851614@if3 UP fe80::e0fe:b9ff:fe1f:f5aa/64
enib6e94747ace@if3 UP fe80::5017:fdff:fe5a:24f6/64
enibafb7cbc19f@if3 UP fe80::b45f:96ff:febf:bd42/64
ubuntu@i-08a4f488be204357c:~$ ip -c route
default via 172.30.32.1 dev ens5 proto dhcp src 172.30.63.222 metric 100
172.30.32.0/19 dev ens5 proto kernel scope link src 172.30.63.222
172.30.32.1 dev ens5 proto dhcp scope link src 172.30.63.222 metric 100
172.30.56.192 dev eni6d8cdfa2db1 scope link
172.30.56.193 dev eni95ee4851614 scope link
172.30.56.194 dev enib6e94747ace scope link
172.30.56.195 dev enibafb7cbc19f scope link
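The per-pod /32 routes above mean the kernel picks the pod's eni device by longest prefix match: a /32 host route beats the /19 subnet route, which beats the default route. A simplified lookup in shell, with the routes from the output above hard-coded:

```shell
#!/bin/sh
# Simplified longest-prefix-match over the master node's routes shown above.
ip2int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}
lookup() {
  ip=$(ip2int "$1")
  # /32 pod host routes (from the `ip -c route` output above)
  for pod in 172.30.56.192:eni6d8cdfa2db1 172.30.56.193:eni95ee4851614 \
             172.30.56.194:enib6e94747ace 172.30.56.195:enibafb7cbc19f; do
    [ "$ip" -eq "$(ip2int "${pod%:*}")" ] && { echo "${pod#*:}"; return; }
  done
  # 172.30.32.0/19 on-link via ens5
  net=$(ip2int 172.30.32.0); mask=$(( 0xFFFFFFFF << 13 & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq "$net" ] && { echo ens5; return; }
  echo "ens5 via 172.30.32.1"   # default route
}
lookup 172.30.56.193   # pod veth: /32 route wins
lookup 172.30.63.222   # node's own subnet: /19 route
lookup 8.8.8.8         # external: default route
```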
Connect to the worker nodes and check in the same way.
# Check worker node public IPs
aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value}" --filters Name=instance-state-name,Values=running --output table
# Set worker node public IP variables
W1PIP=15.164.221.63
W2PIP=3.35.141.26
# [worker node 1~2] SSH in; after connecting, install tools and inspect each
ssh -i ~/.ssh/id_rsa ubuntu@$W1PIP
ssh -i ~/.ssh/id_rsa ubuntu@$W2PIP
--------------------------------------------------
# Install tools
sudo apt install -y tree jq net-tools
# Check CNI info
ls /var/log/aws-routed-eni
cat /var/log/aws-routed-eni/plugin.log | jq
cat /var/log/aws-routed-eni/ipamd.log | jq
# Check network info
ip -br -c addr
ip -c addr
ip -c route
sudo iptables -t nat -S
sudo iptables -t nat -L -n -v
(sparkandassociates:N/A) [root@kops-ec2 ~]# kubectl get pod -n kube-system -l app=ebs-csi-node -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ebs-csi-node-7gd5l 3/3 Running 0 19h 172.30.56.192 i-08a4f488be204357c <none> <none>
ebs-csi-node-jghkn 3/3 Running 0 19h 172.30.94.160 i-05538a0cedc2ceac8 <none> <none>
ebs-csi-node-rz2x7 3/3 Running 0 19h 172.30.60.112 i-0f94a3e2d9abe939f <none> <none>
(sparkandassociates:N/A) [root@kops-ec2 ~]#
ubuntu@i-05538a0cedc2ceac8:~$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.30.64.1 0.0.0.0 UG 100 0 0 ens5
172.30.64.0 0.0.0.0 255.255.224.0 U 0 0 0 ens5
172.30.64.1 0.0.0.0 255.255.255.255 UH 100 0 0 ens5
172.30.88.64 0.0.0.0 255.255.255.255 UH 0 0 0 eniff5a9530b8d
172.30.94.160 0.0.0.0 255.255.255.255 UH 0 0 0 enif8808f94e33
172.30.94.161 0.0.0.0 255.255.255.255 UH 0 0 0 eni4122c9c8c4f
172.30.94.162 0.0.0.0 255.255.255.255 UH 0 0 0 eni942832c30bf
172.30.94.163 0.0.0.0 255.255.255.255 UH 0 0 0 eni3e424b82595
(Source: https://github.com/aws/amazon-vpc-cni-k8s/blob/master/docs/cni-proposal.md)
With the VPC CNI, the pod network range and the worker node range inside the VPC are the same, so pods communicate directly without a separate overlay.
Let's confirm it with our own eyes. Create test pods - nicolaka/netshoot
# [terminal 1~2] monitor worker nodes 1 and 2
ssh -i ~/.ssh/id_rsa ubuntu@$W1PIP
watch -d "ip link | egrep 'ens5|eni' ;echo;echo "[ROUTE TABLE]"; route -n | grep eni"
ssh -i ~/.ssh/id_rsa ubuntu@$W2PIP
watch -d "ip link | egrep 'ens5|eni' ;echo;echo "[ROUTE TABLE]"; route -n | grep eni"
# Create test pods (netshoot-pod)
cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: netshoot-pod
spec:
replicas: 2
selector:
matchLabels:
app: netshoot-pod
template:
metadata:
labels:
app: netshoot-pod
spec:
containers:
- name: netshoot-pod
image: nicolaka/netshoot
command: ["tail"]
args: ["-f", "/dev/null"]
terminationGracePeriodSeconds: 0
EOF
# Set pod name variables
PODNAME1=$(kubectl get pod -l app=netshoot-pod -o jsonpath={.items[0].metadata.name})
PODNAME2=$(kubectl get pod -l app=netshoot-pod -o jsonpath={.items[1].metadata.name})
# Check the pods
kubectl get pod -o wide
kubectl get pod -o=custom-columns=NAME:.metadata.name,IP:.status.podIP
When a pod is created, an eniY@ifN interface appears and an entry is added to the route table.
The MTU is set to 9001 (jumbo frames), matching the rest of AWS.
Check the MTU on the AWS side; you can confirm the eniY@ifN interface being added.
Let's find the process that writes the aws-routed-eni logs: that process creates the CNI interfaces and updates the route table.
The k8s cluster contains several AWS-related pods; let's take a look at the aws-node-* pods.
(sparkandassociates:default) [root@kops-ec2 ~]# k get pod -A -o wide | grep aws
kube-system aws-cloud-controller-manager-hr2qz 1/1 Running 0 27h 172.30.63.222 i-08a4f488be204357c <none> <none>
kube-system aws-load-balancer-controller-55bd49cfc7-5kq7q 1/1 Running 0 27h 172.30.63.222 i-08a4f488be204357c <none> <none>
kube-system aws-node-9d9h7 1/1 Running 0 27h 172.30.63.222 i-08a4f488be204357c <none> <none>
kube-system aws-node-lv8gm 1/1 Running 0 27h 172.30.66.26 i-05538a0cedc2ceac8 <none> <none>
kube-system aws-node-r5g6c 1/1 Running 0 27h 172.30.32.225 i-0f94a3e2d9abe939f <none> <none>
(sparkandassociates:default) [root@kops-ec2 ~]# kubectl get ds -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
aws-cloud-controller-manager 1 1 1 1 1 <none> 2d
aws-node 3 3 3 3 3 <none> 2d
ebs-csi-node 3 3 3 3 3 kubernetes.io/os=linux 2d
kops-controller 1 1 1 1 1 <none> 2d
node-local-dns 3 3 3 3 3 <none> 2d
Examining the aws-node daemonset shows where this is configured.
Check the grpc-health-probe code used for the liveness and readiness probes:
https://github.com/aws/amazon-vpc-cni-k8s/blob/master/cmd/grpc-health-probe/main.go
Check the aws-vpc-cni code:
https://github.com/aws/amazon-vpc-cni-k8s/blob/master/cmd/aws-vpc-cni/main.go
The VPC ENIs appear to be controlled through the eniconfigs.crd.k8s.amazonaws.com CRD (CustomResourceDefinition).
VPC CNI documentation:
https://aws.github.io/aws-eks-best-practices/networking/vpc-cni/
At any rate, we've now seen roughly what characteristics a kops-deployed k8s cluster has.
# Set pod IP variables
PODIP1=$(kubectl get pod -l app=netshoot-pod -o jsonpath={.items[0].status.podIP})
PODIP2=$(kubectl get pod -l app=netshoot-pod -o jsonpath={.items[1].status.podIP})
PODNAME1=$(kubectl get pod -l app=netshoot-pod -o jsonpath={.items[0].metadata.name})
PODNAME2=$(kubectl get pod -l app=netshoot-pod -o jsonpath={.items[1].metadata.name})
# From pod 1's shell, ping pod 2
kubectl exec -it $PODNAME1 -- ping -c 2 $PODIP2
# From pod 2's shell, ping pod 1
kubectl exec -it $PODNAME2 -- ping -c 2 $PODIP1
(sparkandassociates:default) [root@kops-ec2 ~]# kubectl exec -it $PODNAME1 -- ping -c 2 $PODIP2
PING 172.30.94.164 (172.30.94.164) 56(84) bytes of data.
64 bytes from 172.30.94.164: icmp_seq=1 ttl=62 time=1.12 ms
64 bytes from 172.30.94.164: icmp_seq=2 ttl=62 time=1.04 ms
--- 172.30.94.164 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 1.035/1.076/1.117/0.041 ms
(sparkandassociates:default) [root@kops-ec2 ~]# ^C
(sparkandassociates:default) [root@kops-ec2 ~]# kubectl exec -it $PODNAME2 -- ping -c 2 $PODIP1
PING 172.30.60.114 (172.30.60.114) 56(84) bytes of data.
64 bytes from 172.30.60.114: icmp_seq=1 ttl=62 time=1.03 ms
64 bytes from 172.30.60.114: icmp_seq=2 ttl=62 time=1.02 ms
# Worker node EC2: check with tcpdump
sudo tcpdump -i any -nn icmp
sudo tcpdump -i ens5 -nn icmp
Traffic flow: iptables SNAT rewrites the source to the node interface IP before the traffic leaves.
(Reference: https://github.com/aws/amazon-vpc-cni-k8s/blob/master/docs/cni-proposal.md )
# Work EC2 (kops-ec2): ping outside from pod-1's shell
kubectl exec -it $PODNAME1 -- ping -c 1 www.google.com
kubectl exec -it $PODNAME1 -- ping -i 0.1 www.google.com
# Worker node EC2: check the public IP, run tcpdump
curl -s ipinfo.io/ip ; echo
sudo tcpdump -i any -nn icmp
sudo tcpdump -i ens5 -nn icmp
(sparkandassociates:default) [root@kops-ec2 ~]# kubectl exec -it $PODNAME2 -- ping -c 1 www.google.com
PING www.google.com (142.250.196.100) 56(84) bytes of data.
64 bytes from nrt12s35-in-f4.1e100.net (142.250.196.100): icmp_seq=1 ttl=45 time=26.4 ms
--- www.google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 26.374/26.374/26.374/0.000 ms
(sparkandassociates:default) [root@kops-ec2 ~]#
--------------------------------------------------
root@i-05538a0cedc2ceac8:~# sudo tcpdump -i any -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked v1), capture size 262144 bytes
06:35:55.774315 IP 172.30.94.164 > 142.250.196.100: ICMP echo request, id 63106, seq 1, length 64
06:35:55.774339 IP 172.30.66.26 > 142.250.196.100: ICMP echo request, id 21114, seq 1, length 64
06:35:55.800666 IP 142.250.196.100 > 172.30.66.26: ICMP echo reply, id 21114, seq 1, length 64
06:35:55.800682 IP 142.250.196.100 > 172.30.94.164: ICMP echo reply, id 63106, seq 1, length 64
# The pod IP (172.30.94.164) is SNATed to the node IP (172.30.66.26) for the communication.
# When a pod talks to the outside, the 'AWS-SNAT-CHAIN-0, AWS-SNAT-CHAIN-1' rules below SNAT it on the way out!
root@i-05538a0cedc2ceac8:~# sudo iptables -t nat -S | grep 'A AWS-SNAT-CHAIN'
-A AWS-SNAT-CHAIN-0 ! -d 172.30.0.0/16 -m comment --comment "AWS SNAT CHAIN" -j AWS-SNAT-CHAIN-1
-A AWS-SNAT-CHAIN-1 ! -o vlan+ -m comment --comment "AWS, SNAT" -m addrtype ! --dst-type LOCAL -j SNAT --to-source 172.30.66.26 --random-fully
root@i-05538a0cedc2ceac8:~#
# For reference, the trailing IP is the IP address of eth0 (the first ENI)
# --random-fully behavior - link1 link2
sudo iptables -t nat -S | grep 'A AWS-SNAT-CHAIN'
-A AWS-SNAT-CHAIN-0 ! -d 172.30.0.0/16 -m comment --comment "AWS SNAT CHAIN" -j AWS-SNAT-CHAIN-1
-A AWS-SNAT-CHAIN-1 ! -o vlan+ -m comment --comment "AWS, SNAT" -m addrtype ! --dst-type LOCAL -j SNAT --to-source 172.30.85.242 --random-fully
## RETURNed below because 'mark 0x4000/0x4000' does not match!
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
...
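The chain above boils down to a two-step decision: traffic to the VPC CIDR (172.30.0.0/16) is left untouched, and everything else leaving a non-vlan interface is SNATed to the node's primary ENI IP. A toy model of that decision (the node IP is taken from the capture above; the /16 check is a simplified prefix match):

```shell
#!/bin/sh
# Toy model of the AWS-SNAT-CHAIN decision shown in the iptables output above.
NODE_IP=172.30.66.26
snat_decision() {
  dst=$1; out_if=$2
  # AWS-SNAT-CHAIN-0: ! -d 172.30.0.0/16 (in-VPC destinations are skipped)
  case "$dst" in
    172.30.*) echo "no-snat (in-VPC)"; return ;;
  esac
  # AWS-SNAT-CHAIN-1: ! -o vlan+ (vlan interfaces are skipped)
  case "$out_if" in
    vlan*) echo "no-snat (vlan interface)"; return ;;
  esac
  echo "SNAT --to-source $NODE_IP"
}
snat_decision 172.30.94.164 ens5   # pod-to-pod inside the VPC: no SNAT
snat_decision 142.250.196.100 ens5 # internet-bound: SNAT to node IP
```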
# Check the counters
Every 2.0s: sudo iptables -v --numeric --table nat --list AWS-SNAT-CHAIN-0; echo ; sudo iptables -v --numeric --table nat --list ... i-05538a0cedc2ceac8: Fri Mar 17 06:40:43 2023
Chain AWS-SNAT-CHAIN-0 (1 references)
pkts bytes target prot opt in out source destination
264K 16M AWS-SNAT-CHAIN-1 all -- * * 0.0.0.0/0 !172.30.0.0/16 /* AWS SNAT CHAIN */
Chain AWS-SNAT-CHAIN-1 (1 references)
pkts bytes target prot opt in out source destination
33030 1983K SNAT all -- * !vlan+ 0.0.0.0/0 0.0.0.0/0 /* AWS, SNAT */ ADDRTYPE match dst-type !LOCAL to:172.30.66.26 random-fully
Chain KUBE-POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
2973 185K RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 mark match ! 0x4000/0x4000
0 0 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK xor 0x4000
0 0 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */ random-fully
# Check conntrack
sudo conntrack -L -n |grep -v '169.254.169'
conntrack v1.4.5 (conntrack-tools): 24 flow entries have been shown.
icmp 1 24 src=172.30.94.164 dst=142.251.42.132 type=8 code=0 id=33110 src=142.251.42.132 dst=172.30.66.26 type=0 code=0 id=29865 mark=128 use=1
tcp 6 102 TIME_WAIT src=172.30.94.164 dst=142.251.42.132 sport=60882 dport=80 src=142.251.42.132 dst=172.30.66.26 sport=80 dport=18054 [ASSURED] mark=128 use=1
# Check t3 instance type info (with a filter)
[root@san-1 yaml]# aws ec2 describe-instance-types --filters Name=instance-type,Values=t3.* \
> --query "InstanceTypes[].{Type: InstanceType, MaxENI: NetworkInfo.MaximumNetworkInterfaces, IPv4addr: NetworkInfo.Ipv4AddressesPerInterface}" \
> --output table
--------------------------------------
| DescribeInstanceTypes |
+----------+----------+--------------+
| IPv4addr | MaxENI | Type |
+----------+----------+--------------+
| 15 | 4 | t3.2xlarge |
| 15 | 4 | t3.xlarge |
| 12 | 3 | t3.large |
| 6 | 3 | t3.medium |
| 2 | 2 | t3.nano |
| 2 | 2 | t3.micro |
| 4 | 3 | t3.small |
+----------+----------+--------------+
# Example calculation of schedulable pods: the aws-node and kube-proxy pods use host networking, which frees up 2 IPs
((MaxENI * (IPv4addr - 1)) + 2)
For t3.medium: ((3 * (6 - 1)) + 2) = 17 >> minus the aws-node and kube-proxy pods, that leaves 15
However, since the IP prefix delegation setting was already enabled, each worker node can run up to 100 pods.
(IPv4 prefix delegation: delegates IPv4 /28 subnets (prefixes) so the number of assignable IPs exceeds the maximum recommended for the instance type)
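The arithmetic can be sanity-checked in shell. The second figure models prefix delegation under the assumption that each secondary-IP slot instead carries a /28 prefix, i.e. 16 addresses:

```shell
#!/bin/sh
# Max-pod arithmetic for t3.medium (MaxENI=3, IPv4addr=6 from the table above).
MAX_ENI=3
IPV4_PER_ENI=6
# Secondary-IP mode: each ENI keeps 1 primary IP; +2 for the host-networking
# pods (aws-node, kube-proxy) that do not consume pod IPs.
SECONDARY_MODE=$(( MAX_ENI * (IPV4_PER_ENI - 1) + 2 ))
echo "secondary-IP mode ceiling: $SECONDARY_MODE"
# Prefix delegation: each slot that held one secondary IP now holds a /28
# prefix (16 addresses), so the theoretical ceiling is far above maxPods=100.
PREFIX_MODE=$(( MAX_ENI * (IPV4_PER_ENI - 1) * 16 + 2 ))
echo "prefix delegation ceiling: $PREFIX_MODE"
```

This is why the cluster can cap maxPods at 100 per worker without running out of addresses.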
(sparkandassociates:default) [root@kops-ec2 pkos]# kubectl describe daemonsets.apps -n kube-system aws-node | egrep 'ENABLE_PREFIX_DELEGATION|WARM_PREFIX_TARGET'
ENABLE_PREFIX_DELEGATION: true
WARM_PREFIX_TARGET: 1
(sparkandassociates:default) [root@kops-ec2 pkos]# kubectl describe node | grep Allocatable: -A6
Allocatable:
cpu: 2
ephemeral-storage: 119703055367
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3670004Ki
pods: 100
--
Allocatable:
cpu: 2
ephemeral-storage: 59763732382
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3854320Ki
pods: 100
--
Allocatable:
cpu: 2
ephemeral-storage: 119703055367
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3698676Ki
pods: 100
# Create 200 pods
(sparkandassociates:default) [root@kops-ec2 ~]# kubectl apply -f ~/pkos/2/nginx-dp.yaml
deployment.apps/nginx-deployment created
(sparkandassociates:default) [root@kops-ec2 ~]# kubectl scale deployment nginx-deployment --replicas=200
deployment.apps/nginx-deployment scaled
(sparkandassociates:default) [root@kops-ec2 pkos]# k get pod | grep Pend| wc -l
13
Each worker node filled up with 100 pods, and 13 pods are Pending. (The workers already had pods running, so the 200-pod ceiling was reached.)
Check the maxPods setting:
(sparkandassociates:default) [root@kops-ec2 pkos]# kops edit cluster
...
kubelet:
anonymousAuth: false
maxPods: 100
...
Check the instance types:
(sparkandassociates:default) [root@kops-ec2 pkos]# kubectl describe nodes | grep "node.kubernetes.io/instance-type"
node.kubernetes.io/instance-type=c5d.large
node.kubernetes.io/instance-type=t3.medium
node.kubernetes.io/instance-type=c5d.large
# Check Nitro instance types
aws ec2 describe-instance-types --filters Name=hypervisor,Values=nitro --query "InstanceTypes[*].[InstanceType]" --output text | sort | egrep 't3\.|c5\.|c5d\.'
Service LoadBalancer Controller: AWS Load Balancer Controller + NLB IP mode, working with the AWS VPC CNI
(source: lecture team study materials)
Set up the EC2 instance profiles, deploy the AWS Load Balancer Controller, and install/deploy ExternalDNS (already configured here, so this is for reference only)
# Add the Policy (AWSLoadBalancerControllerIAMPolicy) to the master/worker node EC2 IAM Roles
## Create the IAM Policy: skip, since the IAM Policy was already created in week 2
curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.5/docs/install/iam_policy.json
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json
# Attach the IAM Policy to the EC2 instance profiles
aws iam attach-role-policy --policy-arn arn:aws:iam::$ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy --role-name masters.$KOPS_CLUSTER_NAME
aws iam attach-role-policy --policy-arn arn:aws:iam::$ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy --role-name nodes.$KOPS_CLUSTER_NAME
# Create the IAM Policy: skip, since the IAM Policy was already created in week 2
curl -s -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/AKOS/externaldns/externaldns-aws-r53-policy.json
aws iam create-policy --policy-name AllowExternalDNSUpdates --policy-document file://externaldns-aws-r53-policy.json
# Attach the IAM Policy to the EC2 instance profiles
aws iam attach-role-policy --policy-arn arn:aws:iam::$ACCOUNT_ID:policy/AllowExternalDNSUpdates --role-name masters.$KOPS_CLUSTER_NAME
aws iam attach-role-policy --policy-arn arn:aws:iam::$ACCOUNT_ID:policy/AllowExternalDNSUpdates --role-name nodes.$KOPS_CLUSTER_NAME
# Edit the kOps cluster: add the following
kops edit cluster
-----
spec:
  certManager:
    enabled: true
  awsLoadBalancerController:
    enabled: true
  externalDns:
    provider: external-dns
-----
# Apply the update
kops update cluster --yes && echo && sleep 3 && kops rolling-update cluster
Check the contents of ingress1.yaml.
It creates the "2048" game and its Ingress.
apiVersion: v1
kind: Namespace
metadata:
  name: game-2048
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: game-2048
  name: deployment-2048
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app-2048
  replicas: 2
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app-2048
    spec:
      containers:
      - image: public.ecr.aws/l6m2t8p7/docker-2048:latest
        imagePullPolicy: Always
        name: app-2048
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: game-2048
  name: service-2048
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: app-2048
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: game-2048
  name: ingress-2048
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-2048
            port:
              number: 80
# Verify creation
kubectl get-all -n game-2048
kubectl get ingress,svc,ep,pod -n game-2048
(sparkandassociates:default) [root@kops-ec2 pkos]# kubectl get targetgroupbindings -n game-2048
NAME SERVICE-NAME SERVICE-PORT TARGET-TYPE AGE
k8s-game2048-service2-c9310624f4 service-2048 80 ip 49s
# Check the Ingress
kubectl describe ingress -n game-2048 ingress-2048
(sparkandassociates:default) [root@kops-ec2 pkos]# kubectl describe ingress -n game-2048 ingress-2048
Name: ingress-2048
Labels: <none>
Namespace: game-2048
Address: k8s-game2048-ingress2-56100cdd1f-2017222223.ap-northeast-2.elb.amazonaws.com
Ingress Class: alb
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
*
/ service-2048:80 (172.30.61.112:80,172.30.93.48:80)
Annotations: alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfullyReconciled 67s ingress Successfully reconciled
# Access the game: open the ALB address in a web browser
kubectl get ingress -n game-2048 ingress-2048 -o jsonpath={.status.loadBalancer.ingress[0].hostname} | awk '{ print "Game URL = http://"$1 }'
kubectl logs -n game-2048 -l app.kubernetes.io/name=app-2048
(sparkandassociates:default) [root@kops-ec2 pkos]# kubectl get ingress -n game-2048 ingress-2048 -o jsonpath={.status.loadBalancer.ingress[0].hostname} | awk '{ print "Game URL = http://"$1 }'
Game URL = http://k8s-game2048-ingress2-56100cdd1f-2017222223.ap-northeast-2.elb.amazonaws.com
# Check the pod IPs
(sparkandassociates:default) [root@kops-ec2 pkos]# kubectl get pod -n game-2048 -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-2048-6bc9fd6bf5-9q9rj 1/1 Running 0 3m24s 172.30.61.112 i-0f94a3e2d9abe939f <none> <none>
deployment-2048-6bc9fd6bf5-q79sp 1/1 Running 0 3m24s 172.30.93.48 i-05538a0cedc2ceac8 <none> <none>
This can also be verified under EC2 > Target Groups.
(sparkandassociates:default) [root@kops-ec2 pkos]# kubectl scale deployment -n game-2048 deployment-2048 --replicas 3
deployment.apps/deployment-2048 scaled
Scaling the pods to 3 is reflected in the ALB almost instantly. (So cool...)
Conversely, scaling down to 1 pod immediately puts the target into draining in the target group. (soooooo cool)
# No separate DNS record is registered yet, so the ELB address is used.
(sparkandassociates:default) [root@kops-ec2 pkos]# k -n game-2048 get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-2048 alb * k8s-game2048-ingress2-56100cdd1f-2017222223.ap-northeast-2.elb.amazonaws.com 80 11m
# Add a host.
(sparkandassociates:default) [root@kops-ec2 pkos]# k -n game-2048 edit ingress ingress-2048
...
  ingressClassName: alb
  rules:
  - host: albweb.sparkandassociates.net
    http:
      paths:
      - backend:
...
Confirm access via the configured domain.
Additionally deploy the tetris and mario games
cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tetris
  labels:
    app: tetris
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tetris
  template:
    metadata:
      labels:
        app: tetris
    spec:
      containers:
      - name: tetris
        image: bsord/tetris
---
apiVersion: v1
kind: Service
metadata:
  name: tetris
spec:
  selector:
    app: tetris
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: NodePort
EOF
cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mario
  labels:
    app: mario
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mario
  template:
    metadata:
      labels:
        app: mario
    spec:
      containers:
      - name: mario
        image: pengbai/docker-supermario
---
apiVersion: v1
kind: Service
metadata:
  name: mario
spec:
  selector:
    app: mario
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  type: NodePort
  externalTrafficPolicy: Local
EOF
Create the Ingress
cat <<EOF | kubectl create -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-ps5
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
  - host: albps5.sparkandassociates.net
    http:
      paths:
      - path: /mario
        pathType: Prefix
        backend:
          service:
            name: mario
            port:
              number: 80
      - path: /tetris
        pathType: Prefix
        backend:
          service:
            name: tetris
            port:
              number: 80
EOF
In the EC2 security group, an 80-8080 rule is added with an elb/target-group-binding description attached.
Issue an SSL certificate and add a CNAME record so a domain-validated certificate can be used.
Configuring a certificate on the Ingress is simple:
put the cert ARN in as shown below,
and add the listen ports as an annotation.
(reference: https://guide.ncloud-docs.com/docs/k8s-k8suse-albingress)
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-northeast-2:7842695:certificate/57533b9f3
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
The certificate is up, but we still get a 404 error.
Check the tetris pod logs.
It is looking for the /usr/share/nginx/html/tetris path?
The intent was for a call to /tetris to reach the tetris endpoint on port 80 at path /, but the /tetris path is passed through to the tetris pod as-is.
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-northeast-2:784246164695:certificate/57533136-de19-4485-b686-1f85e604b9f3
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
# custom annotations (redirects, header versioning) (if any):
alb.ingress.kubernetes.io/actions.viewer-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "Path":"/", "Query": "#{query}", "StatusCode": "HTTP_301"}}'
annotations:
  alb.ingress.kubernetes.io/actions.mario: '{"Type":"redirect","RedirectConfig":{"Host":"mario.sparkandassociates.net","Port":"443","Protocol":"HTTPS","Query":"#{query}","Path":"/","StatusCode":"HTTP_301"}}'
  alb.ingress.kubernetes.io/actions.tetris: '{"Type":"redirect","RedirectConfig":{"Host":"tetris.sparkandassociates.net","Port":"443","Protocol":"HTTPS","Query":"#{query}","Path":"/","StatusCode":"HTTP_301"}}'
  alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-northeast-2:784246164695:certificate/57533136-de19-4485-b686-1f85e604b9f3
  alb.ingress.kubernetes.io/conditions.mario: |
    [{"field":"host-header","hostHeaderConfig":{"values":["mario.sparkandassociates.net"]}}]
  alb.ingress.kubernetes.io/conditions.tetris: |
    [{"field":"host-header","hostHeaderConfig":{"values":["tetris.sparkandassociates.net"]}}]
  alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/target-type: ip
  external-dns.alpha.kubernetes.io/hostname: mario.sparkandassociates.net,tetris.sparkandassociates.net
spec:
  ingressClassName: alb
  rules:
  - host: albps5.sparkandassociates.net
    http:
      paths:
      - backend:
          service:
            name: mario
            port:
              name: use-annotation
        path: /mario
        pathType: Prefix
      - backend:
          service:
            name: tetris
            port:
              name: use-annotation
        path: /tetris
        pathType: Prefix
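Since the action/condition annotation values are raw JSON strings, a typo silently breaks the rule. A quick sanity check is to parse the values (strings equivalent to the mario annotations above; this only verifies JSON validity and a couple of keys, not the controller's full schema):

```python
import json

# Parse the redirect action value and the host-header condition value.
action = json.loads(
    '{"Type":"redirect","RedirectConfig":{"Host":"mario.sparkandassociates.net",'
    '"Port":"443","Protocol":"HTTPS","Query":"#{query}","Path":"/","StatusCode":"HTTP_301"}}'
)
assert action["Type"] == "redirect"
assert action["RedirectConfig"]["StatusCode"] == "HTTP_301"

condition = json.loads(
    '[{"field":"host-header","hostHeaderConfig":{"values":["mario.sparkandassociates.net"]}}]'
)
assert condition[0]["field"] == "host-header"
print("annotations parse OK")
```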
(*Updated Mar 24)
ALB ingress, being an L7 load balancer, does not provide every feature.
There is no rewrite feature, so I tried to work around it with redirects, but it did not work properly, so I changed the approach. (If anyone knows a way, please let me know.)
Change the web root path inside each container instead: /mario, /tetris.
Rather than changing the container images themselves, use the pod lifecycle (postStart hook) to run a script after startup.
(lifecycle reference: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/ )
Modify the mario deployment as follows.
spec:
  containers:
  - image: pengbai/docker-supermario
    imagePullPolicy: Always
    name: mario
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "mkdir -p /usr/local/tomcat/webapps/mario && cp -R /usr/local/tomcat/webapps/ROOT/* /usr/local/tomcat/webapps/mario; mv /usr/local/tomcat/webapps/mario /usr/local/tomcat/webapps/ROOT/mario"]
Modify the tetris deployment in the same way
spec:
  containers:
  - image: bsord/tetris
    imagePullPolicy: Always
    name: tetris
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "mkdir -p /usr/share/nginx/tetris && cp -R /usr/share/nginx/html/* /usr/share/nginx/tetris; mv /usr/share/nginx/tetris /usr/share/nginx/html/tetris"]
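The postStart command can be dry-run locally to see the layout it produces, using a throwaway temp directory in place of /usr/share/nginx (an illustration only; the duplicate_webroot helper is hypothetical, not part of the manifest):

```python
import os
import shutil
import subprocess
import tempfile

def duplicate_webroot(root: str) -> str:
    """Run the same /bin/sh command as the hook; returns the duplicated file path."""
    html = os.path.join(root, "html")
    os.makedirs(html, exist_ok=True)
    with open(os.path.join(html, "index.html"), "w") as f:
        f.write("tetris")  # stand-in for the game's index page
    # Same shape as the postStart command, with the tmp dir substituted in.
    cmd = (f"mkdir -p {root}/tetris && cp -R {html}/* {root}/tetris; "
           f"mv {root}/tetris {html}/tetris")
    subprocess.run(["/bin/sh", "-c", cmd], check=True)
    return os.path.join(html, "tetris", "index.html")

root = tempfile.mkdtemp()
print(os.path.exists(duplicate_webroot(root)))  # True: content now also served at /tetris
shutil.rmtree(root)
```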
Verify
Delete the practice resources
Delete the kOps cluster & the AWS CloudFormation stack
kops delete cluster --yes && aws cloudformation delete-stack --stack-name mykops
DNS records are not deleted automatically,
so manually delete the DNS records created by ExternalDNS in Route 53.