To build an EKS upgrade environment, I worked through the hands-on labs in the Amazon EKS Workshop.
The Amazon EKS Workshop is a hands-on learning environment designed to make Amazon EKS (Elastic Kubernetes Service) easier to understand and use.
For this lab, a temporary AWS Upgrade Workshop account was provisioned with the help of 최영락 from the AEWS study group.
Many thanks to 최영락 and the AEWS organizers for providing the lab environment.

Kubernetes versions follow the x.y.z format, where x is the major version, y is the minor version, and z is the patch version, based on Semantic Versioning.
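For example, the control-plane version and each node's kubelet version can be compared as below (a minimal sketch; $EKS_CLUSTER_NAME is assumed to be exported, as in the workshop environment).
# Control plane minor version reported by EKS
aws eks describe-cluster --name $EKS_CLUSTER_NAME --query 'cluster.version' --output text
# Kubelet (node) versions, which may lag the control plane during an upgrade
kubectl get nodes -o custom-columns='NODE:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion'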
whoami
pwd
export
aws s3 ls
cat ~/.bashrc
aws eks describe-cluster --name $EKS_CLUSTER_NAME | jq
eksctl get cluster
eksctl get nodegroup --cluster $CLUSTER_NAME
eksctl get fargateprofile --cluster $CLUSTER_NAME
eksctl get addon --cluster $CLUSTER_NAME
kubectl get node --label-columns=eks.amazonaws.com/capacityType,node.kubernetes.io/lifecycle,karpenter.sh/capacity-type,eks.amazonaws.com/compute-type
kubectl get node -L eks.amazonaws.com/nodegroup,karpenter.sh/nodepool
kubectl get nodepools
kubectl get nodeclaims
kubectl get node --label-columns=node.kubernetes.io/instance-type,kubernetes.io/arch,kubernetes.io/os,topology.kubernetes.io/zone
kubectl get crd
helm list -A
kubectl get applications -n argocd
kubectl get pod -A
kubectl get pdb -A
kubectl get svc -n argocd argo-cd-argocd-server
kubectl get targetgroupbindings -n argocd
kubectl get nodes -o custom-columns='NODE:.metadata.name,TAINTS:.spec.taints[*].key,VALUES:.spec.taints[*].value,EFFECTS:.spec.taints[*].effect'
kubectl get nodes -o json | jq '.items[] | {name: .metadata.name, labels: .metadata.labels}'
kubectl get sts -A
kubectl get sc
kubectl get pv,pvc -A
aws eks list-access-entries --cluster-name $CLUSTER_NAME
eksctl get iamidentitymapping --cluster $CLUSTER_NAME
kubectl describe cm -n kube-system aws-auth
eksctl get iamserviceaccount --cluster $CLUSTER_NAME
kubectl describe sa -A | grep role-arn
aws eks describe-cluster --name $CLUSTER_NAME | jq -r .cluster.endpoint | cut -d '/' -f 3
helm repo add geek-cookbook https://geek-cookbook.github.io/charts/
helm repo update
helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 --namespace kube-system
#
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-type: external
  labels:
    app.kubernetes.io/instance: kube-ops-view
    app.kubernetes.io/name: kube-ops-view
  name: kube-ops-view-nlb
  namespace: kube-system
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app.kubernetes.io/instance: kube-ops-view
    app.kubernetes.io/name: kube-ops-view
EOF

# Install krew
(
set -x; cd "$(mktemp -d)" &&
OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
KREW="krew-${OS}_${ARCH}" &&
curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
tar zxvf "${KREW}.tar.gz" &&
./"${KREW}" install krew
)
# PATH
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
vi ~/.bashrc
-----------
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
-----------
# Install kubectl plugins
kubectl krew install ctx ns df-pv get-all neat stern oomd whoami rbac-tool rolesum
kubectl krew list
#
kubectl df-pv
#
kubectl whoami --all

wget -O eks-node-viewer https://github.com/awslabs/eks-node-viewer/releases/download/v0.7.1/eks-node-viewer_Linux_x86_64
chmod +x eks-node-viewer
sudo mv -v eks-node-viewer /usr/local/bin
# Verify the installation
eks-node-viewer

curl -sS https://webinstall.dev/k9s | bash

ls -lrt terraform/ 
aws s3 ls
terraform {
  backend "s3" {
    bucket = "${s3-url}"
    region = "us-west-2"
    key    = "terraform.tfstate"
  }
}
terraform state list
terraform output 

The sample application is a simple web store where customers can browse a catalog, add items to their cart, and complete orders through a checkout process.
The application is composed of the following components:

| Component | Description |
|---|---|
| UI | Serves the frontend user interface and aggregates API calls to the other services |
| Catalog | API that provides the product listing and details |
| Cart | API that provides the customer shopping cart |
| Checkout | API that orchestrates the checkout process |
| Orders | API that receives and processes customer orders |
| Static assets | Serves static assets such as the product catalog images |
Dockerfiles and public ECR repository details for the images - Link
All components are deployed to the EKS cluster through Argo CD.
An AWS CodeCommit repository is used as the GitOps repo and can be cloned into the IDE.
cd ~/environment
git clone codecommit::${REGION}://eks-gitops-repo
export ARGOCD_SERVER=$(kubectl get svc argo-cd-argocd-server -n argocd -o json | jq --raw-output '.status.loadBalancer.ingress[0].hostname')
echo "ArgoCD URL: http://${ARGOCD_SERVER}"
export ARGOCD_USER="admin"
export ARGOCD_PWD=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)
echo "Username: ${ARGOCD_USER}"
echo "Password: ${ARGOCD_PWD}"

cat << EOF > ~/environment/eks-gitops-repo/apps/ui/service-nlb.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-type: external
  labels:
    app.kubernetes.io/instance: ui
    app.kubernetes.io/name: ui
  name: ui-nlb
  namespace: ui
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app.kubernetes.io/instance: ui
    app.kubernetes.io/name: ui
EOF
cat << EOF > ~/environment/eks-gitops-repo/apps/ui/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ui
resources:
- namespace.yaml
- configMap.yaml
- serviceAccount.yaml
- service.yaml
- deployment.yaml
- hpa.yaml
- service-nlb.yaml
EOF
#
cd ~/environment/eks-gitops-repo/
git add apps/ui/service-nlb.yaml apps/ui/kustomization.yaml
git commit -m "Add to ui nlb"
git push
argocd app sync ui
...
#
# Check the UI access URL
kubectl get svc -n ui ui-nlb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' | awk '{ print "UI URL = http://"$1""}'



Source: https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#kubernetes-release-calendar
Source: https://aws.amazon.com/ko/blogs/containers/amazon-eks-extended-support-for-kubernetes-versions-pricing/
Source: AEWS study, 3rd cohort
- If the control plane is on 1.29, the nodes should also be on 1.29.
- Check the support period for each version with aws eks describe-cluster-versions.
- Scan the cluster for deprecated API usage with the kubent (Kube No Trouble) tool (see the sketch below).
Amazon EKS automates control plane upgrades, but identifying the resources and applications affected by an upgrade has traditionally been a manual task: you had to review the release notes for deprecated or removed Kubernetes APIs and then fix any applications still using them.
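The kubent scan mentioned above can be run roughly as follows (a sketch; the installer URL and the --target-version flag follow the upstream kubent project and may differ by release):
# Install kubent (Kube No Trouble)
sh -c "$(curl -sSL https://git.io/install-kubent)"
# Scan the cluster for APIs that are deprecated or removed in the target version
kubent --target-version 1.26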
To address this, the EKS Upgrade Insights feature was introduced. It automatically checks the cluster's audit logs for usage of deprecated or removed APIs and surfaces the findings for each target Kubernetes version in the EKS console and CLI.
By using Upgrade Insights, problems that could occur when upgrading to a newer Kubernetes version can be identified ahead of time and handled with minimal effort.
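For reference, the insights can also be pulled from the CLI (a minimal sketch; the list-insights and describe-insight subcommands require a reasonably recent AWS CLI, and INSIGHT_ID below is a placeholder):
# List upgrade insights for the cluster
aws eks list-insights --cluster-name $EKS_CLUSTER_NAME
# Drill into a single finding
aws eks describe-insight --cluster-name $EKS_CLUSTER_NAME --id <INSIGHT_ID>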
| Item | In-Place | Blue-Green |
|---|---|---|
| Rollback | Not possible | Immediately possible |
| Cost | Low | High (parallel clusters) |
| Complexity | Low (single cluster) | High (traffic switching/synchronization) |
| Best fit | Simple environments, stateful apps | Large or critical systems, skipping multiple versions |
Selection criteria:
There are several ways to upgrade (eksctl, the AWS Management Console, the AWS CLI, etc.), but today's lab performs the upgrade with Terraform.
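For reference, a sketch of the CLI alternatives mentioned above (not used in this lab; the target version and cluster name are illustrative):
# Upgrade the control plane with the AWS CLI
aws eks update-cluster-version --name $EKS_CLUSTER_NAME --kubernetes-version 1.26
# Or with eksctl
eksctl upgrade cluster --name $EKS_CLUSTER_NAME --version 1.26 --approve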
cd ~/environment/terraform
terraform state list 
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" | tr -s '[[:space:]]' '\n' | sort | uniq -c > 1.25.txt
while true; do curl -s $UI_WEB; date; aws eks describe-cluster --name eksworkshop-eksctl | egrep 'version|endpoint"|issuer|platformVersion'; echo ; sleep 2; echo; done
# Change the version from 1.25 to 1.26
variable "cluster_version" {
description = "EKS cluster version."
type = string
default = "1.26"
}
variable "mng_cluster_version" {
description = "EKS cluster mng version."
type = string
default = "1.26"
}
variable "ami_id" {
description = "EKS AMI ID for node groups"
type = string
default = ""
}terraform plan -no-color > plan-output.txt
# module.eks.module.eks_managed_node_group["initial"].aws_eks_node_group.this[0] will be updated in-place
~ resource "aws_eks_node_group" "this" {
id = ""
tags = {
"Blueprint" = "eksworkshop-eksctl"
"GithubRepo" = "github.com/aws-ia/terraform-aws-eks-blueprints"
"Name" = "initial"
"karpenter.sh/discovery" = "eksworkshop-eksctl"
}
~ version = "1.25" -> "1.26"
# (15 unchanged attributes hidden)
# (4 unchanged blocks hidden)
}
terraform apply -auto-approve
aws eks describe-cluster --name $EKS_CLUSTER_NAME | jq
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" | tr -s '[[:space:]]' '\n' | sort | uniq -c > 1.26.txt
diff 1.26.txt 1.25.txt 





kubectl get hpa -n ui ui -o yaml
## Result
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"autoscaling/v2beta2","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"labels":{"argocd.argoproj.io/instance":"ui"},"name":"ui","namespace":"ui"},"spec":{"maxReplicas":4,"minReplicas":1,"scaleTargetRef":{"apiVersion":"apps/v1","kind":"Deployment","name":"ui"},"targetCPUUtilizationPercentage":80}}
creationTimestamp: "2025-03-30T05:08:11Z"
labels:
argocd.argoproj.io/instance: ui
name: ui
namespace: ui
resourceVersion: "1341799"
uid: 43c15519-3f51-40cc-b2b6-e66caeeaa39
Each Kubernetes version has a set of compatible add-on versions.
The compatible versions can be checked with the commands below.
Note that the VPC CNI and EBS CSI Driver add-ons are already running their latest versions, so they do not need to be changed.
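The currently installed add-on versions can be double-checked as below (a sketch; vpc-cni and aws-ebs-csi-driver are the names EKS registers for these managed add-ons):
# Currently installed versions of the VPC CNI and EBS CSI Driver add-ons
aws eks describe-addon --cluster-name $EKS_CLUSTER_NAME --addon-name vpc-cni --query 'addon.addonVersion' --output text
aws eks describe-addon --cluster-name $EKS_CLUSTER_NAME --addon-name aws-ebs-csi-driver --query 'addon.addonVersion' --output text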
Query CoreDNS and kube-proxy versions:
aws eks describe-addon-versions --addon-name coredns --kubernetes-version 1.26 --output table \
--query "addons[].addonVersions[:10].{Version:addonVersion,DefaultVersion:compatibilities[0].defaultVersion}"
aws eks describe-addon-versions --addon-name kube-proxy --kubernetes-version 1.26 --output table \
--query "addons[].addonVersions[:10].{Version:addonVersion,DefaultVersion:compatibilities[0].defaultVersion}"
Edit addons.tf
eks_addons = {
  coredns = {
    version = "v1.9.3-eksbuild.22" # Recommended version for EKS 1.26
  }
  kube_proxy = {
    version = "v1.26.15-eksbuild.24" # Recommended version for EKS 1.26
  }
}
code
terraform plan -no-color | tee addon.txt
Result
# module.eks_blueprints_addons.aws_eks_addon.this["coredns"] will be updated in-place
~ resource "aws_eks_addon" "this" {
~ addon_version = "v1.8.7-eksbuild.10" -> "v1.9.3-eksbuild.22"
id = "eksworkshop-eksctl:coredns"
tags = {
"Blueprint" = "eksworkshop-eksctl"
"GithubRepo" = "github.com/aws-ia/terraform-aws-eks-blueprints"
}
# (11 unchanged attributes hidden)
# (1 unchanged block hidden)
}
# module.eks_blueprints_addons.aws_eks_addon.this["kube-proxy"] will be updated in-place
~ resource "aws_eks_addon" "this" {
~ addon_version = "v1.25.16-eksbuild.8" -> "v1.26.15-eksbuild.24"
id = "eksworkshop-eksctl:kube-proxy"
tags = {
"Blueprint" = "eksworkshop-eksctl"
"GithubRepo" = "github.com/aws-ia/terraform-aws-eks-blueprints"
}
# (11 unchanged attributes hidden)
# (1 unchanged block hidden)
}
terraform apply -auto-approve
kubectl get pod -n kube-system -l 'k8s-app in (kube-dns, kube-proxy)'
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" | tr -s '[[:space:]]' '\n' | sort | uniq -c
6 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:v1.19.3-eksbuild.1
6 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-network-policy-agent:v1.2.0-eksbuild.1
8 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/aws-ebs-csi-driver:v1.41.0
2 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/coredns:v1.9.3-eksbuild.22
2 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/csi-attacher:v4.8.1-eks-1-32-7
6 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/csi-node-driver-registrar:v2.13.0-eks-1-32-7
2 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/csi-provisioner:v5.2.0-eks-1-32-7
2 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/csi-resizer:v1.13.2-eks-1-32-7
2 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/csi-snapshotter:v8.2.1-eks-1-32-7
6 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/kube-proxy:v1.26.15-minimal-eksbuild.24
8 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/livenessprobe:v2.14.0-eks-1-32-7
8 amazon/aws-efs-csi-driver:v1.7.6
1 amazon/dynamodb-local:1.13.1
1 ghcr.io/dexidp/dex:v2.38.0
1 hjacobs/kube-ops-view:20.4.0
1 public.ecr.aws/aws-containers/retail-store-sample-assets:0.4.0
1 public.ecr.aws/aws-containers/retail-store-sample-cart:0.7.0
1 public.ecr.aws/aws-containers/retail-store-sample-catalog:0.4.0
1 public.ecr.aws/aws-containers/retail-store-sample-checkout:0.4.0
1 public.ecr.aws/aws-containers/retail-store-sample-orders:0.4.0
1 public.ecr.aws/aws-containers/retail-store-sample-ui:0.4.0
1 public.ecr.aws/bitnami/rabbitmq:3.11.1-debian-11-r0
2 public.ecr.aws/docker/library/mysql:8.0
1 public.ecr.aws/docker/library/redis:6.0-alpine
1 public.ecr.aws/docker/library/redis:7.0.15-alpine
2 public.ecr.aws/eks-distro/kubernetes-csi/external-provisioner:v3.6.3-eks-1-29-2
8 public.ecr.aws/eks-distro/kubernetes-csi/livenessprobe:v2.11.0-eks-1-29-2
6 public.ecr.aws/eks-distro/kubernetes-csi/node-driver-registrar:v2.9.3-eks-1-29-2
2 public.ecr.aws/eks/aws-load-balancer-controller:v2.7.1
2 public.ecr.aws/karpenter/controller:0.37.0@sha256:157f478f5db1fe999f5e2d27badcc742bf51cc470508b3cebe78224d0947674f
5 quay.io/argoproj/argocd:v2.10.0
1 registry.k8s.io/metrics-server/metrics-server:v0.7.0
Check the configuration of the two managed node groups in base.tf:
eks_managed_node_group_defaults = {
  cluster_version = var.mng_cluster_version
}
eks_managed_node_groups = {
  initial = {
    instance_types = ["m5.large", "m6a.large", "m6i.large"]
    min_size       = 2
    max_size       = 10
    desired_size   = 2
    update_config = {
      max_unavailable_percentage = 35
    }
  }
  blue-mng = {
    instance_types  = ["m5.large", "m6a.large", "m6i.large"]
    cluster_version = "1.25"
    min_size        = 1
    max_size        = 2
    desired_size    = 1
    update_config = {
      max_unavailable_percentage = 35
    }
    labels = {
      type = "OrdersMNG"
    }
    subnet_ids = [module.vpc.private_subnets[0]]
    taints = [
      {
        key    = "dedicated"
        value  = "OrdersApp"
        effect = "NO_SCHEDULE"
      }
    ]
  }
}
aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.25/amazon-linux-2/recommended/image_id \
--region $AWS_REGION --query "Parameter.Value" --output text
## Result
ami-xxxx
Edit variable.tf:
variable "ami_id" {
description = "EKS AMI ID for node groups"
type = string
default = "ami-xxx" # 조회 된 ami 추가
}
Add the following to base.tf:
custom = {
  instance_types = ["t3.medium"]
  min_size     = 1
  max_size     = 2
  desired_size = 1
  update_config = {
    max_unavailable_percentage = 35
  }
  ami_id                     = try(var.ami_id)
  enable_bootstrap_user_data = true
}
code
terraform apply -auto-approve
# Monitor
while true; do aws autoscaling describe-auto-scaling-groups --query 'AutoScalingGroups[*].AutoScalingGroupName' --output json | jq; echo ; kubectl get node -L eks.amazonaws.com/nodegroup; echo; date ; echo ; kubectl get node -L eks.amazonaws.com/nodegroup-image | grep ami; echo; sleep 1; echo; done
## Result
ip-10-0-13-56.us-west-2.compute.internal Ready <none> 80m v1.25.16-eks-59bf375 initial-2025033004585862020000002a
ip-10-0-26-122.us-west-2.compute.internal Ready <none> 81m v1.26.15-eks-59bf375 initial-2025033004585862020000002a
If the initial node group does not pin a version, it follows mng_cluster_version from variables.tf (mng_cluster_version is currently set to 1.25).
# base.tf
eks_managed_node_group_defaults = {
  cluster_version = var.mng_cluster_version
}
eks_managed_node_groups = {
  initial = {
    instance_types = ["m5.large", "m6a.large", "m6i.large"]
    min_size       = 2
    max_size       = 10
    desired_size   = 2
    update_config = {
      max_unavailable_percentage = 35
    }
  }
# variable.tf
variable "mng_cluster_version" {
description = "EKS cluster mng version."
type = string
default = "1.25"
}aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.26/amazon-linux-2/recommended/image_id \
--region $AWS_REGION --query "Parameter.Value" --output textvariable.tf 수정variable "mng_cluster_version" {
description = "EKS cluster mng version."
type = string
default = "1.26" # 1.25 -> 1.26 변경
}
# 다른 AMI으로 변경
variable "ami_id" {
description = "EKS AMI ID for node groups"
type = string
default = "ami-xxx" # 조회 된 ami 추가
}terraform 실행 : custom 또한 버전이 바뀌었음을 확인
code
terraform apply -auto-approve
# Monitor
while true; do aws autoscaling describe-auto-scaling-groups --query 'AutoScalingGroups[*].AutoScalingGroupName' --output json | jq; echo ; kubectl get node -L eks.amazonaws.com/nodegroup; echo; date ; echo ; kubectl get node -L eks.amazonaws.com/nodegroup-image | grep ami; echo; sleep 1; echo; done
## Result
ip-10-0-13-56.us-west-2.compute.internal Ready <none> 100m v1.26.15-eks-59bf375 initial-2025033004585862020000002a
ip-10-0-26-122.us-west-2.compute.internal Ready <none> 101m v1.26.15-eks-59bf375 initial-2025033004585862020000002a
NAME STATUS ROLES AGE VERSION NODEGROUP
ip-10-0-44-132.us-west-2.compute.internal Ready <none> 78s v1.26.15-eks-59bf375 custom-20250325154855579500000007
Now remove the custom node group that was created just for this exercise.
custom = {
  instance_types = ["t3.medium"]
  min_size     = 1
  max_size     = 2
  desired_size = 1
  update_config = {
    max_unavailable_percentage = 35
  }
  ami_id                     = try(var.ami_id)
  enable_bootstrap_user_data = true
}
Edit base.tf
blue-mng = {
  instance_types  = ["m5.large", "m6a.large", "m6i.large"]
  cluster_version = "1.25"
  min_size     = 1
  max_size     = 2
  desired_size = 1
  update_config = {
    max_unavailable_percentage = 35
  }
  labels = {
    type = "OrdersMNG"
  }
  subnet_ids = [module.vpc.private_subnets[0]] # This MNG runs in private subnet 1 (an EBS PV is in use)
  taints = [
    {
      key    = "dedicated"
      value  = "OrdersApp"
      effect = "NO_SCHEDULE"
    }
  ]
}
Check the Terraform state
code
terraform state show 'module.vpc.aws_subnet.private[0]'
terraform state show 'module.vpc.aws_subnet.private[1]'
terraform state show 'module.vpc.aws_subnet.private[2]'
Result
## Subnet 0
availability_zone = "us-west-2a"
## Subnet 1
availability_zone = "us-west-2b"
## Subnet 2
availability_zone = "us-west-2c"
base.tf → add a green-mng block below blue-mng
green-mng = {
  instance_types = ["m5.large", "m6a.large", "m6i.large"]
  subnet_ids     = [module.vpc.private_subnets[0]]
  min_size     = 1
  max_size     = 2
  desired_size = 1
  update_config = {
    max_unavailable_percentage = 35
  }
  labels = {
    type = "OrdersMNG"
  }
  taints = [
    {
      key    = "dedicated"
      value  = "OrdersApp"
      effect = "NO_SCHEDULE"
    }
  ]
}
terraform apply -auto-approve
# Check the nodes
kubectl get node -l type=OrdersMNG -o wide
## Result
ip-10-0-10-222.us-west-2.compute.internal Ready <none> 2d4h v1.25.16-eks-59bf375 10.0.10.222 <none> Amazon Linux 2 5.10.234-225.910.amzn2.x86_64 containerd://1.7.25
ip-10-0-8-175.us-west-2.compute.internal Ready <none> 104s v1.26.15-eks-59bf375 10.0.8.175 <none> Amazon Linux 2 5.10.234-225.910.amzn2.x86_64 containerd://1.7.25
# Check the AZs
kubectl get node -l type=OrdersMNG -L topology.kubernetes.io/zone
## Result
ip-10-0-10-222.us-west-2.compute.internal Ready <none> 2d4h v1.25.16-eks-59bf375 us-west-2a
ip-10-0-8-175.us-west-2.compute.internal Ready <none> 2m3s v1.26.15-eks-59bf375 us-west-2a
# Check NoSchedule taints
kubectl get nodes -l type=OrdersMNG -o jsonpath="{range .items[*]}{.metadata.name} {.spec.taints[?(@.effect=='NoSchedule')]}{\"\n\"}{end}"
## Result
ip-10-0-10-222.us-west-2.compute.internal {"effect":"NoSchedule","key":"dedicated","value":"OrdersApp"}
ip-10-0-8-175.us-west-2.compute.internal {"effect":"NoSchedule","key":"dedicated","value":"OrdersApp"}
export BLUE_MNG=$(aws eks list-nodegroups --cluster-name eksworkshop-eksctl | jq -c .[] | jq -r 'to_entries[] | select( .value| test("blue-mng*")) | .value')
echo $BLUE_MNG
# Result
blue-mng-2025033004585862560000002c
export GREEN_MNG=$(aws eks list-nodegroups --cluster-name eksworkshop-eksctl | jq -c .[] | jq -r 'to_entries[] | select( .value| test("green-mng*")) | .value')
echo $GREEN_MNG
# Result
green-mng-20250401094211402300000007
cd ~/environment/eks-gitops-repo/
sed -i 's/replicas: 1/replicas: 2/' apps/orders/deployment.yaml
git add apps/orders/deployment.yaml
git commit -m "Increase orders replicas 2"
git push
argocd app sync orders
Edit base.tf → remove the existing blue-mng block:
blue-mng = {
  instance_types  = ["m5.large", "m6a.large", "m6i.large"]
  cluster_version = "1.25"
  min_size     = 1
  max_size     = 2
  desired_size = 1
  update_config = {
    max_unavailable_percentage = 35
  }
  labels = {
    type = "OrdersMNG"
  }
  subnet_ids = [module.vpc.private_subnets[0]]
  taints = [
    {
      key    = "dedicated"
      value  = "OrdersApp"
      effect = "NO_SCHEDULE"
    }
  ]
}
cd ~/environment/terraform/
terraform plan && terraform apply -auto-approve NAME STATUS ROLES AGE VERSION
ip-10-0-10-222.us-west-2.compute.internal Ready,SchedulingDisabled <none> 2d5h v1.25.16-eks-59bf375
ip-10-0-8-175.us-west-2.compute.internal Ready <none> 35m v1.26.15-eks-59bf375
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
orders-5b97745747-85pzw 0/1 Running 0 4s 10.0.7.212 ip-10-0-8-175.us-west-2.compute.internal <none> <none>
orders-5b97745747-czxl6 1/1 Running 0 6m 10.0.6.149 ip-10-0-8-175.us-west-2.compute.internal <none> <none>
NAME READY UP-TO-DATE AVAILABLE AGE
orders 1/2 2 1 2d5h
Tue Apr 1 10:19:35 UTC 2025
NAME STATUS ROLES AGE VERSION
ip-10-0-10-222.us-west-2.compute.internal Ready,SchedulingDisabled <none> 2d5h v1.25.16-eks-59bf375
ip-10-0-8-175.us-west-2.compute.internal Ready <none> 36m v1.26.15-eks-59bf375
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
orders-5b97745747-85pzw 0/1 Running 2 (38s ago) 59s 10.0.7.212 ip-10-0-8-175.us-west-2.compute.internal <none> <none>
orders-5b97745747-czxl6 1/1 Running 0 6m55s 10.0.6.149 ip-10-0-8-175.us-west-2.compute.internal <none> <none>
NAME READY UP-TO-DATE AVAILABLE AGE
orders 1/2 2 1 2d5h
ip-10-0-8-175.us-west-2.compute.internal Ready <none> 37m v1.26.15-eks-59bf375
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
orders-5b97745747-85pzw 1/1 Running 2 (101s ago) 2m2s 10.0.7.212 ip-10-0-8-175.us-west-2.compute.internal <none> <none>
orders-5b97745747-czxl6 1/1 Running 0 7m58s 10.0.6.149 ip-10-0-8-175.us-west-2.compute.internal <none> <none>
NAME READY UP-TO-DATE AVAILABLE AGE
orders 2/2 2 2 2d5h
38m Normal Starting node/ip-10-0-8-175.us-west-2.compute.internal
2m46s Normal Starting node/ip-10-0-28-42.us-west-2.compute.internal
26m Normal Starting node/fargate-ip-10-0-18-197.us-west-2.compute.internal
38m Normal NodeAllocatableEnforced node/ip-10-0-8-175.us-west-2.compute.internal Updated Node Allocatable limit across pods
38m Normal Synced node/ip-10-0-8-175.us-west-2.compute.internal Node synced successfully
38m Normal Starting node/ip-10-0-8-175.us-west-2.compute.internal Starting kubelet.
38m Warning InvalidDiskCapacity node/ip-10-0-8-175.us-west-2.compute.internal invalid capacity 0 on image filesystem
38m Normal NodeHasSufficientMemory node/ip-10-0-8-175.us-west-2.compute.internal Node ip-10-0-8-175.us-west-2.compute.internal status is now: NodeHasSufficientMemory
38m Normal NodeHasNoDiskPressure node/ip-10-0-8-175.us-west-2.compute.internal Node ip-10-0-8-175.us-west-2.compute.internal status is now: NodeHasNoDiskPressure
38m Normal NodeHasSufficientPID node/ip-10-0-8-175.us-west-2.compute.internal Node ip-10-0-8-175.us-west-2.compute.internal status is now: NodeHasSufficientPID
38m Normal RegisteredNode node/ip-10-0-8-175.us-west-2.compute.internal Node ip-10-0-8-175.us-west-2.compute.internal event: Registered Node ip-10-0-8-175.us-west-2.compute.internal in Controller
38m Normal NodeReady node/ip-10-0-8-175.us-west-2.compute.internal Node ip-10-0-8-175.us-west-2.compute.internal status is now: NodeReady
26m Normal Starting node/fargate-ip-10-0-18-197.us-west-2.compute.internal Starting kubelet.
26m Warning InvalidDiskCapacity node/fargate-ip-10-0-18-197.us-west-2.compute.internal invalid capacity 0 on image filesystem
26m Normal NodeHasSufficientMemory node/fargate-ip-10-0-18-197.us-west-2.compute.internal Node fargate-ip-10-0-18-197.us-west-2.compute.internal status is now: NodeHasSufficientMemory
26m Normal NodeHasNoDiskPressure node/fargate-ip-10-0-18-197.us-west-2.compute.internal Node fargate-ip-10-0-18-197.us-west-2.compute.internal status is now: NodeHasNoDiskPressure
26m Normal NodeAllocatableEnforced node/fargate-ip-10-0-18-197.us-west-2.compute.internal Updated Node Allocatable limit across pods
26m Normal NodeHasSufficientPID node/fargate-ip-10-0-18-197.us-west-2.compute.internal Node fargate-ip-10-0-18-197.us-west-2.compute.internal status is now: NodeHasSufficientPID
26m Normal RegisteredNode node/fargate-ip-10-0-18-197.us-west-2.compute.internal Node fargate-ip-10-0-18-197.us-west-2.compute.internal event: Registered Node fargate-ip-10-0-18-197.us-west-2.compute.internal in Controller
26m Normal Synced node/fargate-ip-10-0-18-197.us-west-2.compute.internal Node synced successfully
26m Normal NodeReady node/fargate-ip-10-0-18-197.us-west-2.compute.internal Node fargate-ip-10-0-18-197.us-west-2.compute.internal status is now: NodeReady
26m Normal RemovingNode node/fargate-ip-10-0-6-242.us-west-2.compute.internal Node fargate-ip-10-0-6-242.us-west-2.compute.internal event: Removing Node fargate-ip-10-0-6-242.us-west-2.compute.internal from Controller
3m5s Normal NodeNotSchedulable node/ip-10-0-10-222.us-west-2.compute.internal Node ip-10-0-10-222.us-west-2.compute.internal status is now: NodeNotSchedulable
2m52s Normal NodeHasNoDiskPressure node/ip-10-0-28-42.us-west-2.compute.internal Node ip-10-0-28-42.us-west-2.compute.internal status is now: NodeHasNoDiskPressure
2m52s Normal NodeHasSufficientPID node/ip-10-0-28-42.us-west-2.compute.internal Node ip-10-0-28-42.us-west-2.compute.internal status is now: NodeHasSufficientPID
2m52s Normal NodeAllocatableEnforced node/ip-10-0-28-42.us-west-2.compute.internal Updated Node Allocatable limit across pods
2m52s Normal NodeHasSufficientMemory node/ip-10-0-28-42.us-west-2.compute.internal Node ip-10-0-28-42.us-west-2.compute.internal status is now: NodeHasSufficientMemory
2m52s Warning InvalidDiskCapacity node/ip-10-0-28-42.us-west-2.compute.internal invalid capacity 0 on image filesystem
2m52s Normal Starting node/ip-10-0-28-42.us-west-2.compute.internal Starting kubelet.
2m51s Normal Synced node/ip-10-0-28-42.us-west-2.compute.internal Node synced successfully
2m47s Normal RegisteredNode node/ip-10-0-28-42.us-west-2.compute.internal Node ip-10-0-28-42.us-west-2.compute.internal event: Registered Node ip-10-0-28-42.us-west-2.compute.internal in Controller
2m35s Normal NodeReady node/ip-10-0-28-42.us-west-2.compute.internal Node ip-10-0-28-42.us-west-2.compute.internal status is now: NodeReady
76s Normal NodeNotReady node/ip-10-0-10-222.us-west-2.compute.internal Node ip-10-0-10-222.us-west-2.compute.internal status is now: NodeNotReady
73s Normal DeletingNode node/ip-10-0-10-222.us-west-2.compute.internal Deleting node ip-10-0-10-222.us-west-2.compute.internal because it does not exist in the cloud provider
71s Normal RemovingNode node/ip-10-0-10-222.us-west-2.compute.internal Node ip-10-0-10-222.us-west-2.compute.internal event: Removing Node ip-10-0-10-222.us-west-2.compute.internal from Controller
70s Normal Unconsolidatable node/ip-10-0-20-192.us-west-2.compute.internal SpotToSpotConsolidation is disabled, can't replace a spot node with a spot node
70s Normal Unconsolidatable nodeclaim/default-6v8kl SpotToSpotConsolidation is disabled, can't replace a spot node with a spot node
Karpenter is a Kubernetes cluster autoscaler: it detects unschedulable pods and dynamically provisions right-sized nodes, simplifying infrastructure management.
The amiSelectorTerms setting needs to be configured.
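The AMI that Karpenter nodes currently use can be inspected as below (a sketch; the EC2NodeClass name default and the NodeClaim status field imageID follow the workshop's Karpenter setup and API version, so adjust if they differ):
# amiSelectorTerms currently configured on the default EC2NodeClass
kubectl get ec2nodeclass default -o jsonpath='{.spec.amiSelectorTerms}{"\n"}'
# AMI actually reported by each Karpenter-provisioned node
kubectl get nodeclaims -o custom-columns='NAME:.metadata.name,IMAGE:.status.imageID'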
kubectl get nodes -l team=checkout
## Result
NAME STATUS ROLES AGE VERSION
ip-10-0-20-192.us-west-2.compute.internal Ready <none> 2d5h v1.25.16-eks-59bf375
kubectl get nodes -l team=checkout -o jsonpath="{range .items[*]}{.metadata.name} {.spec.taints[?(@.effect=='NoSchedule')]}{\"\n\"}{end}"
## Result
kubectl get pods -n checkout -o wide
## Result
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
checkout-558f7777c-5tpt9 1/1 Running 0 2d5h 10.0.25.210 ip-10-0-20-192.us-west-2.compute.internal <none> <none>
checkout-redis-f54bf7cb5-khx6c 1/1 Running 0 2d5h 10.0.26.220 ip-10-0-20-192.us-west-2.compute.internal <none> <none>
while true; do kubectl get nodeclaim; echo ; kubectl get nodes -l team=checkout; echo ; kubectl get nodes -l team=checkout -o jsonpath="{range .items[*]}{.metadata.name} {.spec.taints}{\"\n\"}{end}"; echo ; kubectl get pods -n checkout -o wide; echo ; date; sleep 1; echo; done
In the eks-gitops-repo/apps/checkout folder, edit deployment.yaml and change replicas: 15 # 1 -> 15
cd ~/environment/eks-gitops-repo
git add apps/checkout/deployment.yaml
git commit -m "scale checkout app"
git push --set-upstream origin main
argocd app sync checkout
kubectl get nodes -l team=checkout
NAME STATUS ROLES AGE VERSION
ip-10-0-20-192.us-west-2.compute.internal Ready <none> 2d5h v1.25.16-eks-59bf375
ip-10-0-6-135.us-west-2.compute.internal Ready <none> 29s v1.25.16-eks-59bf375
aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.26/amazon-linux-2/recommended/image_id \
--region ${AWS_REGION} --query "Parameter.Value" --output text
# Result
ami-086414611b43bb691
Edit default-ec2nc.yaml:
- id: "ami-086414611b43bb691" # apply the 1.26 AMI
Edit default-np.yaml to roll nodes one at a time:
spec:
  disruption:
    consolidationPolicy: WhenUnderutilized
    expireAfter: Never
    budgets: # add this block
      - nodes: "1"
while true; do kubectl get nodeclaim; echo ; kubectl get nodes -l team=checkout; echo ; kubectl get nodes -l team=checkout -o jsonpath="{range .items[*]}{.metadata.name} {.spec.taints}{\"\n\"}{end}"; echo ; kubectl get pods -n checkout -o wide; echo ; date; sleep 1; echo; done
cd ~/environment/eks-gitops-repo
git add apps/karpenter/default-ec2nc.yaml apps/karpenter/default-np.yaml
git commit -m "disruption changes"
git push --set-upstream origin main
argocd app sync karpenter
kubectl stern -n karpenter deployment/karpenter -c controller
NAME TYPE ZONE NODE READY AGE
default-5bhtl c5.large us-west-2a ip-10-0-6-135.us-west-2.compute.internal True 8m31s
default-6v8kl r4.large us-west-2b ip-10-0-20-192.us-west-2.compute.internal True 2d5h
default-w59p5 c4.large us-west-2b ip-10-0-22-247.us-west-2.compute.internal True 56s
NAME STATUS ROLES AGE VERSION
ip-10-0-20-192.us-west-2.compute.internal Ready <none> 2d5h v1.25.16-eks-59bf375
ip-10-0-22-247.us-west-2.compute.internal Ready <none> 24s v1.26.15-eks-59bf375
ip-10-0-6-135.us-west-2.compute.internal Ready <none> 8m7s v1.25.16-eks-59bf375
ip-10-0-20-192.us-west-2.compute.internal [{"effect":"NoSchedule","key":"dedicated","value":"CheckoutApp"},{"effect":"NoSchedule","key":"karpenter.sh/disruption","value":"disrupting"}]
ip-10-0-22-247.us-west-2.compute.internal [{"effect":"NoSchedule","key":"dedicated","value":"CheckoutApp"}]
ip-10-0-6-135.us-west-2.compute.internal [{"effect":"NoSchedule","key":"dedicated","value":"CheckoutApp"}]
NAME TYPE ZONE NODE READY AGE
default-w59p5 c4.large us-west-2b ip-10-0-22-247.us-west-2.compute.internal True 3m54s
default-wtkwt c5.large us-west-2a ip-10-0-0-170.us-west-2.compute.internal True 2m40s
NAME STATUS ROLES AGE VERSION
ip-10-0-0-170.us-west-2.compute.internal Ready <none> 2m5s v1.26.15-eks-59bf375
ip-10-0-22-247.us-west-2.compute.internal Ready <none> 3m21s v1.26.15-eks-59bf375
ip-10-0-0-170.us-west-2.compute.internal [{"effect":"NoSchedule","key":"dedicated","value":"CheckoutApp"}]
ip-10-0-22-247.us-west-2.compute.internal [{"effect":"NoSchedule","key":"dedicated","value":"CheckoutApp"}]
kubectl get nodes --show-labels | grep self-managed
## Result
ip-10-0-10-134.us-west-2.compute.internal Ready <none> 2d4h v1.25.16-eks-59bf375 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m5.large,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-west-2,failure-domain.beta.kubernetes.io/zone=us-west-2a,k8s.io/cloud-provider-aws=a94967527effcefb5f5829f529c0a1b9,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-10-0-10-134.us-west-2.compute.internal,kubernetes.io/os=linux,node.kubernetes.io/instance-type=m5.large,node.kubernetes.io/lifecycle=self-managed,team=carts,topology.ebs.csi.aws.com/zone=us-west-2a,topology.kubernetes.io/region=us-west-2,topology.kubernetes.io/zone=us-west-2a
ip-10-0-46-171.us-west-2.compute.internal Ready <none> 2d4h v1.25.16-eks-59bf375 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m5.large,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-west-2,failure-domain.beta.kubernetes.io/zone=us-west-2c,k8s.io/cloud-provider-aws=a94967527effcefb5f5829f529c0a1b9,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-10-0-46-171.us-west-2.compute.internal,kubernetes.io/os=linux,node.kubernetes.io/instance-type=m5.large,node.kubernetes.io/lifecycle=self-managed,team=carts,topology.ebs.csi.aws.com/zone=us-west-2c,topology.kubernetes.io/region=us-west-2,topology.kubernetes.io/zone=us-west-2c
kubectl get pods -n carts -o wide
## Result
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
carts-7ddbc698d8-vs95q 1/1 Running 1 (2d4h ago) 2d4h 10.0.47.8 ip-10-0-46-171.us-west-2.compute.internal <none> <none>
carts-dynamodb-6594f86bb9-8h4k9 1/1 Running 0 2d4h 10.0.13.242 ip-10-0-10-134.us-west-2.compute.internal <none> <none>
# Look up the AMI
aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.26/amazon-linux-2/recommended/image_id --region $AWS_REGION --query "Parameter.Value" --output text
ami-xxxx
self_managed_node_groups = {
  default-selfmng = {
    instance_type = "m5.large"
    min_size     = 1
    max_size     = 2
    desired_size = 2
    # Additional configurations
    ami_id    = "ami-086414611b43bb691" # change the AMI
    disk_size = 100
    # Optional
    bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=self-managed,team=carts'"
    # Required for self-managed node groups
    create_launch_template          = true
    launch_template_use_name_prefix = true
  }
}
# Monitor
while true; do kubectl get nodes -l node.kubernetes.io/lifecycle=self-managed; echo ; aws ec2 describe-instances --query "Reservations[*].Instances[*].[Tags[?Key=='Name'].Value | [0], ImageId]" --filters "Name=tag:Name,Values=default-selfmng" --output table; echo ; date; sleep 1; echo; done
#
cd ~/environment/terraform/
terraform apply -auto-approve
kubectl get nodes -l node.kubernetes.io/lifecycle=self-managed
## Result
aws ec2 describe-instances --query "Reservations[*].Instances[*].[Tags[?Key=='Name'].Value | [0], ImageId]" --filters "Name=tag:Name,Values=default-selfmng" --output table
## Result
kubectl get pods -n assets -o wide
## Result
assets-7ccc84cb4d-qzfsj 1/1 Running 0 2d4h 10.0.6.242 fargate-ip-10-0-6-242.us-west-2.compute.internal <none> <none>
kubectl get node $(kubectl get pods -n assets -o jsonpath='{.items[0].spec.nodeName}') -o wide
## Result
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
fargate-ip-10-0-6-242.us-west-2.compute.internal Ready <none> 2d4h v1.25.16-eks-2d5f260 10.0.6.242 <none> Amazon Linux 2 5.10.234-225.910.amzn2.x86_64 containerd://1.7.25
# Restart the deployment
kubectl rollout restart deployment assets -n assets
# Wait for the new assets pod to become ready
kubectl wait --for=condition=Ready pods --all -n assets --timeout=180s
kubectl get pods -n assets -o wide
## Result
assets-dd595ff54-7z54b 1/1 Running 0 94s 10.0.18.197 fargate-ip-10-0-18-197.us-west-2.compute.internal <none> <none>
kubectl get node $(kubectl get pods -n assets -o jsonpath='{.items[0].spec.nodeName}') -o wide
## Result
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
fargate-ip-10-0-18-197.us-west-2.compute.internal Ready <none> 70s v1.26.15-eks-2d5f260 10.0.18.197 <none> Amazon Linux 2 5.10.234-225.910.amzn2.x86_64 containerd://1.7.25
kubectl get node
## Result
fargate-ip-10-0-18-197.us-west-2.compute.internal Ready <none> 49m v1.26.15-eks-2d5f260
ip-10-0-0-170.us-west-2.compute.internal Ready <none> 3m31s v1.26.15-eks-59bf375
ip-10-0-13-56.us-west-2.compute.internal Ready <none> 3h10m v1.26.15-eks-59bf375
ip-10-0-22-247.us-west-2.compute.internal Ready <none> 4m47s v1.26.15-eks-59bf375
ip-10-0-26-122.us-west-2.compute.internal Ready <none> 3h11m v1.26.15-eks-59bf375
ip-10-0-28-42.us-west-2.compute.internal Ready <none> 25m v1.26.15-eks-59bf375
ip-10-0-37-112.us-west-2.compute.internal Ready <none> 20m v1.26.15-eks-59bf375
ip-10-0-8-175.us-west-2.compute.internal Ready <none> 61m v1.26.15-eks-59bf375
kubectl get node --label-columns=eks.amazonaws.com/capacityType,node.kubernetes.io/lifecycle,karpenter.sh/capacity-type,eks.amazonaws.com/compute-type
## Result
NAME STATUS ROLES AGE VERSION CAPACITYTYPE LIFECYCLE CAPACITY-TYPE COMPUTE-TYPE
fargate-ip-10-0-18-197.us-west-2.compute.internal Ready <none> 50m v1.26.15-eks-2d5f260 fargate
ip-10-0-0-170.us-west-2.compute.internal Ready <none> 4m42s v1.26.15-eks-59bf375 spot
ip-10-0-13-56.us-west-2.compute.internal Ready <none> 3h11m v1.26.15-eks-59bf375 ON_DEMAND
ip-10-0-22-247.us-west-2.compute.internal Ready <none> 5m58s v1.26.15-eks-59bf375 spot
ip-10-0-26-122.us-west-2.compute.internal Ready <none> 3h13m v1.26.15-eks-59bf375 ON_DEMAND
ip-10-0-28-42.us-west-2.compute.internal Ready <none> 26m v1.26.15-eks-59bf375 self-managed
ip-10-0-37-112.us-west-2.compute.internal Ready <none> 21m v1.26.15-eks-59bf375 self-managed
ip-10-0-8-175.us-west-2.compute.internal Ready <none> 62m v1.26.15-eks-59bf375 ON_DEMAND
kubectl get node -L eks.amazonaws.com/nodegroup,karpenter.sh/nodepool
## Result
fargate-ip-10-0-18-197.us-west-2.compute.internal Ready <none> 51m v1.26.15-eks-2d5f260
ip-10-0-0-170.us-west-2.compute.internal Ready <none> 5m9s v1.26.15-eks-59bf375 default
ip-10-0-13-56.us-west-2.compute.internal Ready <none> 3h12m v1.26.15-eks-59bf375 initial-2025033004585862020000002a
ip-10-0-22-247.us-west-2.compute.internal Ready <none> 6m25s v1.26.15-eks-59bf375 default
ip-10-0-26-122.us-west-2.compute.internal Ready <none> 3h13m v1.26.15-eks-59bf375 initial-2025033004585862020000002a
ip-10-0-28-42.us-west-2.compute.internal Ready <none> 27m v1.26.15-eks-59bf375
ip-10-0-37-112.us-west-2.compute.internal Ready <none> 21m v1.26.15-eks-59bf375
ip-10-0-8-175.us-west-2.compute.internal Ready <none> 62m v1.26.15-eks-59bf375 green-mng-20250401094211402300000007
kubectl get node --label-columns=node.kubernetes.io/instance-type,kubernetes.io/arch,kubernetes.io/os,topology.kubernetes.io/zone
## Result
fargate-ip-10-0-18-197.us-west-2.compute.internal Ready <none> 51m v1.26.15-eks-2d5f260 amd64 linux us-west-2b
ip-10-0-0-170.us-west-2.compute.internal Ready <none> 5m30s v1.26.15-eks-59bf375 c5.large amd64 linux us-west-2a
ip-10-0-13-56.us-west-2.compute.internal Ready <none> 3h12m v1.26.15-eks-59bf375 m5.large amd64 linux us-west-2a
ip-10-0-22-247.us-west-2.compute.internal Ready <none> 6m46s v1.26.15-eks-59bf375 c4.large amd64 linux us-west-2b
ip-10-0-26-122.us-west-2.compute.internal Ready <none> 3h13m v1.26.15-eks-59bf375 m5.large amd64 linux us-west-2b
ip-10-0-28-42.us-west-2.compute.internal Ready <none> 27m v1.26.15-eks-59bf375 m5.large amd64 linux us-west-2b
ip-10-0-37-112.us-west-2.compute.internal Ready <none> 22m v1.26.15-eks-59bf375 m5.large amd64 linux us-west-2c
ip-10-0-8-175.us-west-2.compute.internal Ready <none> 63m v1.26.15-eks-59bf375 m5.large amd64 linux us-west-2a