Today, let's dig into security, the topic of the fifth and final week of the PKOS study!
We will practice using AWS services from inside a Pod by abusing the IAM token exposed through the EC2 instance metadata.
EC2 provides instance metadata that can be queried and used from within the instance,
and it can be checked via "http://169.254.169.254/latest/meta-data/".
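The lookup can also be scripted. Here is a minimal, stdlib-only Python sketch (the helper names are mine; the endpoint answers only from inside an EC2 instance, so the fetch would fail anywhere else):

```python
import urllib.request

IMDS_BASE = "http://169.254.169.254/latest/meta-data/"

def metadata_url(path: str) -> str:
    """Build the full IMDS URL for a metadata key such as 'instance-id'."""
    return IMDS_BASE + path.lstrip("/")

def fetch_metadata(path: str, timeout: float = 2.0) -> str:
    """Query the instance metadata service; only works on an EC2 instance."""
    with urllib.request.urlopen(metadata_url(path), timeout=timeout) as resp:
        return resp.read().decode()

# On an EC2 instance: print(fetch_metadata("instance-id"))
```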
curl 169.254.169.254/latest/meta-data/
user-data
(sparkandassociates:harbor) [root@kops-ec2 ~]# curl 169.254.169.254/latest/meta-data
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
events/
hostname
identity-credentials/
instance-action
instance-id
instance-life-cycle
instance-type
local-hostname
local-ipv4
mac
metrics/
network/
placement/
profile
public-hostname
public-ipv4
public-keys/
reservation-id
security-groups
services/
For reference, instances in OpenStack can use the same metadata server.
root@y-1:~# curl 169.254.169.254/latest/meta-data
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
local-hostname
local-ipv4
placement/
public-hostname
public-ipv4
public-keys/
reservation-id
security-groups
root@y-1:~#
Inside a pod, however, this query is blocked by default: the nodes require IMDSv2 tokens with a hop limit of 1, so the metadata response cannot cross the extra network hop into a pod.
# Create the netshoot pods
cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: netshoot-pod
spec:
replicas: 2
selector:
matchLabels:
app: netshoot-pod
template:
metadata:
labels:
app: netshoot-pod
spec:
containers:
- name: netshoot-pod
image: nicolaka/netshoot
command: ["tail"]
args: ["-f", "/dev/null"]
terminationGracePeriodSeconds: 0
EOF
# Assign the pod names to variables
PODNAME1=$(kubectl get pod -l app=netshoot-pod -o jsonpath={.items[0].metadata.name})
PODNAME2=$(kubectl get pod -l app=netshoot-pod -o jsonpath={.items[1].metadata.name})
# Check the EC2 metadata
(sparkandassociates:harbor) [root@kops-ec2 ~]# kubectl exec -it $PODNAME1 -- curl 169.254.169.254 ;echo
(sparkandassociates:harbor) [root@kops-ec2 ~]# kubectl exec -it $PODNAME2 -- curl 169.254.169.254 ;echo
Let's remove the EC2 metadata protection on one worker node and try again.
(Of the two worker node groups, nodes-ap-northeast-2a and nodes-ap-northeast-2c, apply this only to the first worker.)
# Edit the instance group
kops edit ig nodes-ap-northeast-2a
---
# Remove the three lines below
spec:
instanceMetadata:
httpPutResponseHopLimit: 1
httpTokens: required
---
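For context, the removed lines are exactly what blocked the pods: httpTokens: required enforces IMDSv2, where every read must first obtain a session token via an HTTP PUT, and httpPutResponseHopLimit: 1 caps the TTL of the token response at one network hop, so it never reaches a pod. A stdlib-only Python sketch of the two-step IMDSv2 flow (helper names are mine; the actual calls succeed only on an EC2 instance):

```python
import urllib.request

IMDS = "http://169.254.169.254"

def token_request(ttl_seconds: int = 21600) -> urllib.request.Request:
    """Build the IMDSv2 PUT request that obtains a session token."""
    return urllib.request.Request(
        IMDS + "/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )

def metadata_request(path: str, token: str) -> urllib.request.Request:
    """Build a metadata GET request that presents the session token."""
    return urllib.request.Request(
        IMDS + "/latest/meta-data/" + path.lstrip("/"),
        headers={"X-aws-ec2-metadata-token": token},
    )

def fetch(path: str) -> str:
    """Run the two-step flow: PUT for a token, then GET with it."""
    with urllib.request.urlopen(token_request(), timeout=2) as resp:
        token = resp.read().decode()
    with urllib.request.urlopen(metadata_request(path, token), timeout=2) as resp:
        return resp.read().decode()
```

With the hop limit at 1, the PUT response is dropped before it reaches a pod, so the second step can never start; raising the hop limit or disabling the token requirement (as done above) re-opens the door.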
# Apply the update: the node is rolling-updated
kops update cluster --yes && echo && sleep 3 && kops rolling-update cluster --yes
..
..
Detected single-control-plane cluster; won't detach before draining
NAME STATUS NEEDUPDATE READY MIN TARGET MAX NODES
control-plane-ap-northeast-2a Ready 0 1 1 1 1 1
nodes-ap-northeast-2a NeedsUpdate 1 0 1 1 1 1
nodes-ap-northeast-2c Ready 0 1 1 1 1 1
..
..
I0403 10:40:08.777396 23960 instancegroups.go:467] waiting for 15s after terminating instance
I0403 10:40:23.779129 23960 instancegroups.go:501] Validating the cluster.
I0403 10:40:24.601875 23960 instancegroups.go:540] Cluster validated; revalidating in 10s to make sure it does not flap.
I0403 10:40:35.248158 23960 instancegroups.go:537] Cluster validated.
I0403 10:40:35.248192 23960 rollingupdate.go:234] Rolling update completed for cluster "sparkandassociates.net"!
Now let's check the EC2 metadata again from pods 1 and 2.
(sparkandassociates:harbor) [root@kops-ec2 ~]# kops get instances
ID NODE-NAME STATUS ROLES STATE INTERNAL-IP INSTANCE-GROUP MACHINE-TYPE
i-066cf2f8937746e50 i-066cf2f8937746e50 UpToDate node 172.30.83.26 nodes-ap-northeast-2c.sparkandassociates.net c5a.2xlarge
i-0c421069027ec2d2d i-0c421069027ec2d2d UpToDate node 172.30.34.113 nodes-ap-northeast-2a.sparkandassociates.net c5a.2xlarge
i-0d3de3051f46d267d i-0d3de3051f46d267d UpToDate control-plane 172.30.55.185 control-plane-ap-northeast-2a.masters.sparkandassociates.net c5a.2xlarge
(sparkandassociates:harbor) [root@kops-ec2 ~]# kubectl get pod -l app=netshoot-pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
netshoot-pod-7757d5dd99-qhgjv 1/1 Running 0 8m1s 172.30.49.190 i-0c421069027ec2d2d <none> <none>
netshoot-pod-7757d5dd99-x5lts 1/1 Running 0 17m 172.30.83.71 i-066cf2f8937746e50 <none> <none>
(sparkandassociates:harbor) [root@kops-ec2 ~]# k get nodes i-0c421069027ec2d2d -o yaml | grep topology.kubernetes.io/zone
topology.kubernetes.io/zone: ap-northeast-2a
(sparkandassociates:harbor) [root@kops-ec2 ~]# k get nodes i-066cf2f8937746e50 -o yaml | grep topology.kubernetes.io/zone
topology.kubernetes.io/zone: ap-northeast-2c
(sparkandassociates:harbor) [root@kops-ec2 ~]#
# From the pod deployed on node i-0c421069027ec2d2d, where the EC2 metadata protection was removed, the EC2 metadata can now be queried!!
(sparkandassociates:harbor) [root@kops-ec2 ~]# kubectl exec -it $PODNAME1 -- curl 169.254.169.254 ;echo
1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04
2011-01-01
2011-05-01
2012-01-12
2014-02-25
2014-11-05
2015-10-20
2016-04-19
2016-06-30
2016-09-02
2018-03-28
2018-08-17
2018-09-24
2019-10-01
2020-10-27
2021-01-03
2021-03-23
2021-07-15
2022-09-24
latest
## From the netshoot pod deployed on node i-066cf2f8937746e50, the query still fails.
(sparkandassociates:harbor) [root@kops-ec2 ~]# kubectl exec -it $PODNAME2 -- curl 169.254.169.254 ;echo
(sparkandassociates:harbor) [root@kops-ec2 ~]#
## Now let's obtain the token information from pod 1.
(sparkandassociates:harbor) [root@kops-ec2 ~]# kubectl exec -it $PODNAME1 -- curl 169.254.169.254/latest/meta-data/iam/security-credentials/ ;echo
nodes.sparkandassociates.net
(sparkandassociates:harbor) [root@kops-ec2 ~]# kubectl exec -it $PODNAME1 -- curl 169.254.169.254/latest/meta-data/iam/security-credentials/nodes.$KOPS_CLUSTER_NAME | jq
{
"Code": "Success",
"LastUpdated": "2023-04-03T01:40:01Z",
"Type": "AWS-HMAC",
"AccessKeyId": "ASIA3NGF...UUFH6IL5",
"SecretAccessKey": "avvnBTS+zHB...55vl9Qp",
"Token": "IQoJb3JpZ2luX2VjEPL//////////wEaDmFwLW5vcnRoZWFzdC0yIkgwRgIhAO1bRK...xjCJ3aihBjqwARkFYS+ye6qItlZqbjxOZbA4CEE79Pnn8Qt3UuRn+QqyFW1b7cseWPf24+LlKuyefBUENCpsoNNdJyF0+8FRCl2bG3vWRxfDl0TjlMmjlcB/k/tkdB8NrhAbjA/Y1g7q1va0Zgvu5so1R6yWFPrOSpp6PL6smibnVGb120++BLMC1VDgUEKM6wBX5mkJ2azjuCcWdj7Qbq3VXBNOGw/PCBVpF4jwbofcQxVKaxEvJc1m",
"Expiration": "2023-04-03T08:15:25Z"
}
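A credentials document like the one above is plain JSON once exfiltrated. A small sketch that parses it and computes how long the keys stay valid (the values here are made-up placeholders, not the real keys from the output above):

```python
import json
from datetime import datetime, timezone

# Hypothetical document mirroring the shape of the IMDS credentials response.
doc = """{
  "Code": "Success",
  "Type": "AWS-HMAC",
  "AccessKeyId": "ASIAEXAMPLEKEYID",
  "SecretAccessKey": "EXAMPLESECRETKEY",
  "Token": "EXAMPLESESSIONTOKEN",
  "Expiration": "2023-04-03T08:15:25Z"
}"""

creds = json.loads(doc)
expires = datetime.strptime(creds["Expiration"], "%Y-%m-%dT%H:%M:%SZ").replace(
    tzinfo=timezone.utc
)
remaining = expires - datetime.now(timezone.utc)
print(creds["AccessKeyId"], "valid until", expires, "-", remaining, "left")
```

The three values AccessKeyId, SecretAccessKey, and Token are all an AWS SDK needs; boto3, for example, accepts them as aws_access_key_id, aws_secret_access_key, and aws_session_token on a Session.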
Using the EC2 metadata IAM role token stolen from the pod,
let's use AWS services through the SDK with Python boto3.
# Create pods for using boto3
cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: boto3-pod
spec:
replicas: 2
selector:
matchLabels:
app: boto3
template:
metadata:
labels:
app: boto3
spec:
containers:
- name: boto3
image: jpbarto/boto3
command: ["tail"]
args: ["-f", "/dev/null"]
terminationGracePeriodSeconds: 0
EOF
# Check
(sparkandassociates:harbor) [root@kops-ec2 ~]# k get pod -o wide -l app=boto3
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
boto3-pod-7944d7b4db-d4z4n 1/1 Running 0 2m35s 172.30.58.42 i-0c421069027ec2d2d <none> <none>
boto3-pod-7944d7b4db-gnrpj 1/1 Running 0 12s 172.30.83.72 i-066cf2f8937746e50 <none> <none>
Let's start with a simple sample: querying instance information.
"Code to query instance information"
import boto3
ec2 = boto3.client('ec2', region_name = 'ap-northeast-2')
response = ec2.describe_instances()
print(response)
A note on the sample code:
the referenced example omits the region, which causes an error, so be sure to pass region_name to boto3.client.
(ref: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/ec2-example-managing-instances.html)
All the instance details are printed.
(Where metadata access is still blocked, the same code instead reports that no credential information is available.)
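describe_instances returns a nested Reservations/Instances structure; here is a stdlib-only sketch of pulling the instance IDs out of a response shaped like that (the dict below is a hand-made stand-in for the real API response, using the instance IDs from this walkthrough):

```python
# Hand-made stand-in shaped like ec2.describe_instances() output.
response = {
    "Reservations": [
        {"Instances": [{"InstanceId": "i-0c421069027ec2d2d", "State": {"Name": "running"}}]},
        {"Instances": [{"InstanceId": "i-066cf2f8937746e50", "State": {"Name": "running"}}]},
    ]
}

# Flatten the two nesting levels into a simple list of IDs.
instance_ids = [
    inst["InstanceId"]
    for reservation in response["Reservations"]
    for inst in reservation["Instances"]
]
print(instance_ids)
```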
"S3 file download code"
import boto3
s3 = boto3.client('s3')
s3.download_file('BUCKET_NAME', 'OBJECT_NAME', 'FILE_NAME')
Let's download the admin secret file from the S3 bucket "pkos2".
# cat s3.py
import boto3
BUCKET_NAME = 'pkos2'
OBJECT_NAME = 'sparkandassociates.net/secrets/admin'
FILE_NAME = 'admin'
s3 = boto3.client('s3', region_name = 'ap-northeast-2')
s3.download_file(BUCKET_NAME, OBJECT_NAME, FILE_NAME)
~/dev #
# Run the download code
~/dev # python s3.py
A 403 error occurs,
because the EC2 instance's IAM role lacks the needed S3 permission.
Let's add the permission.
The role currently has only list permission,
so without an object-get permission the object cannot be downloaded.
Add the "GetObject" permission.
After adding the permission, run the download code again.
The file was downloaded from the S3 bucket from inside the compromised pod.
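The permission added here corresponds to an IAM policy statement along these lines (a sketch; the bucket name is from this walkthrough, everything else is illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowObjectDownload",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::pkos2/*"
    }
  ]
}
```

Note the Resource ends in /* because s3:GetObject applies to objects, not the bucket itself.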
kubescape is a tool that checks a Kubernetes cluster for vulnerabilities;
a distinctive feature is that it can also scan YAML manifests and Helm charts.
(Image source: https://github.com/kubescape/kubescape/blob/master/docs/architecture.md )
curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash
## download artifacts
kubescape download artifacts
(sparkandassociates:harbor) [root@kops-ec2 ~]# tree ~/.kubescape/
/root/.kubescape/
├── allcontrols.json
├── armobest.json
├── attack-tracks.json
├── cis-eks-t1.2.0.json
├── cis-v1.23-t1.0.1.json
├── controls-inputs.json
├── devopsbest.json
├── exceptions.json
├── mitre.json
└── nsa.json
# The bundled controls can be checked as shown below.
kubescape list controls
(sparkandassociates:harbor) [root@kops-ec2 ~]# kubescape list controls
+------------+---------------------------------------------------------------+------------------------------------+------------+
| CONTROL ID | CONTROL NAME | DOCS | FRAMEWORKS |
+------------+---------------------------------------------------------------+------------------------------------+------------+
| C-0001 | Forbidden Container Registries | https://hub.armosec.io/docs/c-0001 | |
+------------+---------------------------------------------------------------+------------------------------------+------------+
| C-0002 | Exec into container | https://hub.armosec.io/docs/c-0002 | |
+------------+---------------------------------------------------------------+------------------------------------+------------+
| C-0004 | Resources memory limit and | https://hub.armosec.io/docs/c-0004 | |
| | request | | |
+------------+---------------------------------------------------------------+------------------------------------+------------+
| C-0005 | API server insecure port is | https://hub.armosec.io/docs/c-0005 | |
| | enabled | | |
+------------+---------------------------------------------------------------+------------------------------------+------------+
| C-0007 | Data Destruction | https://hub.armosec.io/docs/c-0007 | |
+------------+---------------------------------------------------------------+------------------------------------+------------+
| C-0009 | Resource limits | https://hub.armosec.io/docs/c-0009 | |
...
(sparkandassociates:harbor) [root@kops-ec2 ~]# kubescape scan --enable-host-scan --verbose
A pod named host-scanner is launched on each node and carries out the cluster checks.
kubescape-host-scanner host-scanner-j4t5z 1/1 Running 0 6s
kubescape-host-scanner host-scanner-n2v78 1/1 Running 0 6s
kubescape-host-scanner host-scanner-v7j7d 1/1 Running 0 6s
The scan results can be checked as shown below.
Controls: 65 (Failed: 35, Passed: 22, Action Required: 8)
Failed Resources by Severity: Critical - 0, High - 83, Medium - 370, Low - 128
+----------+-------------------------------------------------------+------------------+---------------+--------------------+
| SEVERITY | CONTROL NAME | FAILED RESOURCES | ALL RESOURCES | % RISK-SCORE |
+----------+-------------------------------------------------------+------------------+---------------+--------------------+
| Critical | API server insecure port is enabled | 0 | 1 | 0% |
| Critical | Disable anonymous access to Kubelet service | 0 | 3 | 0% |
| Critical | Enforce Kubelet client TLS authentication | 0 | 6 | 0% |
| Critical | CVE-2022-39328-grafana-auth-bypass | 0 | 1 | 0% |
| High | Forbidden Container Registries | 0 | 65 | Action Required * |
| High | Resources memory limit and request | 0 | 65 | Action Required * |
| High | Resource limits | 49 | 65 | 76% |
| High | Applications credentials in configuration files | 0 | 147 | Action Required * |
| High | List Kubernetes secrets | 20 | 108 | 19% |
| High | Host PID/IPC privileges | 1 | 65 | 1% |
| High | HostNetwork access | 6 | 65 | 8% |
| High | Writable hostPath mount | 3 | 65 | 4% |
| High | Insecure capabilities | 0 | 65 | 0% |
| High | HostPath mount | 3 | 65 | 4% |
| High | Resources CPU limit and request | 0 | 65 | Action Required * |
| High | Instance Metadata API | 0 | 0 | 0% |
| High | Privileged container | 1 | 65 | 1% |
| High | CVE-2021-25742-nginx-ingress-snippet-annotation-vu... | 0 | 1 | 0% |
| High | Workloads with Critical vulnerabilities exposed to... | 0 | 0 | Action Required ** |
| High | Workloads with RCE vulnerabilities exposed to exte... | 0 | 0 | Action Required ** |
| High | CVE-2022-23648-containerd-fs-escape | 0 | 3 | 0% |
| High | RBAC enabled | 0 | 1 | 0% |
| High | CVE-2022-47633-kyverno-signature-bypass | 0 | 0 | 0% |
| Medium | Exec into container | 2 | 108 | 2% |
| Medium | Data Destruction | 9 | 108 | 8% |
kubescape also offers a web service called ARMO.
Open portal.armo.cloud in a browser and sign up; it then presents the helm repo and chart install commands to run against the cluster you want scanned, which you can copy and execute as-is.
Running it on my cluster:
(sparkandassociates:harbor) [root@kops-ec2 ~]# helm repo add kubescape https://kubescape.github.io/helm-charts/ ; helm repo update ; helm upgrade --install kubescape kubescape/kubescape-cloud-operator -n kubescape --create-namespace --set clusterName=`kubectl config current-context` --set account=a014fa2a-98a6df
"kubescape" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "kubescape" chart repository
...Successfully got an update from the "harbor" chart repository
...Successfully got an update from the "argo" chart repository
...Successfully got an update from the "prometheus-community" chart repository
...Successfully got an update from the "gitlab" chart repository
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
Release "kubescape" does not exist. Installing it now.
NAME: kubescape
LAST DEPLOYED: Mon Apr 3 17:03:15 2023
NAMESPACE: kubescape
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing kubescape-cloud-operator version 1.10.8.
You can see and change the values of your's recurring configurations daily scan in the following link:
https://cloud.armosec.io/settings/assets/clusters/scheduled-scans?cluster=sparkandassociates-net
> kubectl -n kubescape get cj kubescape-scheduler -o=jsonpath='{.metadata.name}{"\t"}{.spec.schedule}{"\n"}'
You can see and change the values of your's recurring images daily scan in the following link:
https://cloud.armosec.io/settings/assets/images
> kubectl -n kubescape get cj kubevuln-scheduler -o=jsonpath='{.metadata.name}{"\t"}{.spec.schedule}{"\n"}'
See you!!!
(sparkandassociates:harbor) [root@kops-ec2 ~]# kubectl -n kubescape get all
NAME READY STATUS RESTARTS AGE
pod/gateway-5b987fff9f-98shv 1/1 Running 0 22h
pod/kollector-0 1/1 Running 0 22h
pod/kubescape-6884bcf5b7-22vtp 1/1 Running 0 22h
pod/kubescape-scheduler-28009247-dbbzv 0/1 Completed 0 9h
pod/kubevuln-6d964b688c-m45jm 1/1 Running 0 22h
pod/kubevuln-scheduler-28009808-mfblx 0/1 Completed 0 10m
pod/operator-867c5bcdff-gj7v8 1/1 Running 0 22h
pod/otel-collector-5f69f464d7-cr48x 1/1 Running 0 22h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/gateway ClusterIP 100.68.29.246 <none> 8001/TCP,8002/TCP 22h
service/kubescape ClusterIP 100.69.30.16 <none> 8080/TCP 22h
service/kubevuln ClusterIP 100.69.146.86 <none> 8080/TCP,8000/TCP 22h
service/operator ClusterIP 100.64.97.181 <none> 4002/TCP 22h
service/otel-collector ClusterIP 100.64.43.12 <none> 4317/TCP 22h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/gateway 1/1 1 1 22h
deployment.apps/kubescape 1/1 1 1 22h
deployment.apps/kubevuln 1/1 1 1 22h
deployment.apps/operator 1/1 1 1 22h
deployment.apps/otel-collector 1/1 1 1 22h
NAME DESIRED CURRENT READY AGE
replicaset.apps/gateway-5b987fff9f 1 1 1 22h
replicaset.apps/kubescape-6884bcf5b7 1 1 1 22h
replicaset.apps/kubevuln-6d964b688c 1 1 1 22h
replicaset.apps/operator-867c5bcdff 1 1 1 22h
replicaset.apps/otel-collector-5f69f464d7 1 1 1 22h
NAME READY AGE
statefulset.apps/kollector 1/1 22h
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
cronjob.batch/kubescape-scheduler 47 20 * * * False 0 9h 22h
cronjob.batch/kubevuln-scheduler 8 6 * * * False 0 10m 22h
NAME COMPLETIONS DURATION AGE
job.batch/kubescape-scheduler-28009247 1/1 6s 9h
job.batch/kubevuln-scheduler-28009808 1/1 4s 10m
The ARMO web service lists the clusters I registered along with their details,
and the security vulnerabilities of each cluster are scanned and shown as results.
Pressing the "FIX" button opens a YAML editor that highlights the offending lines
and tells you what values to put in to remediate the vulnerability.
That said, a security scan advises against generic security baselines,
so apply only what fits your own environment and configuration, and check "Ignore" to skip the rest.
This reminded me of my experience earning the CKS certification, so I pulled out a killer.sh-style problem again.
First, deploy a pod using a Deployment as shown below.
apiVersion: apps/v1
kind: Deployment
metadata:
name: immutable-deployment
labels:
app: immutable-deployment
spec:
replicas: 1
selector:
matchLabels:
app: immutable-deployment
template:
metadata:
labels:
app: immutable-deployment
spec:
containers:
- image: busybox:1.32.0
command: ['sh', '-c', 'tail -f /dev/null']
imagePullPolicy: IfNotPresent
name: busybox
restartPolicy: Always
Let's try creating a file at the / path of the deployed pod.
(sparkandassociates:harbor) [root@kops-ec2 ~]# k exec immutable-deployment-698dc94df9-xsdpt -- touch /abc.txt
(sparkandassociates:harbor) [root@kops-ec2 ~]# k exec immutable-deployment-698dc94df9-xsdpt -- ls -al /abc.txt
-rw-r--r-- 1 root root 0 Apr 7 13:56 /abc.txt
(sparkandassociates:harbor) [root@kops-ec2 ~]#
This time, let's apply readOnlyRootFilesystem in the security context.
apiVersion: apps/v1
kind: Deployment
metadata:
name: immutable-deployment
labels:
app: immutable-deployment
spec:
replicas: 1
selector:
matchLabels:
app: immutable-deployment
template:
metadata:
labels:
app: immutable-deployment
spec:
containers:
- image: busybox:1.32.0
command: ['sh', '-c', 'tail -f /dev/null']
imagePullPolicy: IfNotPresent
name: busybox
securityContext: # add
readOnlyRootFilesystem: true # add
volumeMounts: # add
- mountPath: /tmp # add
name: temp-vol # add
volumes: # add
- name: temp-vol # add
emptyDir: {} # add
restartPolicy: Always
After recreating it, let's try creating the file again.
(sparkandassociates:harbor) [root@kops-ec2 ~]# k delete -f 1.yaml
deployment.apps "immutable-deployment" deleted
(sparkandassociates:harbor) [root@kops-ec2 ~]# k create -f 1.yaml
deployment.apps/immutable-deployment created
(sparkandassociates:harbor) [root@kops-ec2 ~]#
(sparkandassociates:harbor) [root@kops-ec2 ~]# k exec immutable-deployment-6dc8987698-7stxq -- touch /abc.txt
touch: /abc.txt: Read-only file system
command terminated with exit code 1
(sparkandassociates:harbor) [root@kops-ec2 ~]#
Creation fails with a "Read-only file system" error. In other words, the hardening took effect.
Polaris is another security audit tool, and it too provides a web service.
# Install
kubectl create ns polaris
# Prepare the helm values (LoadBalancer-type dashboard service)
cat <<EOT > polaris-values.yaml
dashboard:
replicas: 1
service:
type: LoadBalancer
EOT
# Deploy
helm repo add fairwinds-stable https://charts.fairwinds.com/stable
helm install polaris fairwinds-stable/polaris --namespace polaris --version 5.7.2 -f polaris-values.yaml
# Connect a domain to the CLB via ExternalDNS
kubectl annotate service polaris-dashboard "external-dns.alpha.kubernetes.io/hostname=polaris.$KOPS_CLUSTER_NAME" -n polaris
# Check the web access URL and connect
(sparkandassociates:harbor) [root@kops-ec2 ~]# echo -e "Polaris Web URL = http://polaris.$KOPS_CLUSTER_NAME"
Polaris Web URL = http://polaris.sparkandassociates.net
Like this, Polaris points out security weaknesses and provides guides as well.
Running on-premises is a plus security-wise, but in terms of usability, diagnostics, and remediation guidance,
the kubescape/ARMO combination used earlier seems superior.
Create a new service account (SA), grant it read-only permission at the cluster level (including all namespaces), and test it.
(sparkandassociates:harbor) [root@kops-ec2 ~]# k create sa master
(sparkandassociates:harbor) [root@kops-ec2 ~]# kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods
clusterrole.rbac.authorization.k8s.io/pod-reader created
(sparkandassociates:harbor) [root@kops-ec2 ~]# k create clusterrolebinding master --clusterrole pod-reader --serviceaccount default:master
clusterrolebinding.rbac.authorization.k8s.io/master created
# Check
(sparkandassociates:harbor) [root@kops-ec2 ~]# k get sa master -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2023-04-07T14:13:33Z"
name: master
namespace: harbor
resourceVersion: "5061627"
uid: 4527c4ba-f8e6-4987-a86a-007538ede6ab
(sparkandassociates:harbor) [root@kops-ec2 ~]# k get clusterrole pod-reader -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: "2023-04-07T14:23:06Z"
name: pod-reader
resourceVersion: "5063951"
uid: 9b4f48a6-810c-40fb-99e6-68276adf9ece
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- list
- watch
(sparkandassociates:harbor) [root@kops-ec2 ~]# k get clusterrolebindings master
NAME ROLE AGE
master ClusterRole/pod-reader 52s
(sparkandassociates:harbor) [root@kops-ec2 ~]# k get clusterrolebindings master -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
creationTimestamp: "2023-04-07T14:25:21Z"
name: master
resourceVersion: "5064502"
uid: 1485d98d-d7ed-410f-833a-05a3716530f2
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: pod-reader
subjects:
- kind: ServiceAccount
name: master
namespace: default
To check the permissions of the newly created SA "master", use kubectl's auth can-i.
## Check pod create permission -> "no"
(sparkandassociates:harbor) [root@kops-ec2 ~]# k auth can-i create pod --as system:serviceaccount:default:master
no
## Pod read permission -> "yes"
(sparkandassociates:harbor) [root@kops-ec2 ~]# k auth can-i get pod --as system:serviceaccount:default:master
yes
## Check the ungranted secret permission -> "no"
(sparkandassociates:harbor) [root@kops-ec2 ~]# k auth can-i get secret --as system:serviceaccount:default:master
no