Put these files under the secret project directory.
Create the certificate first ~
👻 Setting up the certificate environment
yji@k8s-master:~/sec-https$ mkdir -p ./secret/cert
yji@k8s-master:~/sec-https$ mkdir -p ./secret/config
yji@k8s-master:~/sec-https$ mkdir -p ./secret/kubetmp
yji@k8s-master:~/sec-https$ cd ./secret/cert
yji@k8s-master:~/sec-https/secret/cert$ openssl genrsa -out https.key 2048
Generating RSA private key, 2048 bit long modulus (2 primes)
...............................................+++++
....................................+++++
e is 65537 (0x010001)
yji@k8s-master:~/sec-https/secret/cert$ openssl req -new -x509 -key https.key -out https.cert -days 360 -subj /CN=*.kakao.io
yji@k8s-master:~/sec-https/secret/cert$ ls
https.cert https.key
yji@k8s-master:~/sec-https/secret/cert$ kubectl create secret generic dshub-https --from-file=https.key --from-file=https.cert
secret/dshub-https created
yji@k8s-master:~/sec-https/secret/cert$ ls
https.cert https.key
yji@k8s-master:~/sec-https/secret/cert$ kubectl get secret/dshub-https
NAME TYPE DATA AGE
dshub-https Opaque 2 15s
yji@k8s-master:~/sec-https/secret/cert$ cd ../config
yji@k8s-master:~/sec-https/secret/config$ vim custom-nginx-config.conf
yji@k8s-master:~/sec-https/secret/config$ cat custom-nginx-config.conf
server {
    listen              8080;
    listen              443 ssl;
    server_name         www.kakao.io;
    ssl_certificate     certs/https.cert;
    ssl_certificate_key certs/https.key;
    ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers         HIGH:!aNULL:!MD5;
    gzip on;
    gzip_types text/plain application/xml;
    location / {
        root  /usr/share/nginx/html;
        index index.html index.htm;
    }
}
yji@k8s-master:~/sec-https/secret/config$ vi sleep-interval
yji@k8s-master:~/sec-https/secret/config$ cat sleep-interval
5
yji@k8s-master:~/sec-https/secret/config$ cd ..
yji@k8s-master:~/sec-https/secret$ ls
cert config kubetmp
yji@k8s-master:~/sec-https/secret$ kubectl create cm dshub-config --from-file=./config
configmap/dshub-config created
yji@k8s-master:~/sec-https/secret$ cd ..
yji@k8s-master:~/sec-https$ vim https-pod.yaml
yji@k8s-master:~/sec-https$ cat https-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dshub-https
spec:
  containers:
  - image: dbgurum/k8s-lab:env
    env:
    - name: INTERVAL
      valueFrom:
        configMapKeyRef:
          name: dshub-config
          key: sleep-interval
    name: html-generator
    volumeMounts:
    - name: html
      mountPath: /var/htdocs
  - image: nginx:alpine
    name: web-server
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
    - name: config
      mountPath: /etc/nginx/conf.d
      readOnly: true
    - name: certs
      mountPath: /etc/nginx/certs/
      readOnly: true
    ports:
    - containerPort: 80
    - containerPort: 443
  volumes:
  - name: html
    emptyDir: {}
  - name: config
    configMap:
      name: dshub-config
      items:
      - key: custom-nginx-config.conf
        path: https.conf
  - name: certs
    secret:
      secretName: dshub-https
yji@k8s-master:~/sec-https$ kubectl apply -f https-pod.yaml
pod/dshub-https created
yji@k8s-master:~/sec-https$ kubectl get po
NAME READY STATUS RESTARTS AGE
dshub-https 2/2 Running 0 5s
web-deploy-595d4d579d-d278j 1/1 Running 0 21m
web-deploy-595d4d579d-wgvm9 1/1 Running 0 21m
yji@k8s-master:~/sec-https$ kubectl get po dshub-https -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dshub-https 2/2 Running 0 12s 10.111.156.113 k8s-node1 <none> <none>
👻 Publish the service from terminal 1
yji@k8s-master:~/sec-https$ kubectl port-forward dshub-https 8443:443 &
[1] 31329
yji@k8s-master:~/sec-https$ Forwarding from 127.0.0.1:8443 -> 443
Forwarding from [::1]:8443 -> 443
👻 Connect from terminal 2
yji@k8s-master:~$ curl https://localhost:8443 -k -v
* Trying 127.0.0.1:8443...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: CN=*.kakao.io
* start date: Oct 7 00:12:10 2022 GMT
* expire date: Oct 2 00:12:10 2023 GMT
* issuer: CN=*.kakao.io
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET / HTTP/1.1
> Host: localhost:8443
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.23.1
< Date: Fri, 07 Oct 2022 01:04:17 GMT
< Content-Type: text/html
< Content-Length: 100
< Last-Modified: Fri, 07 Oct 2022 01:04:13 GMT
< Connection: keep-alive
< ETag: "633f7b0d-64"
< Accept-Ranges: bytes
<
Next Friday will not be your lucky day. As a matter of fact, you don't
have a lucky day this year.
* Connection #0 to host localhost left intact
👻 Check the log in terminal 1
Handling connection for 8443   < this line is newly printed
👻 Once the secret is registered, TLS can be used
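A quick way to confirm the secret really is mounted in the web-server container (same Pod and mount path as in https-pod.yaml above); the expected listing is just the two keys of the dshub-https secret:
kubectl exec dshub-https -c web-server -- ls /etc/nginx/certs
# should list: https.cert  https.key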
👻 Create the Deployment
👻 Create the Service
👻 Create the Secret
yji@k8s-master:~/ing-https$ kubectl get secrets
NAME TYPE DATA AGE
account-pwd-secret Opaque 2 20h
dshub-https Opaque 2 63m
my-pwd Opaque 1 21h
sec-dev Opaque 1 21h
yji@k8s-master:~/ing-https$ openssl genrsa -out server.key 2048
Generating RSA private key, 2048 bit long modulus (2 primes)
.....................+++++
...+++++
e is 65537 (0x010001)
yji@k8s-master:~/ing-https$ openssl req -new -x509 -key server.key -out server.cert -days 360 -subj /CN=php.kakao.io,goapp.kakao.io
yji@k8s-master:~/ing-https$ kubectl create secret tls k8s-secret --cert=server.cert --key=server.key
secret/k8s-secret created
yji@k8s-master:~/ing-https$ kubectl get secrets
NAME TYPE DATA AGE
account-pwd-secret Opaque 2 20h
dshub-https Opaque 2 64m
🐣 k8s-secret kubernetes.io/tls 2 3s
my-pwd Opaque 1 21h
sec-dev Opaque 1 21h
👻 Create the Ingress
yji@k8s-master:~/ing-https$ kubectl apply -f phpserver-go-ingress.yaml
ingress.networking.k8s.io/phpserver-goapp-ingress created
yji@k8s-master:~/ing-https$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
phpserver-goapp-ingress <none> php.kakao.io,goapp.kakao.io 80, 443 4s
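phpserver-go-ingress.yaml itself isn't captured in the transcript; a minimal sketch consistent with the two hosts and the k8s-secret TLS secret above (the backend Service names and ports are assumptions, not from the transcript):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: phpserver-goapp-ingress
spec:
  tls:
  - hosts:
    - php.kakao.io
    - goapp.kakao.io
    secretName: k8s-secret
  rules:
  - host: php.kakao.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: phpserver-svc   # assumed Service name
            port:
              number: 8080        # assumed port
  - host: goapp.kakao.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: goapp-svc       # assumed Service name
            port:
              number: 9090        # assumed port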
spec:
  replicas:   # maintain the desired number of replicas
  selector:   # check for Pods matching the label and attach them
  template:   # if no matching Pod exists, create a new Pod from this template (format)
    metadata:
    spec:
vim nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    app: myweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - image: nginx:1.14
        name: myweb-container
        ports:
        - containerPort: 80
yji@k8s-master:~/LABs/deploy$ kubectl apply -f nginx-deploy.yaml
deployment.apps/nginx-deploy created
🐣 Supposedly an rs (ReplicaSet) gets auto-created here, but where do you see it? (see the sketch after the output below)
yji@k8s-master:~/LABs/deploy$ kubectl get deploy,po -o wide | grep nginx
deployment.apps/nginx-deploy 3/3 3 3 2m55s myweb-container nginx:1.14 app=myweb
pod/nginx-deploy-7fccc9d87f-78bxs 1/1 Running 0 2m55s 10.109.131.5 k8s-node2 <none> <none>
pod/nginx-deploy-7fccc9d87f-pljh7 1/1 Running 0 2m55s 10.111.156.120 k8s-node1 <none> <none>
pod/nginx-deploy-7fccc9d87f-rm2bn 1/1 Running 0 2m55s 10.111.156.115 k8s-node1 <none> <none>
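To answer the 🐣 question above: the auto-created ReplicaSet can be listed directly, and its hash is the 7fccc9d87f fragment in the Pod names:
kubectl get rs | grep nginx-deploy
# expect something like: nginx-deploy-7fccc9d87f   3   3   3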
👻 Setup for watching the rolling update (and self-healing) live
Terminal 1) kubectl get po -w   (leave a watch running)
+ and open one more terminal
Terminal 2) kubectl delete po nginx-deploy-7fccc9d87f-rm2bn
Now watch the output in terminal 1
> 🐣 The moment the Pod goes Terminating, a replacement Pod is created!
🐣 Even if you delete all 3 Pods at once, 3 are recreated -> Desired state
kubectl get deploy nginx-deploy -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
👻 Now let's upgrade the nginx version!
(open another terminal and watch with kubectl get po -w)
yji@k8s-master:~/LABs/deploy$ kubectl get deploy nginx-deploy -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
nginx:1.14
yji@k8s-master:~/LABs/deploy$ kubectl set image deploy nginx-deploy myweb-container=nginx:1.17
deployment.apps/nginx-deploy image updated
yji@k8s-master:~/LABs/deploy$ kubectl get deploy nginx-deploy -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
nginx:1.17
yji@k8s-master:~/LABs/deploy$ kubectl rollout history deploy nginx-deploy
deployment.apps/nginx-deploy
REVISION CHANGE-CAUSE
1 <none>
2 <none>
yji@k8s-master:~/LABs/deploy$ kubectl rollout undo deploy nginx-deploy
deployment.apps/nginx-deploy rolled back
yji@k8s-master:~/LABs/deploy$ kubectl rollout history deploy nginx-deploy
deployment.apps/nginx-deploy
REVISION CHANGE-CAUSE
2 <none>
3 <none>
🐣 Looks like revision 1 was brought back and re-recorded as revision 3
yji@k8s-master:~/LABs/deploy$ kubectl set image deploy nginx-deploy myweb-container=nginx:1.21
deployment.apps/nginx-deploy image updated
yji@k8s-master:~/LABs/deploy$ kubectl get deploy nginx-deploy -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
nginx:1.21
yji@k8s-master:~/LABs/deploy$ kubectl rollout history deploy nginx-deploy
deployment.apps/nginx-deploy
REVISION CHANGE-CAUSE
2 <none>
3 <none>
4 <none>
👻 Check by revision number!
👻 Up to 10 revisions are kept (the default)
yji@k8s-master:~/LABs/deploy$ kubectl rollout history deploy nginx-deploy --revision=2
Image: nginx:1.17
yji@k8s-master:~/LABs/deploy$ kubectl rollout history deploy nginx-deploy --revision=3
Image: nginx:1.14
yji@k8s-master:~/LABs/deploy$ kubectl rollout history deploy nginx-deploy --revision=4
Image: nginx:1.21
👻 Roll back to a version identified by revision number!
👻 undo : roll back to the immediately previous rollout
👻 Go back to revision 2 (nginx:1.17)
yji@k8s-master:~/LABs/deploy$ kubectl rollout undo deploy nginx-deploy --to-revision=2
deployment.apps/nginx-deploy rolled back
yji@k8s-master:~/LABs/deploy$ kubectl get deploy nginx-deploy -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
nginx:1.17
👻 Update to nginx:1.23
yji@k8s-master:~/LABs/deploy$ kubectl set image deploy nginx-deploy myweb-container=nginx:1.23
deployment.apps/nginx-deploy image updated
👻 Check the version
yji@k8s-master:~/LABs/deploy$ kubectl get deploy nginx-deploy -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
nginx:1.23
👻 Check the rollout history
yji@k8s-master:~/LABs/deploy$ kubectl rollout history deploy nginx-deploy
deployment.apps/nginx-deploy
REVISION CHANGE-CAUSE
3 <none>
4 <none>
5 <none>
6 <none>
👻 Check each version by rollout revision number
yji@k8s-master:~/LABs/deploy$ kubectl rollout history deploy nginx-deploy --revision=3 | grep Image
    Image:      nginx:1.14
yji@k8s-master:~/LABs/deploy$ kubectl rollout history deploy nginx-deploy --revision=4 | grep Image
    Image:      nginx:1.21
yji@k8s-master:~/LABs/deploy$ kubectl rollout history deploy nginx-deploy --revision=5 | grep Image
    Image:      nginx:1.17
yji@k8s-master:~/LABs/deploy$ kubectl rollout history deploy nginx-deploy --revision=6 | grep Image
    Image:      nginx:1.23
yji@k8s-master:~/LABs/deploy$ kubectl scale deploy nginx-deploy --replicas=4
deployment.apps/nginx-deploy scaled
yji@k8s-master:~/LABs/deploy$ kubectl scale deploy nginx-deploy --replicas=1
deployment.apps/nginx-deploy scaled
Set the replicas back to 3.
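That would be:
kubectl scale deploy nginx-deploy --replicas=3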
yji@k8s-master:~/LABs/deploy$ kubectl rollout history deploy nginx-deploy
deployment.apps/nginx-deploy
REVISION CHANGE-CAUSE
3 <none>
4 <none>
5 <none>
6 <none>
🐣🐣 -> revisionHistoryLimit=10 (the default)
🐣🐣 -> at most 10 revisions are stored.
🐣🐣 What if you want to change that?
🐣🐣 revisionHistoryLimit: n
🐣🐣 The oldest revisions are dropped first, so at most n are kept!
## nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    app: myweb
spec:
  revisionHistoryLimit: 5   # 🐣 add this option!
  replicas: 3
  selector:
    matchLabels:
      app: myweb
yji@k8s-master:~/LABs/lab-redis$ cat redisdb.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-db
  labels:
    app: redisdb
spec:
  revisionHistoryLimit: 3
  replicas: 2
  selector:
    matchLabels:
      app: redisdb
  template:
    metadata:
      labels:
        app: redisdb
    spec:
      containers:
      - image: redis:5
        name: redisdb
yji@k8s-master:~/LABs/lab-redis$ kubectl apply -f redisdb.yaml
deployment.apps/redis-db created
// Now the version needs to be changed
yji@k8s-master:~/LABs/lab-redis$ kubectl create deployment redis-deploy --image=nginx:1.23.1-alpine --dry-run=client -o yaml > redis-deploy.yaml
yji@k8s-master:~/LABs/lab-redis$ cat redis-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: redis-deploy
  name: redis-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-deploy
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: redis-deploy
    spec:
      containers:
      - image: nginx:1.23.1-alpine
        name: nginx
        resources: {}
status: {}
yji@k8s-master:~/LABs/lab-redis$ kubectl create deployment myweb88 --image=nginx:1.23.1-alpine --port=80
deployment.apps/myweb88 created
yji@k8s-master:~/LABs/lab-redis$ kubectl get deploy,po,svc | grep -i myweb
deployment.apps/myweb88 1/1 1 1 17s
pod/myweb88-75b5d55ccd-d6f5n 1/1 Running 0 17s
👻 Add the --replicas option too
yji@k8s-master:~/LABs/lab-redis$ kubectl create deployment myweb88-replica --image=nginx:1.23.1-alpine --port=80 --replicas=3
deployment.apps/myweb88-replica created
yji@k8s-master:~/LABs/lab-redis$ kubectl get deploy,po,svc | grep -i myweb88-replica
deployment.apps/myweb88-replica 3/3 3 3 16s
pod/myweb88-replica-6dc4b86fbf-7gpz6 1/1 Running 0 16s
pod/myweb88-replica-6dc4b86fbf-q5bh4 1/1 Running 0 16s
pod/myweb88-replica-6dc4b86fbf-xl6wc 1/1 Running 0 16s
If you don't give a type, it defaults to ClusterIP.
👻 Now create the actual Service! expose!
yji@k8s-master:~/LABs/lab-redis$ kubectl expose deploy myweb88-replica --name=myweb-svc --port=8765 --target-port=80 --dry-run=client -o yaml > myweb-svc.yaml
yji@k8s-master:~/LABs/lab-redis$ cat myweb-svc.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: myweb88-replica
  name: myweb-svc
spec:
  ports:
  - port: 8765
    protocol: TCP
    targetPort: 80
  selector:
    app: myweb88-replica
status:
  loadBalancer: {}
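The apply step isn't captured in the transcript, but since the Service shows up below it was presumably:
kubectl apply -f myweb-svc.yaml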
👻 Verify
yji@k8s-master:~/LABs/lab-redis$ kubectl get deploy,po,svc -o wide | grep -i myweb88-re
deployment.apps/myweb88-replica 3/3 3 3 2m57s nginx nginx:1.23.1-alpine app=myweb88-replica
pod/myweb88-replica-6dc4b86fbf-7gpz6 1/1 Running 0 2m57s 10.109.131.37 k8s-node2 <none> <none>
pod/myweb88-replica-6dc4b86fbf-q5bh4 1/1 Running 0 2m57s 10.111.156.86 k8s-node1 <none> <none>
pod/myweb88-replica-6dc4b86fbf-xl6wc 1/1 Running 0 2m57s 10.111.156.78 k8s-node1 <none> <none>
yji@k8s-master:~/LABs/lab-redis$ kubectl get deploy,po,svc -o wide | grep -i myweb88-
deployment.apps/myweb88-replica 3/3 3 3 4m58s nginx nginx:1.23.1-alpine app=myweb88-replica
pod/myweb88-75b5d55ccd-d6f5n 1/1 Running 0 5m53s 10.111.156.70 k8s-node1 <none> <none>
pod/myweb88-replica-6dc4b86fbf-7gpz6 1/1 Running 0 4m58s 10.109.131.37 k8s-node2 <none> <none>
pod/myweb88-replica-6dc4b86fbf-q5bh4 1/1 Running 0 4m58s 10.111.156.86 k8s-node1 <none> <none>
pod/myweb88-replica-6dc4b86fbf-xl6wc 1/1 Running 0 4m58s 10.111.156.78 k8s-node1 <none> <none>
service/myweb-svc ClusterIP 10.102.188.211 <none> 8765/TCP 5s app=myweb88-replica
yji@k8s-master:~/LABs/lab-redis$ curl 10.102.188.211:8765
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
yji@k8s-master:~/LABs/lab-redis$ kubectl create deployment front-end --image=nginx:1.23.1-alpine --port=80
deployment.apps/front-end created
yji@k8s-master:~/LABs/lab-redis$ kubectl expose deploy front-end --name=front-end-svc --type=NodePort --target-port=80
service/front-end-svc exposed
yji@k8s-master:~/LABs/lab-redis$ kubectl get po,deploy,svc -o wide| grep front
pod/front-end-cfc89f47f-5p7f9 1/1 Running 0 2m23s 10.109.131.35 k8s-node2 <none> <none>
deployment.apps/front-end 1/1 1 1 2m23s nginx nginx:1.23.1-alpine app=front-end
service/front-end-svc NodePort 10.103.35.90 <none> 80:31831/TCP 70s app=front-end
yji@k8s-master:~/LABs/lab-redis$ curl 192.168.56.102:31831
<!DOCTYPE html>
...
</html>
yji@k8s-master:~/LABs/lab-redis$ curl 10.103.35.90:80
<!DOCTYPE html>
...
</html>
Me) I was going to rename the container, thinking back to the earlier lab ,,
Fill it in while looking at the screenshot ~ ~
kubectl set image deploy nginx-deploy myweb-container=nginx:1.17
kubectl create deployment nginx-app --
kubectl create deployment kual100201 --image=nginx --replicas=7 labels=app_runtime_stage=dev
Me) But when I checked kubectl create deployment --help, there was no label option
So then
kubectl create deploy kual100201 --image=nginx --dry-run=client -o yaml > /opt/KUAL00201/spec_deployment.yaml
vim /opt/KUAL00201/spec_deployment.yaml
and hard-code the label in there with vim + the replicas (one possible result is sketched below)
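One reasonable shape for the edited file (generated boilerplate trimmed; exactly where the label has to go depends on the task, so treat this as a sketch):
# /opt/KUAL00201/spec_deployment.yaml (after editing)
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app_runtime_stage: dev        # label added by hand
  name: kual100201
spec:
  replicas: 7                     # bumped from the generated 1
  selector:
    matchLabels:
      app: kual100201
  template:
    metadata:
      labels:
        app: kual100201
    spec:
      containers:
      - image: nginx
        name: nginx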
Create a deployment, attach a Service to it, and try nslookup
kubectl create deployment nginx-random --image=nginx --port=80 --replicas=3
kubectl expose deploy nginx-random --target-port=80 --port=9999
kubectl get deploy,po,svc -o wide
kubectl run nginx-random --image=busybox --rm -it -- nslookup 10.111.156.94
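That nslookup targets a raw IP; to resolve the Service by DNS name instead (the expose above created a Service named nginx-random on port 9999), something like:
kubectl run dns-test --image=busybox --rm -it --restart=Never -- nslookup nginx-random.default.svc.cluster.local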
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl edit deployment -n kube-system metrics-server
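The transcript doesn't show what gets changed in that edit; in a lab cluster with self-signed kubelet certificates the usual tweak is adding the insecure-TLS flag to the metrics-server container args (an assumption here, not from the transcript):
spec:
  template:
    spec:
      containers:
      - name: metrics-server
        args:
        # ...existing args kept as-is...
        - --kubelet-insecure-tls   # skip kubelet cert verification (lab only)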
mkdir hpa && cd $_
vim hpa-cpu50.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-cpu50
  labels:
    resource: cpu
spec:
  replicas: 2
  selector:
    matchLabels:
      resource: cpu
  template:
    metadata:
      labels:
        resource: cpu
    spec:
      containers:
      - image: dbgurum/k8s-lab:hpa
        name: hpa-cpu50
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 10m
          limits:
            cpu: 20m
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-svc
spec:
  selector:
    resource: cpu
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30008
  type: NodePort
yji@k8s-master:~/LABs/hpa$ kubectl apply -f hpa-cpu50.yaml
deployment.apps/hpa-cpu50 created
service/hpa-svc created
# kubectl expose deploy hpa-cpu50 --name=hpa-cpu50-svc --port=8080 --target-port=8080 --type=NodePort
yji@k8s-master:~/LABs/hpa$ kubectl get deploy,po,svc -o wide | grep hpa
deployment.apps/hpa-cpu50 2/2 2 2 79s hpa-cpu50 dbgurum/k8s-lab:hpa resource=cpu
pod/hpa-cpu50-8676c77c77-6dbmv 1/1 Running 0 79s 10.111.156.95 k8s-node1 <none> <none>
pod/hpa-cpu50-8676c77c77-j8tq5 1/1 Running 0 79s 10.109.131.38 k8s-node2 <none> <none>
service/hpa-svc NodePort 10.108.185.15 <none> 8080:30008/TCP 79s resource=cpu
👻 Use describe to check that what I wrote was applied correctly
kubectl describe deployment.apps hpa-cpu50
Limits:
cpu: 20m
Requests:
cpu: 10m
--- (missed this part)
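The missed step is presumably creating the HPA itself; a sketch for this Deployment (the 50 in the name suggests a 50% CPU target, and min/max are guesses):
kubectl autoscale deploy hpa-cpu50 --min=2 --max=10 --cpu-percent=50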
kubectl get hpa -w
kubectl get deploy,po,svc -o wide | grep hpa
while true; do curl 192.168.56.101:30008/hostname; sleep 0.05; done
You can watch the replica count fluctuate + watch the HPA change with kubectl get hpa -w
kubectl autoscale deploy webapp --min=10 --max=20 --cpu-percent=85
kubectl get hpa
kubectl get pod -l app=webapp
Stay focused!!
wordpress -> expose it via NodePort so it can be opened in Chrome
👻 Create the PVs for WordPress/MySQL
vim web-db-pv1.yaml
vim web-db-pv2.yaml
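The PV manifests aren't shown; a sketch consistent with the get pv output below (10Gi, RWO, Retain). The hostPath location is an assumption:
# web-db-pv1.yaml (web-db-pv2.yaml is the same with the name/path changed)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: web-db-pv1
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/web-db-pv1        # assumed path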
yji@k8s-master:~/LABs/web-db$ kubectl apply -f web-db-pv1.yaml
persistentvolume/web-db-pv1 created
yji@k8s-master:~/LABs/web-db$ kubectl apply -f web-db-pv2.yaml
persistentvolume/web-db-pv2 created
yji@k8s-master:~/LABs/web-db$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
web-db-pv1 10Gi RWO Retain Released default/pvc3 15m
web-db-pv2 10Gi RWO Retain Released default/wordpress-pvc
yji@k8s-master:~/LABs/web-db$ vi wordpress-pvc.yaml
yji@k8s-master:~/LABs/web-db$ vi mysql-pvc.yaml
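A matching PVC sketch (10Gi, RWO, as confirmed by the get pvc output below; no storageClassName so it binds to the pre-created PVs, assuming no default StorageClass):
# wordpress-pvc.yaml (mysql-pvc.yaml is the same with the name changed)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi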
yji@k8s-master:~/LABs/web-db$ kubectl apply -f wordpress-pvc.yaml
persistentvolumeclaim/wordpress-pvc created
yji@k8s-master:~/LABs/web-db$ kubectl apply -f mysql-pvc.yaml
persistentvolumeclaim/mysql-pvc created
yji@k8s-master:~/LABs/web-db$ kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/mysql-pvc Bound web-db-pv2 10Gi RWO 107s
persistentvolumeclaim/wordpress-pvc Bound web-db-pv1 10Gi RWO 113s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/web-db-pv1 10Gi RWO Retain Bound default/wordpress-pvc 14s
persistentvolume/web-db-pv2 10Gi RWO Retain Bound default/mysql-pvc 12s
👻 Create a Secret object to hold the MySQL password
yji@k8s-master:~/LABs/web-db$ kubectl create secret generic mysql-pwd --from-literal=password=password
secret/mysql-pwd created
yji@k8s-master:~/LABs/web-db$ kubectl describe secret mysql-pwd
Name: mysql-pwd
====
password: 8 bytes
yji@k8s-master:~/LABs/web-db$ vi mysql-deploy.yaml
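mysql-deploy.yaml isn't printed, but its shape can be reconstructed from the env vars seen inside the container later and the mysql-pvc above; a sketch (the /var/lib/mysql mount path is the usual one and an assumption here):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pwd
              key: password
        - name: MYSQL_DATABASE
          value: kube-db
        - name: MYSQL_USER
          value: kube-user
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pwd
              key: password
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql   # assumed mount path
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pvc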
yji@k8s-master:~/LABs/web-db$ kubectl apply -f mysql-deploy.yaml
deployment.apps/mysql created
yji@k8s-master:~/LABs/web-db$ kubectl get deploy,pods -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/mysql 1/1 1 1 8s mysql mysql:5.6 app=mysql
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/mysql-68bb76b976-9k72l 1/1 Running 0 8s 10.111.156.81 k8s-node1 <none> <none>
yji@k8s-master:~/LABs/web-db$ vim mysql-service.yaml
yji@k8s-master:~/LABs/web-db$ kubectl apply -f mysql-service.yaml
service/mysql created
yji@k8s-master:~/LABs/web-db$ kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d <none>
mysql ClusterIP 10.102.41.80 <none> 3306/TCP 7s app=mysql
yji@k8s-master:~/LABs/web-db$ kubectl apply -f wordpress-deploy.yaml
deployment.apps/wordpress created
yji@k8s-master:~/LABs/web-db$ kubectl get deploy,po -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/mysql 1/1 1 1 10m mysql mysql:5.6 app=mysql
deployment.apps/wordpress 1/1 1 1 2m3s wordpress wordpress app=wordpress
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/mysql-68bb76b976-9k72l 1/1 Running 0 10m 10.111.156.81 k8s-node1 <none> <none>
pod/wordpress-b56cfb79-hspr8 1/1 Running 0 2m2s 10.109.131.44 k8s-node2 <none> <none>
yji@k8s-master:~/LABs/web-db$ vim wordpress-service.yaml
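A sketch of wordpress-service.yaml consistent with the 80:31859/TCP NodePort shown below (the node port itself was probably auto-assigned rather than fixed in the file):
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: NodePort
  selector:
    app: wordpress
  ports:
  - port: 80
    targetPort: 80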
yji@k8s-master:~/LABs/web-db$ kubectl apply -f wordpress-service.yaml
service/wordpress created
yji@k8s-master:~/LABs/web-db$ kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d <none>
mysql ClusterIP 10.102.41.80 <none> 3306/TCP 10m app=mysql
wordpress NodePort 10.109.69.218 <none> 80:31859/TCP 17s app=wordpress
yji@k8s-master:~/LABs/web-db$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-68bb76b976-9k72l 1/1 Running 0 12m 10.111.156.81 k8s-node1 <none> <none>
wordpress-b56cfb79-hspr8 1/1 Running 0 3m49s 10.109.131.44 k8s-node2 <none> <none>
yji@k8s-master:~/LABs/web-db$ kubectl get pod/wordpress-b56cfb79-hspr8 -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
wordpress-b56cfb79-hspr8 1/1 Running 0 4m10s 10.109.131.44 k8s-node2 <none> <none>
yji@k8s-master:~/LABs/web-db$ kubectl describe pod/wordpress-b56cfb79-hspr8
Name: wordpress-b56cfb79-hspr8
Namespace: default
Priority: 0
Node: k8s-node2/192.168.56.102
Start Time: Fri, 07 Oct 2022 15:52:58 +0900
Labels: app=wordpress
pod-template-hash=b56cfb79
Annotations: cni.projectcalico.org/containerID: 3512e58e7047dc92e20a66fef60fe0dfbb87ba8b0fe403a5c91bca5849d02e4f
cni.projectcalico.org/podIP: 10.109.131.44/32
cni.projectcalico.org/podIPs: 10.109.131.44/32
Status: Running
IP: 10.109.131.44
IPs:
IP: 10.109.131.44
Controlled By: ReplicaSet/wordpress-b56cfb79
Containers:
wordpress:
Container ID: containerd://e93581fb995341c4ec6be7ead733620dd693e34519e9b691f86b577420dd3b44
Image: wordpress
Image ID: docker.io/library/wordpress@sha256:6003ce1cc14ed9d83c3df5593b3359cc8e4236fb80238e9530a0beeb6c60f688
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 07 Oct 2022 15:54:01 +0900
Ready: True
Restart Count: 0
Environment:
WORDPRESS_DB_HOST: mysql:3306
WORDPRESS_DB_NAME: kube-db
WORDPRESS_DB_USER: kube-user
WORDPRESS_DB_PASSWORD: password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fwbc4 (ro)
/var/www/html from wordpress-persistent-storage (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
wordpress-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: wordpress-pvc
ReadOnly: false
kube-api-access-fwbc4:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m30s default-scheduler Successfully assigned default/wordpress-b56cfb79-hspr8 to k8s-node2
Normal Pulling 4m29s kubelet Pulling image "wordpress"
Normal Pulled 3m31s kubelet Successfully pulled image "wordpress" in 58.623998764s
Normal Created 3m31s kubelet Created container wordpress
Normal Started 3m27s kubelet Started container wordpress
yji@k8s-master:~/LABs/web-db$ kubectl exec -it mysql-68bb76b976-9k72l -- bash
root@mysql-68bb76b976-9k72l:/# env | grep MYSQL
MYSQL_PASSWORD=password
MYSQL_DATABASE=kube-db
MYSQL_ROOT_PASSWORD=password
MYSQL_MAJOR=5.6
MYSQL_USER=kube-user
MYSQL_VERSION=5.6.51-1debian9
MYSQL_ROOT_HOST=%
root@mysql-68bb76b976-9k72l:/# mysql -uroot -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.51 MySQL Community Server (GPL)
Copyright (c) 2000, 2021, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show tables;
Empty set (0.01 sec)
👻 Open http://192.168.56.102:31859/ in Chrome
👻 After finishing the WordPress install on that page, the tables are created.
mysql> show tables;
+-----------------------+
| Tables_in_kube-db |
+-----------------------+
| wp_commentmeta |
| wp_comments |
| wp_links |
| wp_options |
| wp_postmeta |
| wp_posts |
| wp_term_relationships |
| wp_term_taxonomy |
| wp_termmeta |
| wp_terms |
| wp_usermeta |
| wp_users |
+-----------------------+
12 rows in set (0.00 sec)
In a cloud environment, is the PVC alone enough? (yes: with a StorageClass, dynamic provisioning creates the PV automatically)
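In that case only the claim is written and a StorageClass provisions the PV; a sketch (the storageClassName is provider-specific, so it's an assumption here):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-pvc
spec:
  storageClassName: standard      # the cloud provider's class; assumption
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi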
kubectl get deploy nginx-deploy -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/