Creating a PVC automatically provisions a PV for it to bind to; the PV is what actually connects to the storage.
On what basis? There is a template for it → the StorageClass (SC).
An administrator creates the SC in advance; users just specify the storage class when creating a PVC.
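The "template" here is a StorageClass object; a minimal sketch (the provisioner name matches the NFS provisioner used later in these notes; reclaimPolicy is an assumption):

```yaml
# Created once by the administrator; users reference it by name from their PVCs
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner  # the plugin that creates PVs on demand
reclaimPolicy: Delete  # delete the backing volume when the PVC is deleted
```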
https://rook.io/
Rook: open-source storage for Kubernetes.
It has prerequisites:
since it is an actual storage implementation, each node needs a spare disk for it to use.
Vagrantfile
# Define VM
config.vm.define "k8s-node1" do |centos|
  centos.vm.box = "ubuntu/focal64"
  centos.vm.hostname = "k8s-node1"
  centos.vm.network "private_network", ip: "192.168.100.100"
  centos.vm.provider "virtualbox" do |vb|
    vb.name = "k8s-node1"
    vb.cpus = 2
    vb.memory = 4000
    unless File.exist?('./.disk/ceph1.vdi') # create the disk if it does not exist yet
      vb.customize ['createmedium', 'disk', '--filename', './.disk/ceph1.vdi', '--size', 10240]
    end
    vb.customize ['storageattach', :id, '--storagectl', 'SCSI', '--port', 2, '--device', 0, '--type', 'hdd', '--medium', './.disk/ceph1.vdi']
  end
end
Before running vagrant up:
vagrant snapshot save before-rook
creates a snapshot. To restore it:
vagrant snapshot restore before-rook
Ceph is a separate open-source project.
https://ceph.io/en/
It provides file, block, and object storage → integrated storage.
https://rook.io/docs/rook/latest/Getting-Started/quickstart/
git clone --single-branch --branch v1.9.3 https://github.com/rook/rook.git
The master branch is the in-development version, so it is risky to use.
cd rook/deploy/examples — all the required objects are here. You would normally edit the YAML, but since this is a test, apply it as-is:
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
This creates the rook-ceph namespace.
vagrant@k8s-node1 ~/rook/deploy/examples ➦ 4a9078091 kubectl get ns
NAME STATUS AGE
...
rook-ceph Active 27s
operator → a pod that exists to install rook-ceph. When "operator" appears in a pod's name, it is not the main application but software whose job is to install/manage other software.
vagrant@k8s-node1 ~/rook/deploy/examples ➦ 4a9078091 kubectl get po -n rook-ceph
NAME READY STATUS RESTARTS AGE
rook-ceph-operator-7c89b8d585-rm7mk 0/1 ContainerCreating 0 38s
It must finish in the Running state.
vagrant@k8s-node1 ~/rook/deploy/examples ➦ 4a9078091 kubectl get po -n rook-ceph
NAME READY STATUS RESTARTS AGE
rook-ceph-operator-7c89b8d585-rm7mk 1/1 Running 0 3m45s
Once it is Running:
kubectl create -f cluster.yaml # installs the actual Ceph cluster
cluster.yaml: for clusters with 3 or more nodes
cluster-on-pvc.yaml: for cloud environments such as AWS/GCP
cluster-test.yaml: for a single-node cluster
kubectl -n rook-ceph get pod
Running this shows the pods being created by the operator.
There should be 3 mons (a, b, c), 3 OSDs (0, 1, 2), and 1 mgr.
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-provisioner-d77bb49c6-n5tgs 5/5 Running 0 140s
csi-cephfsplugin-provisioner-d77bb49c6-v9rvn 5/5 Running 0 140s
csi-cephfsplugin-rthrp 3/3 Running 0 140s
csi-rbdplugin-hbsm7 3/3 Running 0 140s
csi-rbdplugin-provisioner-5b5cd64fd-nvk6c 6/6 Running 0 140s
csi-rbdplugin-provisioner-5b5cd64fd-q7bxl 6/6 Running 0 140s
rook-ceph-crashcollector-minikube-5b57b7c5d4-hfldl 1/1 Running 0 105s
rook-ceph-mgr-a-64cd7cdf54-j8b5p 1/1 Running 0 77s
rook-ceph-mon-a-694bb7987d-fp9w7 1/1 Running 0 105s
rook-ceph-mon-b-856fdd5cb9-5h2qk 1/1 Running 0 94s
rook-ceph-mon-c-57545897fc-j576h 1/1 Running 0 85s
rook-ceph-operator-85f5b946bd-s8grz 1/1 Running 0 92m
rook-ceph-osd-0-6bb747b6c5-lnvb6 1/1 Running 0 23s
rook-ceph-osd-1-7f67f9646d-44p7v 1/1 Running 0 24s
rook-ceph-osd-2-6cd4b776ff-v4d68 1/1 Running 0 25s
rook-ceph-osd-prepare-node1-vx2rz 0/2 Completed 0 60s
rook-ceph-osd-prepare-node2-ab3fd 0/2 Completed 0 60s
rook-ceph-osd-prepare-node3-w4xyz 0/2 Completed 0 60s
..... My machine's CPU couldn't keep up, so I gave up on Rook... lol
Looking at storage classes:
https://kubernetes.io/ko/docs/concepts/storage/storage-classes/#%ED%94%84%EB%A1%9C%EB%B9%84%EC%A0%80%EB%84%88
The table marks which provisioners support dynamic provisioning.
NFS (Ganesha) requires separate setup → use the subdir external provisioner instead:
https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git
cd nfs-subdir-external-provisioner/deploy
kubectl create -f rbac.yaml
deployment.yaml
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 # the pod that provides NFS provisioning
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.100.100
            - name: NFS_PATH
              value: /nfsvolume # export path on the actual NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.100.100
            path: /nfsvolume
kubectl create -f deployment.yaml
kubectl create -f class.yaml # the storage class we will use
Since this is not a cloud environment, VOLUMEBINDINGMODE is Immediate.
vagrant@k8s-node1 ~/nfs-subdir-external-provisioner/deploy master ± kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 76s
mypvc.yaml
vi mypvc.yaml
metadata:
  name: mypvc-dynamic
...
  storageClassName: 'nfs-client' # specify the storage class name
kubectl get pv,pvc
sudo ls /nfsvolume shows that a directory has been created,
named <namespace>-<pvc-name>-<pv-name>; the PVC's data is stored in this directory.
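A full version of the mypvc.yaml fragment above might look like this (the size and access mode are assumptions; everything else comes from the notes):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc-dynamic
spec:
  storageClassName: 'nfs-client'  # must match the class created from class.yaml
  accessModes:
    - ReadWriteMany               # NFS exports can be mounted read-write by many nodes
  resources:
    requests:
      storage: 1Gi                # assumed size for this demo
```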
class.yaml
Add annotations:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
kubectl create -f class.yaml
Or enable the annotation on an existing class:
kubectl annotate sc nfs-client storageclass.kubernetes.io/is-default-class="true"
"(default)" then appears after the class name in kubectl get sc.
To disable it, append a "-" to the end of the annotation key:
kubectl annotate sc nfs-client storageclass.kubernetes.io/is-default-class-
Because a default class is now set, dynamic provisioning works even when mypvc.yaml does not specify a storageClassName:
vi mypvc.yaml
metadata:
  name: mypvc-dynamic
...
  #storageClassName: 'nfs-client' # not needed; the default storage class is applied
vagrant@k8s-node1 ~ kubectl api-resources | grep deployment
deployments deploy apps/v1 true Deployment
vagrant@k8s-node1 ~ kubectl explain deploy.spec
KIND: Deployment
VERSION: apps/v1
RESOURCE: spec <Object>
FIELDS:
replicas <integer>
Number of replicas, as an integer.
revisionHistoryLimit <integer>
The number of old ReplicaSets to retain to allow rollback. This is a
pointer to distinguish between explicit zero and not specified. Defaults to
10.
selector <Object> -required-
selector: supports matchLabels and matchExpressions.
strategy <Object>
The deployment strategy to use to replace existing pods with new ones.
template <Object> -required-
Template describes the pods that will be created.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: myweb
        image: ghcr.io/c1t1d0s7/go-myweb:v1
        ports:
        - containerPort: 8080
The Deployment creates a ReplicaSet, and the ReplicaSet creates the pods. We declare only the Deploy, and it builds both the RS and the pods.
Resources supported by kubectl rollout:
Valid resource types include:
* deployments
* daemonsets
* statefulsets
Available Commands:
history View rollout history
pause Mark the provided resource as paused
restart Restart a resource
resume Resume a paused resource
status Show the status of the rollout
undo Undo a previous rollout
apiVersion: v1
kind: Service
metadata:
  name: myweb-svc-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
kubectl rollout status deploy myweb-deploy
deployment "myweb-deploy" successfully rolled out
kubectl rollout history deploy myweb-deploy
REVISION CHANGE-CAUSE
1 <none>
Besides modifying the YAML, edit, and patch, there is another way to change the image:
kubectl set image deployments myweb-deploy myweb=ghcr.io/c1t1d0s7/go-myweb:v2.0 --record
--record: saves the command in the rollout history.
If you modify the YAML file instead of using set, you cannot use --record, so the history will not show why a rollout happened → record the reason with an annotation instead:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb-deploy
  annotations:
    kubernetes.io/change-cause: "Change Go Myweb version from 3 to 4"
...
kubectl apply -f myweb-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb-deploy
  annotations:
    kubernetes.io/change-cause: "Change Go Myweb version from 3 to 4"
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: myweb
        image: ghcr.io/c1t1d0s7/go-myweb:v4.0
        ports:
        - containerPort: 8080
While a rollout is transitioning (e.g. right after an undo), kubectl rollout status shows the conversion in progress.
kubectl rollout undo --to-revision=<number> moves to a specific revision.
pods.spec.containers.env
apiVersion: v1
kind: Pod
metadata:
  name: myweb-env
spec:
  containers:
  - name: myweb
    image: ghcr.io/c1t1d0s7/go-myweb:alpine
    env:
    - name: MESSAGE
      value: "Customized Hello World"
ConfigMap: an API object used to store non-confidential data in key-value pairs.
Uses: environment variables, command-line arguments, or configuration files in a volume.
mymessage.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mymessage
data:
  MESSAGE: Customized Hello ConfigMap
myweb-env.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myweb-env
spec:
  containers:
  - name: myweb
    image: ghcr.io/c1t1d0s7/go-myweb:alpine
    envFrom:
    - configMapRef:
        name: mymessage
apiVersion: v1
kind: Pod
metadata:
  name: myweb-env
spec:
  containers:
  - name: myweb
    image: ghcr.io/c1t1d0s7/go-myweb:alpine
    env:
    - name: MESSAGE          # env var name (was missing in the original fragment)
      valueFrom:
        configMapKeyRef:
          name: mymessage
          key: MESSAGE
myweb-cm-vol.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myweb-cm-vol
spec:
  containers:
  - name: myweb
    image: ghcr.io/c1t1d0s7/go-myweb:alpine
    volumeMounts:
    - name: cmvol
      mountPath: /myvol
  volumes:
  - name: cmvol
    configMap:
      name: mymessage
Secret: value --base64--> encoded data
This is encoding, not encryption.
For real secret management, integrate HashiCorp Vault or AWS KMS.
Opaque → arbitrary user-defined data
mydata.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mydata
type: Opaque
data:
  id: YWRtaW4K # values encoded with the base64 command
  pwd: UEBzc3cwcmQK
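The encoded values above can be produced (and checked) with the base64 command; note that echo appends a trailing newline, which is why these encodings end in "K":

```shell
# Encode the id and password used in mydata.yaml
echo 'admin' | base64       # -> YWRtaW4K
echo 'P@ssw0rd' | base64    # -> UEBzc3cwcmQK

# Decode to verify
echo 'YWRtaW4K' | base64 -d # -> admin
```

Use `echo -n` (or `printf`) if you want to encode the value without the trailing newline.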
apiVersion: v1
kind: Pod
metadata:
  name: myweb-secret
spec:
  containers:
  - name: myweb
    image: ghcr.io/c1t1d0s7/go-myweb:alpine
    envFrom:
    - secretRef:
        name: mydata
apiVersion: v1
kind: Pod
metadata:
  name: myweb-env
spec:
  containers:
  - name: myweb
    image: ghcr.io/c1t1d0s7/go-myweb:alpine
    env:
    - name: ID               # env var name (was missing in the original fragment)
      valueFrom:
        secretKeyRef:
          name: mydata
          key: id
apiVersion: v1
kind: Pod
metadata:
  name: myweb-sec-vol
spec:
  containers:
  - name: myweb
    image: ghcr.io/c1t1d0s7/go-myweb:alpine
    volumeMounts:
    - name: secvol
      mountPath: /secvol
  volumes:
  - name: secvol
    secret:
      secretName: mydata
Nginx
Secret type:
kubernetes.io/tls
mkdir x509 && cd x509
Private key:
openssl genrsa -out nginx-tls.key 2048
Public key:
openssl rsa -in nginx-tls.key -pubout -out nginx-tls
CSR:
openssl req -new -key nginx-tls.key -out nginx-tls.csr
Self-signed certificate:
openssl req -x509 -days 3650 -key nginx-tls.key -in nginx-tls.csr -out nginx-tls.crt
rm nginx-tls nginx-tls.csr
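The same chain can be run non-interactively by passing -subj to skip the CSR prompts; a sketch (the CN value and the /tmp working directory are assumptions for this demo):

```shell
mkdir -p /tmp/x509 && cd /tmp/x509

# Private key
openssl genrsa -out nginx-tls.key 2048

# CSR with a fixed subject (no interactive prompts)
openssl req -new -key nginx-tls.key -subj "/CN=myapp.example.com" -out nginx-tls.csr

# Self-signed certificate issued from the CSR
openssl req -x509 -days 3650 -key nginx-tls.key -in nginx-tls.csr -out nginx-tls.crt

# Inspect the result
openssl x509 -in nginx-tls.crt -noout -subject
```

The subject printed at the end should show CN = myapp.example.com, the server_name used in the nginx config below.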
ConfigMap
mkdir conf && cd conf
nginx-tls.conf
server {
    listen 80;
    listen 443 ssl;
    server_name myapp.example.com;
    ssl_certificate /etc/nginx/ssl/tls.crt;
    ssl_certificate_key /etc/nginx/ssl/tls.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    location / {
        root /usr/share/nginx/html;
        index index.html;
    }
}
Create the ConfigMap
nginx-tls-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-tls-config
data:
  nginx-tls.conf: |
    server {
        listen 80;
        listen 443 ssl;
        server_name myapp.example.com;
        ssl_certificate /etc/nginx/ssl/tls.crt;
        ssl_certificate_key /etc/nginx/ssl/tls.key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        location / {
            root /usr/share/nginx/html;
            index index.html;
        }
    }
Create the Secret
nginx-tls-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: nginx-tls-secret
type: kubernetes.io/tls
data:
  # base64 x509/nginx-tls.crt -w 0
  tls.crt: |
    LS0tLS1C...
  # base64 x509/nginx-tls.key -w 0
  tls.key: |
    LS0tLS1C...
(Equivalently: kubectl create secret tls nginx-tls-secret --cert=x509/nginx-tls.crt --key=x509/nginx-tls.key)
Create the Pod
nginx-https-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-https-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: nginx-config
      mountPath: /etc/nginx/conf.d
    - name: nginx-certs
      mountPath: /etc/nginx/ssl
  volumes:
  - name: nginx-config
    configMap:
      name: nginx-tls-config
  - name: nginx-certs
    secret:
      secretName: nginx-tls-secret
Create the Service
nginx-svc-lb.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
Test
curl -k https://192.168.100.X
-k / --insecure: connect to the HTTPS site without verifying the SSL certificate