[CloudaNet-Cilium-Study [Season 1]] Week 4 - Svc LB-IPAM

Jinwoong · August 10, 2025


Service LB-IPAM

Cilium has a built-in capability (LB-IPAM) to allocate External IPs for LoadBalancer-type Services and load-balance them itself.

1. Create a Cilium IP pool to define the external-IP (EXTERNAL-IP) range.
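The manifest that produced the `created` output below isn't shown in the capture; a minimal sketch, assuming a five-address range consistent with the `IPS AVAILABLE 5` output and the `192.168.10.211` address allocated later:

```shell
cat << EOF | kubectl apply -f -
apiVersion: cilium.io/v2        # v2alpha1 on Cilium releases before 1.16
kind: CiliumLoadBalancerIPPool
metadata:
  name: cilium-lb-ippool
spec:
  blocks:                       # assumed range: 5 addresses on the nodes' L2 segment
  - start: 192.168.10.211
    stop: 192.168.10.215
EOF
```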

ciliumloadbalancerippool.cilium.io/cilium-lb-ippool created
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get CiliumLoadBalancerIPPool -A
NAME               DISABLED   CONFLICTING   IPS AVAILABLE   AGE
cilium-lb-ippool   false      False         5               8s

Change the webpod Service type to LoadBalancer

(⎈|HomeLab:N/A) root@k8s-ctr:~# k get svc webpod

NAME     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
webpod   ClusterIP   10.96.17.34   <none>        80/TCP    15h

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl patch svc webpod -p '{"spec":{"type":"LoadBalancer"}}'

service/webpod patched

(⎈|HomeLab:N/A) root@k8s-ctr:~# k get svc webpod

NAME     TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
webpod   LoadBalancer   10.96.17.34   192.168.10.211   80:30744/TCP   15h

Verify the call from a K8s node

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc webpod -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

192.168.10.211

(⎈|HomeLab:N/A) root@k8s-ctr:~# LBIP=$(kubectl get svc webpod -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
(⎈|HomeLab:N/A) root@k8s-ctr:~# curl -s $LBIP

Hostname: webpod-697b545f57-f9zp6
IP: 127.0.0.1
IP: ::1
IP: 172.20.0.37
IP: fe80::34fc:43ff:fe51:6cf4
RemoteAddr: 172.20.0.229:44422
GET / HTTP/1.1
Host: 192.168.10.211
User-Agent: curl/8.5.0
Accept: */*

Verify the call from the router node outside the K8s cluster

root@router:/tmp# LBIP=192.168.10.211
root@router:/tmp# curl --connect-timeout 1 $LBIP

curl: (28) Failed to connect to 192.168.10.211 port 80 after 1002 ms: Timeout was reached
root@router:/tmp#  arping -i eth1 $LBIP -c 1
ARPING 192.168.10.211
Timeout
--- 192.168.10.211 statistics ---
1 packets transmitted, 0 packets received, 100% unanswered (0 extra)

root@router:/tmp# arp -a

? (192.168.10.211) at <incomplete> on eth1

Use Cilium L2 Announcements so that a node answers ARP for the LB IP configured in Cilium with its MAC address

Upgrade the configuration

helm upgrade cilium cilium/cilium --namespace kube-system --version 1.18.0 --reuse-values \
--set l2announcements.enabled=true && watch -d kubectl get pod -A

kubectl rollout restart -n kube-system ds/cilium

Verify

kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg config --all | grep EnableL2Announcements

EnableL2Announcements             : true

cilium config view | grep enable-l2

enable-l2-announcements                           true
enable-l2-neigh-discovery                         true

Policy: select the Services and nodes (excluding the control plane) that will advertise via ARP -> run arping again right after applying!

Constraint: in L2 ARP mode an LB IPPool is only valid within the same network segment -> this is why k8s-w0 is excluded; if it is included, announcements fail whenever it is elected leader!
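A quick way to see the constraint (a sketch; the node names and the `eth1` interface are this lab's assumptions): compare node addresses against the pool's 192.168.10.x segment — a node on a different subnet cannot answer ARP for the LB IP.

```shell
# Node internal IPs: k8s-w0 sits outside the 192.168.10.x segment used by the pool
kubectl get nodes -o wide

# On a candidate node, check which interface carries that segment
ip -4 addr show eth1
```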

cat << EOF | kubectl apply -f -
apiVersion: "cilium.io/v2alpha1"  # not v2
kind: CiliumL2AnnouncementPolicy
metadata:
  name: policy1
spec:
  serviceSelector:
    matchLabels:
      app: webpod
  nodeSelector:
    matchExpressions:
      - key: kubernetes.io/hostname
        operator: NotIn
        values:
          - k8s-w0
  interfaces:
  - ^eth[1-9]+
  externalIPs: true
  loadBalancerIPs: true
EOF

Verify the response on the router

root@router:~# arping -i eth1 $LBIP -c 100000

ARPING 192.168.10.211
60 bytes from 08:00:27:d4:e9:20 (192.168.10.211): index=0 time=379.731 usec
60 bytes from 08:00:27:d4:e9:20 (192.168.10.211): index=1 time=998.079 msec
60 bytes from 08:00:27:d4:e9:20 (192.168.10.211): index=2 time=286.644 usec
60 bytes from 08:00:27:d4:e9:20 (192.168.10.211): index=3 time=999.340 msec
60 bytes from 08:00:27:d4:e9:20 (192.168.10.211): index=4 time=198.641 usec
60 bytes from 08:00:27:d4:e9:20 (192.168.10.211): index=5 time=165.457 usec

Check which node is the leader for the webpod external Service

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system get lease | grep "cilium-l2announce"

cilium-l2announce-default-webpod       k8s-w1                                                                      9m15s

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system get lease/cilium-l2announce-default-webpod -o yaml | yq

{
  "apiVersion": "coordination.k8s.io/v1",
  "kind": "Lease",
  "metadata": {
    "creationTimestamp": "2025-08-10T07:33:15Z",
    "name": "cilium-l2announce-default-webpod",
    "namespace": "kube-system",
    "resourceVersion": "67384",
    "uid": "9995768f-80a1-4cb0-804e-d215bd7cb45c"
  },
  "spec": {
    "acquireTime": "2025-08-10T07:33:15.723486Z",
    "holderIdentity": "k8s-w1",
    "leaseDurationSeconds": 15,
    "leaseTransitions": 0,
    "renewTime": "2025-08-10T07:43:59.276353Z"
  }
}
  • Confirmed by "holderIdentity": "k8s-w1"

(⎈|HomeLab:N/A) root@k8s-ctr:~# export CILIUMPOD1=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w1 -o jsonpath='{.items[0].metadata.name}')
(⎈|HomeLab:N/A) root@k8s-ctr:~# echo $CILIUMPOD1
cilium-6jvrp
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system $CILIUMPOD1 -- cilium-dbg shell -- db/show l2-announce

IP               NetworkInterface
192.168.10.211   eth1
  • Confirms eth1 registered for the LB IP inside the cilium pod on w1!

Test from the external node

root@router:~# while true; do curl -s --connect-timeout 1 $LBIP | grep RemoteAddr; sleep 0.1; done

RemoteAddr: 172.20.1.149:35294
RemoteAddr: 192.168.10.200:35300
RemoteAddr: 192.168.10.200:35316
RemoteAddr: 172.20.1.149:35326
RemoteAddr: 192.168.10.200:35328
RemoteAddr: 192.168.10.200:35338
RemoteAddr: 172.20.1.149:35342

It works, but RemoteAddr keeps changing. All traffic enters through the single leader node (an inherent limit of the ARP-based approach), and requests the leader forwards to pods on other nodes are SNATed to the node IP, so the pod sees a node address instead of the client's.
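If preserving the real client address matters, `externalTrafficPolicy: Local` avoids the SNAT hop — though with L2 announcements only pods on the current leader node would then receive traffic. A sketch, not applied in this lab:

```shell
kubectl patch svc webpod -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```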

Failover test: reboot the leader node k8s-w1 and watch the state

sshpass -p 'vagrant' ssh vagrant@k8s-w1 sudo reboot

  • Result: traffic does drop for a few seconds.. couldn't capture it.
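The few seconds of disruption match the lease shown earlier: with `leaseDurationSeconds: 15`, a surviving node can take over the announcement within roughly that window. One way to watch the handover during the reboot (a sketch):

```shell
# HOLDER should flip from k8s-w1 to another eligible node
kubectl -n kube-system get lease cilium-l2announce-default-webpod -w

# On the router, the MAC answering for the LB IP should change as well
arping -i eth1 192.168.10.211 -c 5
```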