Networking Model
- Every Pod must have its own IP address.
- Every Pod must be able to communicate with every other Pod on the same node.
- Every Pod must be able to communicate with every Pod on other nodes using that same IP address, without NAT.
- It does not matter which network or subnet those IPs belong to (a quick verification sketch follows this list).
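- A minimal sketch to verify these rules (pod names pod-a/pod-b are hypothetical; assumes a worker node named node01):
controlplane ~ ➜ kubectl run pod-a --image=busybox -- sleep 3600
controlplane ~ ➜ kubectl run pod-b --image=busybox --overrides='{"spec":{"nodeName":"node01"}}' -- sleep 3600
controlplane ~ ➜ kubectl get pods -o wide                     # every Pod gets its own IP
controlplane ~ ➜ kubectl exec pod-a -- ping -c 1 <pod-b-IP>   # cross-node, same IP, no NAT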
# Each node's IP can be checked here (INTERNAL-IP)
controlplane ~ ➜ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
controlplane Ready control-plane 3m25s v1.31.0 192.168.122.177 <none> Ubuntu 22.04.5 LTS 5.15.0-1071-gcp containerd://1.6.26
node01 Ready <none> 2m54s v1.31.0 192.168.212.136 <none> Ubuntu 22.04.4 LTS 5.15.0-1075-gcp containerd://1.6.26
controlplane ~ ➜ ip addr
3: eth0@if89112: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1410 qdisc noqueue state UP group default
link/ether d6:a5:3e:e8:94:47 brd ff:ff:ff:ff:ff:ff link-netnsid 0 # MAC address
inet 192.168.122.177/32 scope global eth0 # IP
valid_lft forever preferred_lft forever
inet6 fe80::d4a5:3eff:fee8:9447/64 scope link
valid_lft forever preferred_lft forever
- Checking the host address range
6699: eth0@if6700: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:c0:06:fb:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.6.251.3/24 brd 192.6.251.255 scope global eth0
valid_lft forever preferred_lft forever
# 192.6.251.0/24: the remaining 8 bits are used for host addresses.
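- To sanity-check the subnet math, the standard ipaddress module can be used (assuming python3 is available on the node):
controlplane ~ ➜ python3 -c "import ipaddress; n = ipaddress.ip_network('192.6.251.0/24'); print(n.num_addresses, list(n.hosts())[0], list(n.hosts())[-1])"
256 192.6.251.1 192.6.251.254   # 2^8 = 256 addresses, .1 to .254 usable for hosts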
- Checking interfaces/bridges
- In a container environment, a bridge network connects virtual network interfaces so that multiple containers on the same host can communicate with one another. Each container attaches to the bridge through a virtual Ethernet (veth) interface and communicates over it.
- Bridge networks are typically used for communication between containers on a single host.
controlplane ~ ➜ ip addr show type bridge
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1360 qdisc noqueue state UP group default qlen 1000
link/ether aa:ed:47:51:b1:e6 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/24 brd 172.17.0.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::a8ed:47ff:fe51:b1e6/64 scope link
valid_lft forever preferred_lft forever
# cni0 is in use and its state is UP
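- To see which container veth interfaces are attached to the bridge (interface names will differ per cluster):
controlplane ~ ➜ ip link show master cni0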
- ip route
controlplane ~ ➜ ip route
default via 169.254.1.1 dev eth0
169.254.1.1 dev eth0 scope link
172.17.0.0/24 dev cni0 proto kernel scope link src 172.17.0.1
172.17.1.0/24 via 172.17.1.0 dev flannel.1 onlink
# Pinging an external site such as google.com uses the default route.
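- ip route get shows exactly which route the kernel picks for a given destination (output illustrative):
controlplane ~ ➜ ip route get 8.8.8.8
8.8.8.8 via 169.254.1.1 dev eth0 src 192.168.122.177 uid 0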
- Checking the ports in use on the control plane
controlplane ~ ➜ netstat -npl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 578/systemd-resolve
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 3251/etcd
tcp 0 0 127.0.0.1:2381 0.0.0.0:* LISTEN 3251/etcd
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 973/ttyd
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 4629/kube-proxy
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 4203/kubelet
tcp 0 0 127.0.0.1:10259 0.0.0.0:* LISTEN 3618/kube-scheduler
tcp 0 0 127.0.0.1:10257 0.0.0.0:* LISTEN 3609/kube-controlle
controlplane ~ ➜ netstat -npl | grep -i scheduler
tcp 0 0 127.0.0.1:10259 0.0.0.0:* LISTEN 3618/kube-scheduler
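- ss is the modern replacement for netstat and gives the same answer (output illustrative):
controlplane ~ ➜ ss -tnlp | grep -i scheduler
LISTEN 0      4096       127.0.0.1:10259      0.0.0.0:*    users:(("kube-scheduler",pid=3618,fd=3))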
- Checking the ports etcd is listening on: 2379/2380/2381
controlplane ~ ➜ netstat -npa | grep -i etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 3251/etcd
tcp 0 0 127.0.0.1:2381 0.0.0.0:* LISTEN 3251/etcd
tcp 0 0 192.168.122.177:2379 0.0.0.0:* LISTEN 3251/etcd
tcp 0 0 192.168.122.177:2380 0.0.0.0:* LISTEN 3251/etcd
tcp 0 0 127.0.0.1:2379 127.0.0.1:56598 ESTABLISHED 3251/etcd
tcp 0 0 127.0.0.1:2379 127.0.0.1:56712 ESTABLISHED 3251/etcd
tcp 0 0 127.0.0.1:2379 127.0.0.1:56552 ESTABLISHED 3251/etcd
tcp 0 0 127.0.0.1:2379 127.0.0.1:56564 ESTABLISHED 3251/etcd
tcp 0 0 127.0.0.1:2379 127.0.0.1:56834 ESTABLISHED 3251/etcd
tcp 0 0 127.0.0.1:2379 127.0.0.1:57046 ESTABLISHED 3251/etcd
# 2379: the etcd port that all control plane components connect to
# 2380: used solely for etcd peer-to-peer connectivity
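- The same split is visible in the etcd static Pod manifest: client URLs on 2379, peer URLs on 2380 (output illustrative):
controlplane ~ ➜ grep -E "listen-(client|peer)-urls" /etc/kubernetes/manifests/etcd.yaml
    - --listen-client-urls=https://127.0.0.1:2379,https://192.168.122.177:2379
    - --listen-peer-urls=https://192.168.122.177:2380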
- Container runtime: the software that actually runs and manages containers
controlplane ~ ➜ ps -aux | grep -i kubelet | grep container-runtime
root 4152 0.0 0.1 2931236 90860 ? Ssl 08:18 0:03 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10
# Container runtime endpoint: the interface/address used to interact with the container runtime
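- crictl talks to the runtime through this same endpoint (assuming crictl is installed):
controlplane ~ ➜ crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps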
- Checking the Service cluster IP range
# This can be found in the kube-apiserver manifest
controlplane /etc/kubernetes/manifests ➜ cat kube-apiserver.yaml
...
--service-cluster-ip-range=10.96.0.0/12
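- A quick one-liner for the same check:
controlplane ~ ➜ grep service-cluster-ip-range /etc/kubernetes/manifests/kube-apiserver.yaml
    - --service-cluster-ip-range=10.96.0.0/12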
CNI
- A standard interface for configuring container networking on orchestration platforms
- CNI configuration directory: /etc/cni/net.d/
- https://kubernetes.io/docs/concepts/cluster-administration/addons/
# Path holding the binaries of all supported CNI plugins: /opt/cni/bin
controlplane /opt/cni/bin ➜ ls
bandwidth dummy host-device LICENSE portmap sbr tuning
bridge firewall host-local loopback ptp static vlan
dhcp flannel ipvlan macvlan README.md tap vrf
# The CNI plugin used by this Kubernetes cluster: /etc/cni/net.d
controlplane ~ ➜ ls /etc/cni/net.d
10-flannel.conflist # flannel is in use
controlplane /etc/cni/net.d ➜ cat 10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
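- With flannel, the subnet allocated to the node is typically also written to /run/flannel/subnet.env (path and values illustrative):
controlplane ~ ➜ cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.17.0.0/16
FLANNEL_SUBNET=172.17.0.1/24
FLANNEL_MTU=1360
FLANNEL_IPMASQ=true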
Practice]
- Checking the Pod status
controlplane ~ ➜ kubectl get pods
NAME READY STATUS RESTARTS AGE
app 0/1 ContainerCreating 0 2m21s
# The Pod was created but is not reaching Running.
controlplane ~ ➜ kubectl describe pod app
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m43s default-scheduler Successfully assigned default/app to controlplane
Warning FailedCreatePodSandBox 2m42s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "3a05c1130abde21f375598b24d78be3e7b3090c56fe65bb2f5427d3d31de6a20": plugin type="weave-net" name="weave" failed (add): unable to allocate IP address: Post "http://127.0.0.1:6784/ip/3a05c1130abde21f375598b24d78be3e7b3090c56fe65bb2f5427d3d31de6a20": dial tcp 127.0.0.1:6784: connect: connection refused
Normal SandboxChanged 7s (x13 over 2m42s) kubelet Pod sandbox changed, it will be killed and re-created.
# The network has not been configured (no CNI plugin is installed yet)
controlplane ~/weave ➜ kubectl apply -f https://reweave.azurewebsites.net/k8s/v1.29/net.yaml
serviceaccount/weave-net unchanged
clusterrole.rbac.authorization.k8s.io/weave-net unchanged
clusterrolebinding.rbac.authorization.k8s.io/weave-net unchanged
role.rbac.authorization.k8s.io/weave-net unchanged
rolebinding.rbac.authorization.k8s.io/weave-net unchanged
daemonset.apps/weave-net configured
controlplane ~/weave ➜ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-77d6fd4654-jkfzv 1/1 Running 0 78m
coredns-77d6fd4654-zxk6n 1/1 Running 0 78m
etcd-controlplane 1/1 Running 0 78m
kube-apiserver-controlplane 1/1 Running 0 78m
kube-controller-manager-controlplane 1/1 Running 0 78m
kube-proxy-727lg 1/1 Running 0 78m
kube-scheduler-controlplane 1/1 Running 0 78m
weave-net-p78vm 2/2 Running 0 11s
- Checking Weave Net
controlplane /etc/cni/net.d ➜ ls
10-weave.conflist # the installed network solution is Weave Net
controlplane /etc/cni/net.d ➜ cat 10-weave.conflist
{
  "cniVersion": "0.3.0",
  "name": "weave",
  "plugins": [
    {
      "name": "weave",
      "type": "weave-net",
      "hairpinMode": true
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true},
      "snat": true
    }
  ]
}
controlplane /etc/cni/net.d ➜ kubectl get pods -n kube-system | grep weave
weave-net-6rrv6 2/2 Running 1 (87m ago) 87m
weave-net-z2m9p 2/2 Running 0 86m
controlplane /etc/cni/net.d ➜ kubectl get pods -n kube-system -o wide| grep weave
weave-net-6rrv6 2/2 Running 1 (87m ago) 87m 192.5.113.6 controlplane <none> <none>
weave-net-z2m9p 2/2 Running 0 87m 192.5.113.8 node01 <none> <none>
# One weave-net Pod is assigned to each node.
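- This is because weave-net runs as a DaemonSet, which schedules one Pod per node (output illustrative):
controlplane ~ ➜ kubectl get daemonset weave-net -n kube-system
NAME        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
weave-net   2         2         2       2            2           <none>          87m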
- Weave Net has built-in IPAM that assigns IP addresses automatically, but customizing specific settings requires editing the CNI configuration, for example:
{
  "name": "weave",
  "type": "weave-net",
  "ipam": {
    "type": "weave-ipam",   # specifies the type of IPAM plugin
    "subnet": "10.32.0.0/12"
  }
}
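- In practice, Weave Net's range is more commonly set through the IPALLOC_RANGE environment variable on the weave-net DaemonSet; a sketch of how to read it (assumes your kubectl supports jsonpath filters):
controlplane ~ ➜ kubectl -n kube-system get ds weave-net -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="IPALLOC_RANGE")].value}'
10.244.0.0/16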
- Checking the ipalloc-range
# check the bridge first
controlplane /etc/cni/net.d ➜ ip addr show type bridge
4: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default qlen 1000
link/ether 52:3e:23:20:ae:c7 brd ff:ff:ff:ff:ff:ff
inet 10.244.0.1/16 brd 10.244.255.255 scope global weave
valid_lft forever preferred_lft forever
controlplane /etc/cni/net.d ➜ kubectl logs -n kube-system weave-net-z2m9p
Defaulted container "weave" out of: weave, weave-npc, weave-init (init)
INFO: 2025/02/07 09:04:31.416680 Command line options: map[conn-limit:200 datapath:datapath db-prefix:/weavedb/weave-net docker-api: expect-npc:true http-addr:127.0.0.1:6784 ipalloc-init:consensus=1 ipalloc-range:10.244.0.0/16 metrics-addr:0.0.0.0:6782 name:62:41:33:f4:0b:3c nickname:node01 no-dns:true no-masq-local:true port:6783]
# The ipalloc-range can also be read from the logs (ipalloc-range:10.244.0.0/16).
- Checking node01's default gateway
controlplane /etc/cni/net.d ➜ ssh node01
node01 ~ ➜ ip route
default via 172.25.0.1 dev eth1
10.244.0.0/16 dev weave proto kernel scope link src 10.244.192.0 # here!
172.25.0.0/24 dev eth1 proto kernel scope link src 172.25.0.23
192.5.113.0/24 dev eth0 proto kernel scope link src 192.5.113.8
controlplane /etc/cni/net.d ➜ cat busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  nodeName: node01   # create the Pod on node01
  containers:
  - args:
    - sleep
    - "1000"
    image: busybox
    name: busybox
controlplane /etc/cni/net.d ➜ kubectl apply -f busybox.yaml
pod/busybox created
controlplane /etc/cni/net.d ➜ kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 18s
controlplane /etc/cni/net.d ➜ kubectl exec busybox -- ip route
default via 10.244.192.0 dev eth0 # default gateway
10.244.0.0/16 dev eth0 scope link src 10.244.192.1
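- Cross-check: the Pod's IP falls inside the ipalloc-range 10.244.0.0/16 and matches the src address above (output illustrative):
controlplane /etc/cni/net.d ➜ kubectl get pod busybox -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          40s   10.244.192.1   node01   <none>           <none>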