etcd
etcd is a distributed key-value store. Every object managed by Kubernetes (Nodes, configs, Secrets, Roles, and so on) is stored in etcd.
Losing etcd data can therefore cripple the entire Kubernetes cluster, so a regular backup and restore strategy is essential.
An etcd cluster consists of multiple etcd instances connected to one another; they share data and guarantee consistency and availability through failover and data replication.
The etcd cluster is hosted as static pods on the master node and stores information such as the cluster state.
# ETCDCTL
ETCDCTL (etcdctl) is the command-line client for etcd.
- etcdctl supports API version 2 and version 3; by default it is set to version 2.
- For operations such as backup and restore, ETCDCTL_API must be set to 3.
- export ETCDCTL_API=3
controlplane ~ ➜ export ETCDCTL_API=3
controlplane ~ ➜ etcdctl version
etcdctl version: 3.3.13
API version: 3.3
- Note that each API version uses a different set of commands.
# ETCDCTL version2
etcdctl backup
etcdctl cluster-health
etcdctl mk
etcdctl mkdir
etcdctl set
# ETCDCTL version3
etcdctl snapshot save
etcdctl endpoint health
etcdctl get
etcdctl put
# To set the API version, run the following command
export ETCDCTL_API=3
# certificate file path
--cacert /etc/kubernetes/pki/etcd/ca.crt
--cert /etc/kubernetes/pki/etcd/server.crt
--key /etc/kubernetes/pki/etcd/server.key
kubectl exec etcd-master -n kube-system -- sh -c "ETCDCTL_API=3 etcdctl get / --prefix --keys-only --limit=10 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key"
- listen-client-urls: the addresses on which the etcd server listens for client requests.
- When running etcdctl directly on the node where etcd is installed, this address can be used.
- advertise-client-urls: the addresses advertised to external clients (the address a client uses to connect to the etcd server).
- A snapshot is taken with a client command (etcdctl), and the client connects to the etcd server using the address specified in advertise-client-urls.
- To run etcdctl from a remote node where etcd is not running, use the externally reachable IP specified in advertise-client-urls, as in the sketch below.
- In a multi-node cluster as well, to run a snapshot command against another etcd member, use the IP configured in that member's advertise-client-urls.
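- A minimal sketch of taking a snapshot from a remote node; the target IP 192.168.75.172 comes from advertise-client-urls in the manifest below, and it assumes the etcd certificates have already been copied to the remote node (file name and paths are illustrative).
# run from a node that is NOT the etcd host; the endpoint is the advertised client URL
ETCDCTL_API=3 etcdctl snapshot save /opt/remote-snapshot.db \
--endpoints=https://192.168.75.172:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key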
controlplane /var/lib/kubelet ➜ cd /etc/kubernetes/manifests/
controlplane /etc/kubernetes/manifests ➜ cat etcd.yaml
apiVersion: v1
kind: Pod
...
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://192.168.75.172:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --experimental-initial-corrupt-check=true
    - --experimental-watch-progress-notify-interval=5s
    - --initial-advertise-peer-urls=https://192.168.75.172:2380
    - --initial-cluster=controlplane=https://192.168.75.172:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://127.0.0.1:2379,https://192.168.75.172:2379
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://192.168.75.172:2380
    - --name=controlplane
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
...
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd # mount path inside the etcd container
      name: etcd-certs
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd # hostPath on the control plane host
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd # location where etcd stores all of its data
      type: DirectoryOrCreate
    name: etcd-data
status: {}
# files and folders where etcd stores its data
controlplane /etc/kubernetes/manifests ➜ ls /var/lib/etcd/
member
# certificates used by etcd
controlplane /etc/kubernetes/manifests ➜ ls /etc/kubernetes/pki/etcd
ca.crt healthcheck-client.crt peer.crt server.crt
ca.key healthcheck-client.key peer.key server.key
- If ETCDCTL_API is set to 2, the snapshot command does not work, so use version 3.
controlplane /etc/kubernetes/manifests ➜ etcdctl snapshot
No help topic for 'snapshot'
# works with API version 3
controlplane /etc/kubernetes/manifests ➜ ETCDCTL_API=3 etcdctl snapshot
NAME:
snapshot - Manages etcd node snapshots
USAGE:
etcdctl snapshot <subcommand>
# Prefixing every command with ETCDCTL_API=3 is tedious, so export it instead.
controlplane /etc/kubernetes/manifests ➜ export ETCDCTL_API=3
- Taking a snapshot with etcdctl
# trusted-ca-file, cert-file and key-file are taken from the etcd pod definition.
# ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
# --cacert=<trusted-ca-file> --cert=<cert-file> --key=<key-file> \
# snapshot save <backup-file-location>
#
controlplane /etc/kubernetes/manifests ➜ etcdctl snapshot save --endpoints=127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key /opt/snapshot-pre-boot.db
Snapshot saved at /opt/snapshot-pre-boot.db
controlplane /etc/kubernetes/manifests ➜ ls /opt/snapshot-pre-boot.db
/opt/snapshot-pre-boot.db
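# Optional sanity check of the saved snapshot: snapshot status prints its hash, revision, total keys and total size (standard etcdctl v3 subcommand).
ETCDCTL_API=3 etcdctl snapshot status /opt/snapshot-pre-boot.db -w table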
- Restoring the etcd cluster
# ETCDCTL_API=3 etcdctl --data-dir <data-dir-location> snapshot restore snapshot.db
controlplane /etc/kubernetes/manifests ➜ etcdctl snapshot restore --data-dir /var/lib/etcd-from-backup /opt/snapshot-pre-boot.db
2025-01-09 08:51:32.218641 I | mvcc: restore compact to 2495
2025-01-09 08:51:32.222226 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
# data restored into the new directory
controlplane /etc/kubernetes/manifests ➜ ls /var/lib/etcd-from-backup/
member
# Point the etcd yaml manifest at the new data directory.
controlplane /etc/kubernetes/manifests ➜ vi /etc/kubernetes/manifests/etcd.yaml
    volumeMounts:
    - mountPath: /var/lib/etcd # the container still mounts this path
      name: etcd-data # this name links the mount to the volume below
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd-from-backup # change this to the restored data directory
      type: DirectoryOrCreate
    name: etcd-data
status: {}
# Restart etcd
kubectl delete pod etcd-controlplane -n kube-system
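# Since etcd is a static pod, the kubelet recreates it automatically after the manifest change. One way to confirm it is back up and serving the restored data:
kubectl get pods -n kube-system -w
kubectl get deployments -A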
- Restoring an external etcd server
# copy the backup from student-node to the etcd-server
student-node /opt ➜ scp /opt/cluster2.db etcd-server:/root
cluster2.db 100% 2232KB 102.3MB/s 00:00
# The restore is performed directly on the etcd-server, so the local endpoint https://127.0.0.1 is the one in use
etcd-server ~ ➜ ETCDCTL_API=3 etcdctl snapshot restore /root/cluster2.db --data-dir=/var/lib/etcd-data-new
{"level":"info","ts":1736416857.3161871,"caller":"snapshot/v3_snapshot.go:296","msg":"restoring snapshot","path":"/root/cluster2.db","wal-dir":"/var/lib/etcd-data-new/member/wal","data-dir":"/var/lib/etcd-data-new","snap-dir":"/var/lib/etcd-data-new/member/snap"}
{"level":"info","ts":1736416857.3319352,"caller":"mvcc/kvstore.go:388","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":6000}
{"level":"info","ts":1736416857.3374014,"caller":"membership/cluster.go:392","msg":"added member","cluster-id":"cdf818194e3a8c32","local-member-id":"0","added-peer-id":"8e9e05c52164694d","added-peer-peer-urls":["http://localhost:2380"]}
{"level":"info","ts":1736416857.4211009,"caller":"snapshot/v3_snapshot.go:309","msg":"restored snapshot","path":"/root/cluster2.db","wal-dir":"/var/lib/etcd-data-new/member/wal","data-dir":"/var/lib/etcd-data-new","snap-dir":"/var/lib/etcd-data-new/member/snap"}
# change ownership so the restored data is owned by the etcd user
etcd-server /var/lib ➜ chown -R etcd:etcd /var/lib/etcd-data-new/
# Update the systemd service
etcd-server /var/lib ➜ vi /etc/systemd/system/etcd.service
[Unit]
Description=etcd key-value store
Documentation=https://github.com/etcd-io/etcd
After=network.target
[Service]
User=etcd
Type=notify
ExecStart=/usr/local/bin/etcd \
--name etcd-server \
--data-dir=/var/lib/etcd-data-new \
--cert-file=/etc/etcd/pki/etcd.pem \
--key-file=/etc/etcd/pki/etcd-key.pem \
--peer-cert-file=/etc/etcd/pki/etcd.pem \
--peer-key-file=/etc/etcd/pki/etcd-key.pem \
--trusted-ca-file=/etc/etcd/pki/ca.pem \
--peer-trusted-ca-file=/etc/etcd/pki/ca.pem \
--peer-client-cert-auth \
--client-cert-auth \
--initial-advertise-peer-urls https://192.8.47.20:2380 \
--listen-peer-urls https://192.8.47.20:2380 \
--advertise-client-urls https://192.8.47.20:2379 \
--listen-client-urls https://192.8.47.20:2379,https://127.0.0.1:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster etcd-server=https://192.8.47.20:2380 \
--initial-cluster-state new
Restart=on-failure
RestartSec=5
LimitNOFILE=40000
[Install]
WantedBy=multi-user.target
# reload and restart the etcd service.
etcd-server /var/lib ➜ systemctl daemon-reload
etcd-server /var/lib ➜ systemctl restart etcd
etcd-server /var/lib ➜ systemctl status etcd
● etcd.service - etcd key-value store
Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2025-01-09 10:11:19 UTC; 5s ago
Docs: https://github.com/etcd-io/etcd
# It is recommended to restart controlplane components (e.g. kube-scheduler, kube-controller-manager, kubelet) to ensure that they don't rely on some stale data.
student-node /opt ➜ kubectl delete pods kube-controller-manager-cluster2-controlplane kube-scheduler-cluster2-controlplane -n kube-system
pod "kube-controller-manager-cluster2-controlplane" deleted
pod "kube-scheduler-cluster2-controlplane" deleted
ssh cluster2-controlplane
systemctl restart kubelet
systemctl status kubelet
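# After the components come back up, a quick check from student-node (assuming kubectl is pointed at cluster2) that the restored objects are visible:
kubectl get pods,deployments -A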
- Inspecting the etcd cluster
# external etcd-cluster
student-node ~ ➜ ssh etcd-server
etcd-server ~ ➜ ps -ef | grep -i etcd
etcd 820 1 0 08:45 ? 00:00:41 /usr/local/bin/etcd --name etcd-server --data-dir=/var/lib/etcd-data --cert-file=/etc/etcd/pki/etcd.pem --key-file=/etc/etcd/pki/etcd-key.pem --peer-cert-file=/etc/etcd/pki/etcd.pem --peer-key-file=/etc/etcd/pki/etcd-key.pem --trusted-ca-file=/etc/etcd/pki/ca.pem --peer-trusted-ca-file=/etc/etcd/pki/ca.pem --peer-client-cert-auth --client-cert-auth --initial-advertise-peer-urls https://192.8.47.20:2380 --listen-peer-urls https://192.8.47.20:2380 --advertise-client-urls https://192.8.47.20:2379 --listen-client-urls https://192.8.47.20:2379,https://127.0.0.1:2379 --initial-cluster-token etcd-cluster-1 --initial-cluster etcd-server=https://192.8.47.20:2380 --initial-cluster-state new
root 1046 969 0 09:32 pts/0 00:00:00 grep -i etcd
etcd-server ~ ➜ ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 --cacert=/etc/etcd/pki/ca.pem --cert=/etc/etcd/pki/etcd.pem --key=/etc/etcd/pki/etcd-key.pem member list
7a9b662b8a759cb7, started, etcd-server, https://192.8.47.20:2380, https://192.8.47.20:2379, false
# a single member is running in this etcd cluster
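# The member's health can be checked with the same certificates (endpoint health is a standard v3 command):
ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 --cacert=/etc/etcd/pki/ca.pem --cert=/etc/etcd/pki/etcd.pem --key=/etc/etcd/pki/etcd-key.pem endpoint health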
Periodically backing up the etcd cluster data is important to recover Kubernetes clusters under disaster scenarios, such as losing all control plane nodes. The snapshot file contains all the Kubernetes state and critical information. In order to keep the sensitive Kubernetes data safe, encrypt the snapshot files.
Backing up an etcd cluster can be accomplished in two ways: etcd built-in snapshot and volume snapshot.
1. create a snapshot of the existing etcd instance running at https://127.0.0.1:2379, saving the snapshot to /data/etcd-snapshot.db
2. Next, restore an existing, previous snapshot located at /data/etcd-snapshot-previous.db
backup: take a snapshot and save it, as sketched below.
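# A minimal sketch for the two steps above, assuming the kubeadm default certificate paths used earlier in these notes; the restore data directory name is illustrative.
# 1. save a snapshot of the running etcd instance to /data/etcd-snapshot.db
ETCDCTL_API=3 etcdctl snapshot save /data/etcd-snapshot.db \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key
# 2. restore the previous snapshot into a new data directory, then point the etcd manifest (or systemd unit) at that directory
ETCDCTL_API=3 etcdctl snapshot restore /data/etcd-snapshot-previous.db \
--data-dir=/var/lib/etcd-from-previous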