
# etcdctl Usage

etcdctl is not installed locally here; instead, the etcd that runs inside the cluster's pod is used directly.

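One common way to do that (a sketch: `kube-system` and the `component=etcd` label are kubeadm defaults, and `etcd-master-01` is a hypothetical pod name) is to exec into the etcd static pod:

```shell
# Find the etcd pod (label is the kubeadm default; an assumption here)
kubectl -n kube-system get pods -l component=etcd

# Open a shell in it; "etcd-master-01" is a hypothetical pod name
kubectl -n kube-system exec -it etcd-master-01 -- sh

# Inside the pod, etcdctl is already on PATH
etcdctl version
```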



Check which node is the leader:

```shell
etcdctl --endpoints=127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key endpoint status --cluster -w table
```


The command is too long to be convenient, so configure a short alias:

```shell
alias ec="etcdctl --endpoints=127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key"
```




Usage:

```shell
ec endpoint status --cluster -w table
```


(Screenshot: the `endpoint status` table output.)





Local installation is also simple.

Download a release, extract it, and copy `etcdctl` to `/usr/local/bin`:

https://github.com/etcd-io/etcd/releases
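For example (a sketch: `v3.5.12` is an assumed version — pick whichever release you need from the page above):

```shell
# Assumed version; substitute the release you want
ETCD_VER=v3.5.12
curl -L "https://github.com/etcd-io/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz" \
  -o /tmp/etcd.tar.gz
tar xzf /tmp/etcd.tar.gz -C /tmp
cp "/tmp/etcd-${ETCD_VER}-linux-amd64/etcdctl" /usr/local/bin/
etcdctl version
```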


The connection details can be found in `/etc/kubernetes/manifests/kube-apiserver.yaml`:

```yaml
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
```

Assembled into a command, that becomes:

```shell
etcdctl --endpoints=127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/apiserver-etcd-client.crt --key=/etc/kubernetes/pki/apiserver-etcd-client.key endpoint status --cluster -w table
```


For convenience, wrap this one in an alias as well:

```shell
alias ec="etcdctl --endpoints=127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/apiserver-etcd-client.crt --key=/etc/kubernetes/pki/apiserver-etcd-client.key"
```



## etcd Backup

Backup command:

```shell
ec snapshot save /tmp/etcd.db
```


It is quite fast: the log reported `"size":"360 MB","took":"4 seconds ago"`.
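To sanity-check the snapshot after saving (path from the command above; note that `snapshot status` is deprecated in etcd v3.5+ in favor of `etcdutl`, but it still works):

```shell
# Prints the snapshot's hash, revision, key count, and size
etcdctl snapshot status /tmp/etcd.db -w table
```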



## etcd Restore

#### Stop the kube-apiserver

This must be done on every master.

kube-apiserver runs as a static pod, so just move its YAML file out of `/etc/kubernetes/manifests` and the kubelet will automatically remove the kube-apiserver container.

```shell
mkdir /etc/kubernetes/manifests.bak
mv /etc/kubernetes/manifests/kube-apiserver.yaml /etc/kubernetes/manifests.bak
# Check that the apiserver has stopped; a count of 0 means it is down
ps -ef | grep kube-api | grep -v grep | wc -l
0
```

#### Stop etcd

This must be done on every master.

```shell
systemctl stop etcd
```

#### Restore the etcd data

```shell
# Copy the backup to the other two masters
scp /tmp/etcd.db master-02:/tmp/etcd.db
scp /tmp/etcd.db master-03:/tmp/etcd.db

# Run the following on each master
# Back up the existing etcd data first
mv /var/lib/etcd /var/lib/etcd.bak
# Perform the restore
etcdctl snapshot restore /tmp/etcd.db --endpoints=$ETCD_ADVERTISE_CLIENT_URLS \
--name=$ETCD_NAME \
--cacert=$ETCD_TRUSTED_CA_FILE \
--key=$ETCD_KEY_FILE \
--cert=$ETCD_CERT_FILE \
--initial-advertise-peer-urls=$ETCD_INITIAL_ADVERTISE_PEER_URLS \
--initial-cluster-token=$ETCD_INITIAL_CLUSTER_TOKEN \
--initial-cluster=$ETCD_INITIAL_CLUSTER \
--data-dir=$ETCD_DATA_DIR
```

These parameter values can be found in `/etc/kubernetes/manifests/etcd.yaml`.

Note: `--initial-cluster` must list every member in full; on some nodes the etcd.yaml is incomplete.
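For illustration, a complete `--initial-cluster` for the three masters in this walkthrough might look like the following (the peer IPs are hypothetical; copy the real values from a node whose etcd.yaml lists all members):

```
--initial-cluster=master-01=https://10.0.0.1:2380,master-02=https://10.0.0.2:2380,master-03=https://10.0.0.3:2380
```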

#### Start etcd

Run on each master:

```shell
systemctl start etcd
```

#### Start kube-apiserver

Run on each master:

```shell
# Just move kube-apiserver.yaml back
mv /etc/kubernetes/manifests.bak/kube-apiserver.yaml /etc/kubernetes/manifests
# Check that kube-apiserver is running
ps -ef | grep kube-api | grep -v grep
```
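Once everything is back, a quick health check (using the `ec` alias defined earlier) confirms the cluster recovered:

```shell
# All endpoints should report healthy and agree on a leader
ec endpoint health --cluster -w table
ec endpoint status --cluster -w table
# And the API server should answer again
kubectl get nodes
```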