Kubernetes Basics: Upgrading a Cluster from 1.20 to 1.22

Due to business requirements, the Kubernetes cluster needs to be upgraded from 1.20 to 1.22.

Background

Kubernetes cannot be upgraded across minor versions; you have to go one minor release at a time, so this upgrade runs from 1.20 through 1.21 to 1.22. One caveat about the procedure below: if you run a scheduler plugin deployed as the primary scheduler, the upgrade will replace it with the default kube-scheduler image and reset its configuration file to the defaults.
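
Before starting, record the versions you are upgrading from; a quick sanity check (output will vary by cluster):

# kubelet version reported by every node
kubectl get nodes -o wide

# kubeadm and client/server versions
kubeadm version -o short
kubectl version --short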

Upgrade the master nodes

The cluster upgrade starts with the master nodes.

Back up etcd

Run the backup with etcdctl:

mkdir etcd-backup
cd etcd-backup

etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/peer.crt \
  --key=/etc/kubernetes/pki/etcd/peer.key \
  --endpoints=192.168.21.120:2379 \
  snapshot save ./`hostname`-etcd_`date +%Y%m%d%H%M`.db

{"level":"info","ts":1676356145.779007,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"./ops-k8s-node1-etcd_202302141429.db.part"}
{"level":"info","ts":"2023-02-14T14:29:05.803+0800","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1676356145.8043346,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"192.168.21.120:2379"}
{"level":"info","ts":"2023-02-14T14:29:08.291+0800","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"}
{"level":"info","ts":1676356148.3410313,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"192.168.21.120:2379","size":"75 MB","took":2.561458458}
{"level":"info","ts":1676356148.3413002,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"./ops-k8s-node1-etcd_202302141429.db"}
Snapshot saved at ./ops-k8s-node1-etcd_202302141429.db

Verify the backup data

etcdctl snapshot status ./ops-k8s-node1-etcd_202302141429.db --write-out=table
 
+----------+----------+------------+------------+
|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| 7806a3e5 | 57960523 |       3486 |      75 MB |
+----------+----------+------------+------------+

Run the backup on every master node; for a restore, you only need to restore the data from one of them.
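
Restoring is beyond the scope of this upgrade, but for completeness here is a minimal restore sketch; the restore data directory and the step of moving the static pod manifests aside are assumptions to adapt to your own topology:

# Stop the control plane first, e.g. by moving the static pod manifests aside
mv /etc/kubernetes/manifests/etcd.yaml /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/

# Restore the snapshot into a fresh data directory
etcdctl snapshot restore ./ops-k8s-node1-etcd_202302141429.db --data-dir=/var/lib/etcd-restore

# Point the etcd manifest's hostPath at /var/lib/etcd-restore, then move the manifests back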

Get the current cluster configuration

[root@k8s-master-73 ~]# kubectl get cm -o yaml -n kube-system kubeadm-config > kubeadm_config.yaml

# We only need the configuration under the configmap's data.ClusterConfiguration field
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.21.120:6444
controllerManager:
  extraArgs:
    node-cidr-mask-size: "25"
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.21.14
networking:
  dnsDomain: cluster.local
  podSubnet: 10.224.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
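
Note that kubernetesVersion above has already been set to the target version, while the cluster itself is still on v1.20.15 (the upgrade plan below confirms this). One way to extract just the ClusterConfiguration field and bump the version in one go:

# Extract only the data.ClusterConfiguration field of the configmap
kubectl -n kube-system get cm kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' > kubeadm_config.yaml

# Set the target version for this round of the upgrade
sed -i 's/^kubernetesVersion: .*/kubernetesVersion: v1.21.14/' kubeadm_config.yaml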

Check the available kubeadm versions:

yum list --showduplicates kubeadm --disableexcludes=kubernetes

The latest 1.21 release is 1.21.14 and the latest 1.22 release is 1.22.17, so we upgrade to 1.21.14 first.

Pre-upgrade checks

First, upgrade kubeadm to 1.21.14:

yum install kubeadm-1.21.14 -y

# Upgrade to 1.22.17
# yum install kubeadm-1.22.17 -y
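
Confirm the binary actually moved to the expected version:

kubeadm version -o short
# expected output: v1.21.14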

Before upgrading, diff what the upgrade will change so we know what to expect:

kubeadm upgrade diff v1.21.14 --config kubeadm_config.yaml

# Upgrade to 1.22.17
# kubeadm upgrade diff v1.22.17 --config kubeadm_config.yaml

Run a pre-upgrade dry run and check for errors:

kubeadm upgrade apply v1.21.14 --config kubeadm_config.yaml --dry-run

# Upgrade to 1.22.17
# kubeadm upgrade apply v1.22.17 --config kubeadm_config.yaml --dry-run

# If it ends with this line, nothing major is wrong
[dryrun] Finished dryrunning successfully!

Perform the master node upgrade

Drain the node first:

kubectl drain ops-k8s-node1 --ignore-daemonsets
# If some pods are not managed by a controller, drain may prompt you to add --force
# If pods mount emptyDir volumes, you may also need --delete-emptydir-data
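
After the drain, the node should be marked SchedulingDisabled, with only DaemonSet-managed pods left on it; a quick check (node name taken from this cluster):

kubectl get nodes
# ops-k8s-node1 should show Ready,SchedulingDisabled

kubectl get pods -A -o wide | grep ops-k8s-node1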

Verify the upgrade plan

kubeadm upgrade plan --config kubeadm_config.yaml

[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.20.15
[upgrade/versions] kubeadm version: v1.21.14
I0214 15:10:24.774241   15892 version.go:254] remote version is much newer: v1.26.1; falling back to: stable-1.21
[upgrade/versions] Target version: v1.21.14
[upgrade/versions] Latest version in the v1.20 series: v1.20.15

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT        TARGET
kubelet     4 x v1.20.15   v1.21.14

Upgrade to the latest stable version:

COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.20.15   v1.21.14
kube-controller-manager   v1.20.15   v1.21.14
kube-scheduler            v1.20.15   v1.21.14
kube-proxy                v1.20.15   v1.21.14
CoreDNS                   1.7.0      v1.8.0
etcd                      3.4.13-0   3.4.13-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.21.14

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

Run the upgrade

kubeadm upgrade apply v1.21.14 --config kubeadm_config.yaml
# Confirm with y when prompted
# On worker nodes, the corresponding step is kubeadm upgrade node (covered below)

# Upgrade to 1.22.17
# kubeadm upgrade apply v1.22.17 --config kubeadm_config.yaml
......
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.21.14". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
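
Before moving on, it is worth confirming that the control plane static pods picked up the new image tags (tier=control-plane is the label kubeadm puts on its static pods):

# All control plane images should now be tagged v1.21.14
kubectl -n kube-system get pods -l tier=control-plane \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'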

Upgrade kubelet and kubectl

yum install -y kubelet-1.21.14 kubectl-1.21.14 --disableexcludes=kubernetes

# Upgrade to 1.22.17
# yum install -y kubelet-1.22.17 kubectl-1.22.17 --disableexcludes=kubernetes

sudo systemctl daemon-reload
sudo systemctl restart kubelet
# Pods may stay Pending after the kubelet restart; if so, restart kubelet once more

Note: while upgrading the kubelet to 1.22, the /opt/cni/bin/flannel plugin binary may get deleted, causing the error: reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
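
A defensive workaround, assuming the standard /opt/cni/bin layout: copy the CNI plugins aside before upgrading the kubelet, and put the flannel binary back if it goes missing:

# Before the kubelet upgrade: keep a copy of the CNI plugins
cp -a /opt/cni/bin /opt/cni/bin.bak

# After the upgrade, if /opt/cni/bin/flannel is gone, restore it and restart kubelet
cp -n /opt/cni/bin.bak/flannel /opt/cni/bin/
systemctl restart kubelet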

Uncordon the node

kubectl uncordon ops-k8s-node1
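
The node should return to Ready and schedulable, now reporting the upgraded kubelet:

kubectl get nodes
# ops-k8s-node1   Ready   ...   v1.21.14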

Upgrade the worker nodes

Upgrading a worker node is not as involved as a master node; only two steps really matter.

Upgrade kubeadm

First, upgrade kubeadm to 1.21.14:

yum install kubeadm-1.21.14 -y

# Upgrade to 1.22.17
# yum install kubeadm-1.22.17 -y

Then, from a master node, drain the worker node:

kubectl drain ops-k8s-node4 --ignore-daemonsets
# If some pods are not managed by a controller, drain may prompt you to add --force
# If pods mount emptyDir volumes, you may also need --delete-emptydir-data

Perform the worker node upgrade

Run the upgrade:

kubeadm upgrade node

[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

Upgrade kubelet and kubectl

yum install -y kubelet-1.21.14 kubectl-1.21.14 --disableexcludes=kubernetes

# Upgrade to 1.22.17
# yum install -y kubelet-1.22.17 kubectl-1.22.17 --disableexcludes=kubernetes

sudo systemctl daemon-reload
sudo systemctl restart kubelet
# Pods may stay Pending after the kubelet restart; if so, restart kubelet once more

Uncordon the node

kubectl uncordon ops-k8s-node4

Repeat the same steps (upgrade kubeadm, drain, upgrade the node, upgrade kubelet/kubectl, uncordon) on the remaining worker nodes.

To go from 1.21 to 1.22, repeat the whole procedure (master nodes first, then workers) using the 1.22.17 versions shown in the commented commands.
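
For reference, the second round condensed into one sequence; these are just the commented 1.22.17 variants from the steps above (remember to bump kubernetesVersion in kubeadm_config.yaml to v1.22.17 as well):

# On the master node(s):
yum install -y kubeadm-1.22.17 --disableexcludes=kubernetes
kubectl drain <node> --ignore-daemonsets
kubeadm upgrade apply v1.22.17 --config kubeadm_config.yaml
yum install -y kubelet-1.22.17 kubectl-1.22.17 --disableexcludes=kubernetes
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon <node>

# On each worker: same flow, but run `kubeadm upgrade node` instead of `kubeadm upgrade apply`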

Wrap-up

That completes the upgrade; the cluster is now running 1.22.
