[Cloud Native] Kubernetes Cluster Upgrade Guide
- Preface
- 1. Helper commands for the cluster upgrade
- 2. Upgrading the master node
- 2.1 Upgrading kubeadm
- 2.2 Verifying the upgrade plan
- 2.3 Applying the upgrade on the master node
- 3. Upgrading worker nodes
- Summary
Preface
This article demonstrates upgrading a kubeadm-managed Kubernetes cluster from v1.24.1 to v1.25.5.
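Before you begin, it is prudent to back up etcd (and any important application state). A minimal snapshot sketch, assuming a default kubeadm layout with stacked etcd and certificates under /etc/kubernetes/pki/etcd; adjust the endpoint, certificate paths, and output location to your environment:
sudo ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key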
1. Helper commands for the cluster upgrade
(1) List the pods running on a given node:
kubectl get pod -o wide | grep <nodename>
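Since grep can match unrelated fields, a field selector is a more precise alternative that also covers all namespaces (illustrative; substitute your node name):
kubectl get pod -A -o wide --field-selector spec.nodeName=<nodename>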
(2) View the cluster configuration:
kubectl -n kube-system get cm kubeadm-config -o yaml
(3) List the current cluster nodes:
kubectl get node
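The VERSION column shows each node's kubelet version, which is the quickest way to track upgrade progress. For more detail (OS image, kernel, container runtime):
kubectl get node -o wide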
2. Upgrading the master node
2.1 Upgrading kubeadm
# Refresh the package index
sudo apt-get update
# List the available versions
apt-cache madison kubeadm
# Unhold the kubeadm package
sudo apt-mark unhold kubeadm
# Install the target version
sudo apt-get install -y kubeadm=1.25.5-00
# Hold the package again so it is not upgraded automatically
sudo apt-mark hold kubeadm
# Verify the version
kubeadm version
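For a terser check, kubeadm can also print just the version string (expected here: v1.25.5):
kubeadm version -o short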
2.2 Verifying the upgrade plan
(1) Check which versions are available to upgrade to and verify that the current cluster can be upgraded:
sudo kubeadm upgrade plan
_____________________________________________________________________
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT TARGET
kubelet 1 x v1.24.1 v1.25.8
Upgrade to the latest stable version:
COMPONENT CURRENT TARGET
kube-apiserver v1.24.1 v1.25.8
kube-controller-manager v1.24.1 v1.25.8
kube-scheduler v1.24.1 v1.25.8
kube-proxy v1.24.1 v1.25.8
CoreDNS v1.8.6 v1.9.3
etcd 3.5.3-0 3.5.6-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.25.8
Note: Before you can perform this upgrade, you have to update kubeadm to v1.25.8.
_____________________________________________________________________
Note that the plan targets the latest patch release in the series (v1.25.8 when this output was captured), while this walkthrough pins to v1.25.5 to match the kubeadm version installed above. Also pay attention to the MANUAL UPGRADE REQUIRED column below:
_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io v1alpha1 v1alpha1 no
kubelet.config.k8s.io v1beta1 v1beta1 no
_____________________________________________________________________
This table shows which component configs require a manual upgrade; any row marked "yes" must be upgraded by hand (or reset to kubeadm defaults) before the cluster upgrade can succeed.
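Both configs here live in ConfigMaps that kubeadm manages, so you can inspect them directly if needed (ConfigMap names assume a default kubeadm cluster on v1.24+):
# kubelet configuration shared by the cluster
kubectl -n kube-system get cm kubelet-config -o yaml
# kube-proxy configuration
kubectl -n kube-system get cm kube-proxy -o yaml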
(2) Show the differences that would be applied to the existing static pod manifests:
sudo kubeadm upgrade diff v1.25.5
[upgrade/diff] Reading configuration from the cluster...
[upgrade/diff] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
--- /etc/kubernetes/manifests/kube-scheduler.yaml
+++ new manifest
@@ -16,7 +16,7 @@
- --bind-address=127.0.0.1
- --kubeconfig=/etc/kubernetes/scheduler.conf
- --leader-elect=true
- image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.1
+ image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.25.5
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
--- /etc/kubernetes/manifests/kube-apiserver.yaml
+++ new manifest
@@ -40,7 +40,7 @@
- --service-cluster-ip-range=10.96.0.0/12
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
- image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.1
+ image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.25.5
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
--- /etc/kubernetes/manifests/kube-controller-manager.yaml
+++ new manifest
@@ -28,7 +28,7 @@
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
- --service-cluster-ip-range=10.96.0.0/12
- --use-service-account-credentials=true
- image: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.1
+ image: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.25.5
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
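For an even fuller preview, kubeadm upgrade apply also accepts a dry-run flag that reports the actions it would take without changing cluster state (a sketch, safe to run):
sudo kubeadm upgrade apply v1.25.5 --dry-run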
2.3 Applying the upgrade on the master node
(1) Upgrade to v1.25.5. This command upgrades only the control plane (master node) components:
sudo kubeadm upgrade apply v1.25.5
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.25.5"
[upgrade/versions] Cluster version: v1.24.1
[upgrade/versions] kubeadm version: v1.25.5
[upgrade] Are you sure you want to proceed? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.25.5" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-03-19-08-29-54/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests1584419494"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-03-19-08-29-54/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-03-19-08-29-54/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-03-19-08-29-54/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Removing the old taint &Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,} from all control plane Nodes. After this step only the &Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,} taint will be present on control plane Nodes.
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.25.5". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
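If the cluster has additional control-plane nodes, only the first one uses upgrade apply; on each remaining control-plane node, upgrade kubeadm the same way and then run:
sudo kubeadm upgrade node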
(2) Drain the node: evict all pods except DaemonSet-managed ones so they are rescheduled onto other nodes, and cordon the node so that nothing new is scheduled onto it.
kubectl drain <nodename> --ignore-daemonsets
$ kubectl drain k8s-master1 --ignore-daemonsets
node/k8s-master1 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-flannel/kube-flannel-ds-nxz4d, kube-system/kube-proxy-pbnk4
evicting pod kube-system/coredns-c676cc86f-twm96
evicting pod kube-system/coredns-c676cc86f-mdgbn
pod/coredns-c676cc86f-mdgbn evicted
pod/coredns-c676cc86f-twm96 evicted
node/k8s-master1 drained
$ kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-nxz4d 1/1 Running 0 136m
kube-system coredns-c676cc86f-7stvs 0/1 Pending 0 60s
kube-system coredns-c676cc86f-vmkgv 0/1 Pending 0 60s
kube-system etcd-k8s-master1 1/1 Running 0 11m
kube-system kube-apiserver-k8s-master1 1/1 Running 0 10m
kube-system kube-controller-manager-k8s-master1 1/1 Running 0 10m
kube-system kube-proxy-pbnk4 1/1 Running 0 9m44s
kube-system kube-scheduler-k8s-master1 1/1 Running 0 9m58s
$ kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready,SchedulingDisabled control-plane 162m v1.24.1
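Note that drain can refuse to evict pods that use emptyDir volumes or that are not managed by a controller. If you accept losing that data and those pods, you can add flags (use with care; shown for illustration):
kubectl drain <nodename> --ignore-daemonsets --delete-emptydir-data --force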
(3) Upgrade the kubelet and kubectl packages:
sudo apt-mark unhold kubelet kubectl
sudo apt-get install -y kubelet=1.25.5-00 kubectl=1.25.5-00
sudo apt-mark hold kubelet kubectl
(4) Restart kubelet:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
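A quick sanity check after the restart; kubelet should be active (running), and the node's VERSION column should now read v1.25.5:
sudo systemctl status kubelet
kubectl get node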
(5) Lift the scheduling protection (uncordon the node):
kubectl uncordon <nodename>
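After uncordoning, the node should report STATUS Ready without the SchedulingDisabled flag:
kubectl get node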
3. Upgrading worker nodes
(1) Upgrade the node's kubelet configuration (run on the worker node).
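Note that kubeadm on the worker must already be at the target version, otherwise this step keeps the old configuration. A minimal sketch, mirroring the pinning pattern from section 2.1:
sudo apt-mark unhold kubeadm
sudo apt-get update && sudo apt-get install -y kubeadm=1.25.5-00
sudo apt-mark hold kubeadm
With kubeadm in place, refresh the kubelet configuration: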
sudo kubeadm upgrade node
(2) Drain the node and enable scheduling protection. Run this command from the master node:
kubectl drain <nodename> --ignore-daemonsets
(3) Upgrade the kubelet and kubectl packages (on the worker node):
sudo apt-mark unhold kubelet kubectl
sudo apt-get install -y kubelet=1.25.5-00 kubectl=1.25.5-00
sudo apt-mark hold kubelet kubectl
(4) Restart kubelet:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
(5) Lift the scheduling protection; run this from the master node:
kubectl uncordon <nodename>
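Finally, confirm that every node is Ready and reports VERSION v1.25.5 (node names will differ in your cluster):
kubectl get node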
Summary
Every release upgrade differs in its details, so adjust these steps to your target version; treat this as a worked example rather than a universal guide.
The upgrade flow:
- Upgrade the master (control plane) components.
- Upgrade the worker node components: cordon and drain the node, upgrade its components, then uncordon it.