1. Reasons for the Switch
- Better performance: containerd improves overall performance by removing an abstraction layer (kubelet talks to it directly instead of going through dockershim and the Docker daemon).
- Stronger security: a more direct call path means fewer components in between and a smaller potential attack surface.
- Simpler architecture: containerd's leaner design makes maintenance and troubleshooting easier.
- Upstream direction: the Kubernetes project has shifted its support toward containerd (dockershim was deprecated in v1.20 and removed in v1.24).
- cAdvisor cannot collect container-level resource usage in the current Docker setup.
2. Preparation
- Kubernetes v1.20 or later, since dockershim was deprecated in v1.20 (and removed entirely in v1.24).
- Back up the existing configuration and data in case something goes wrong during the migration.
- Confirm that the operating system on every node is compatible with containerd.
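The backup step above can be sketched as a small script. The paths below are assumptions for a typical kubeadm-managed node; adjust them to your own layout:

```shell
# Hypothetical backup script; paths assume a kubeadm-managed node.
BACKUP_DIR="$HOME/runtime-migration-backup-$(date +%Y%m%d)"
mkdir -p "$BACKUP_DIR"
# kubelet/kubeadm configuration
for f in /etc/kubernetes /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml; do
  [ -e "$f" ] && cp -a "$f" "$BACKUP_DIR/"
done
# Docker daemon config (registry mirrors, insecure registries, etc.)
[ -e /etc/docker/daemon.json ] && cp -a /etc/docker/daemon.json "$BACKUP_DIR/"
echo "backed up to $BACKUP_DIR"
```

Each source path is guarded with `[ -e ]`, so the script is safe to run on nodes where some of these files do not exist.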
3. Migration Steps
3.1. Pre-migration steps
# Drain the node
$ kubectl drain --ignore-daemonsets k8s-node02
node/k8s-node02 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-fbzqw, kube-system/kube-proxy-95hst
node/k8s-node02 drained
# Stop the affected services
$ systemctl stop kubelet
$ systemctl stop containerd
$ systemctl disable docker --now
3.2. Deploy the containerd service
See: Kubernetes实战(三十一)-安装containerd_containerd配置镜像仓库-CSDN博客
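Beyond the installation itself, two settings in containerd's configuration usually need attention before kubelet can use it. The excerpt below is an assumed example (for containerd 1.6.x): regenerate the full file with `containerd config default > /etc/containerd/config.toml`, then adjust these values and restart containerd. The `sandbox_image` value matches the private pause image used later in step 3.4.

```toml
# /etc/containerd/config.toml (excerpt) -- assumed values, not the full file
[plugins."io.containerd.grpc.v1.cri"]
  # Pull the pause image from the private registry instead of registry.k8s.io
  sandbox_image = "172.139.20.170:5000/library/pause:3.9"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  # Match kubelet's cgroup driver on systemd-based hosts
  SystemdCgroup = true
```

If `SystemdCgroup` does not match kubelet's cgroup driver, pods may start and then be killed repeatedly, so this is worth checking before restarting kubelet.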
3.3. Copy the registry configuration from another node
# On a node that already has containerd configured, copy its registry config over
$ scp -r /etc/containerd/certs.d 172.139.20.75:/tmp
# On the target node (172.139.20.75), move it into place and verify with a test pull
$ sudo mv /tmp/certs.d /etc/containerd/
$ sudo crictl pull 172.139.20.170:5000/library/pause:3.9
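For reference, the files copied above follow containerd's `certs.d` host-configuration layout: one directory per registry, each containing a `hosts.toml`. The content below is a hypothetical example of what such a file might look like for a plain-HTTP private registry; the actual values depend on how the source node was configured.

```toml
# /etc/containerd/certs.d/172.139.20.170:5000/hosts.toml (hypothetical example)
server = "http://172.139.20.170:5000"

[host."http://172.139.20.170:5000"]
  capabilities = ["pull", "resolve"]
  # Skip TLS verification for a plain-HTTP internal registry
  skip_verify = true
```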
3.4. Update the kubelet parameters
$ vim /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=172.139.20.170:5000/library/pause:3.9"
Tip: the key change is the --container-runtime-endpoint parameter, which must point at the containerd socket.
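It is also convenient to point `crictl` (used in step 3.3) at the same socket, so it stops looking for Docker. A standard way to do this is via `/etc/crictl.yaml`; the fragment below is an assumed but conventional configuration:

```yaml
# /etc/crictl.yaml -- make crictl talk to containerd instead of Docker
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
```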
3.5. Start the kubelet service
$ systemctl start kubelet
3.6. Make the node schedulable again
$ kubectl uncordon k8s-node02
4. Verification
4.1. Check the node's container runtime
$ kubectl get nodes k8s-node02 -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-node02 Ready <none> 152d v1.27.16 172.139.20.75 <none> CentOS Linux 7 (Core) 5.4.278-1.el7.elrepo.x86_64 containerd://1.6.34
4.2. Confirm that pods start normally on the node
$ kubectl get pod -owide -A | grep k8s-node02
gitlab gitlab-gitaly-0 0/1 Init:CrashLoopBackOff 8 (15s ago) 2d5h 10.244.58.246 k8s-node02 <none> <none>
gitlab gitlab-gitlab-shell-66c9f95c84-zz5ld 0/1 Init:CrashLoopBackOff 8 (48s ago) 2d5h 10.244.58.242 k8s-node02 <none> <none>
gitlab gitlab-kas-7b987c7c4c-2qcvh 1/1 Running 2 2d5h 10.244.58.245 k8s-node02 <none> <none>
harbor harbor-core-58cff667b8-275gc 1/1 Running 1 6h19m 10.244.58.237 k8s-node02 <none> <none>
harbor harbor-exporter-8bf987cb7-t5rbs 1/1 Running 10 2d3h 10.244.58.232 k8s-node02 <none> <none>
harbor harbor-jobservice-847b864c7f-hgr4b 1/1 Running 2 (6m37s ago) 6h19m 10.244.58.233 k8s-node02 <none> <none>
harbor harbor-registry-656468fdb4-ck5cn 2/2 Running 2 6h20m 10.244.58.234 k8s-node02 <none> <none>
kube-system calico-node-ns7s9 1/1 Running 2 2d 172.139.20.75 k8s-node02 <none> <none>
kube-system calicoctl-5976bd6ff7-jjsxh 1/1 Running 2 2d6h 172.139.20.75 k8s-node02 <none> <none>
kube-system cert-manager-bb9c6b9ff-6jqc9 0/1 ImagePullBackOff 1 (24h ago) 2d6h 10.244.58.236 k8s-node02 <none> <none>
kube-system cert-manager-cainjector-bbf9dcf9b-cwtg9 1/1 Running 3 2d6h 10.244.58.239 k8s-node02 <none> <none>
kube-system coredns-68d697fb69-lc7sc 1/1 Running 2 2d6h 10.244.58.235 k8s-node02 <none> <none>
kube-system csi-cephfsplugin-m6b66 3/3 Running 6 2d6h 172.139.20.75 k8s-node02 <none> <none>
kube-system csi-rbdplugin-provisioner-867ccf48d7-85z5k 7/7 Running 25 2d6h 10.244.58.244 k8s-node02 <none> <none>
kube-system csi-rbdplugin-sq5q8 3/3 Running 6 2d6h 172.139.20.75 k8s-node02 <none> <none>
kube-system grafana-cdn-8c6cddddc-ft745 1/1 Running 2 2d6h 10.244.58.249 k8s-node02 <none> <none>
kube-system ingress-nginx-controller-7bfc8b7797-gktxv 1/1 Running 3 2d6h 10.244.58.240 k8s-node02 <none> <none>
kube-system kube-proxy-tl7cw 1/1 Running 2 2d1h 172.139.20.75 k8s-node02 <none> <none>
kube-system kube-state-metrics-76f85d6777-pl5t8 1/1 Running 4 2d6h 10.244.58.243 k8s-node02 <none> <none>
kube-system minio-1 1/1 Running 1 6h25m 10.244.58.248 k8s-node02 <none> <none>
kube-system nfs-dynamic-provisioner-7d7ccf88fb-hg779 1/1 Running 4 2d6h 10.244.58.241 k8s-node02 <none> <none>
kube-system node-exporter-n8jbl 1/1 Running 2 2d6h 172.139.20.75 k8s-node02 <none> <none>
kube-system prometheus-dd5446b59-cdrfg 2/2 Running 0 2m29s 10.244.58.250 k8s-node02 <none> <none>
kube-system smtp-proxy-network-7f97749595-bmcw2 1/1 Running 2 2d6h 10.244.58.238 k8s-node02 <none> <none>
Note: most pods are Running, but gitlab-gitaly-0 and gitlab-gitlab-shell are in Init:CrashLoopBackOff and cert-manager is in ImagePullBackOff. Check those pods with kubectl describe pod to confirm the failures are image-pull or application issues rather than runtime problems before calling the migration done.