
k8s Certificate Renewal and Version Upgrade

ETCD Backup

Inspecting the ETCD Data

By default, the k8s etcd static Pod manifest is located at /etc/kubernetes/manifests/etcd.yaml on the Master nodes.

[root@zxmaster1 manifests]# pwd
/etc/kubernetes/manifests
[root@zxmaster1 manifests]# ls
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

You can inspect the detailed parameters with cat etcd.yaml.

spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://192.168.34.10:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://192.168.34.10:2380
    - --initial-cluster=zxmaster1=https://192.168.34.10:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://127.0.0.1:2379,https://192.168.34.10:2379
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://192.168.34.10:2380
    - --name=zxmaster1
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: k8s.gcr.io/etcd:3.4.3-0
    imagePullPolicy: IfNotPresent

As the --data-dir flag shows, etcd stores its data in /var/lib/etcd.
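
Before taking a backup, it is worth confirming that etcd is healthy. A minimal sketch, reusing the certificate paths from the manifest above (the endpoint is an assumption based on the listen-client-urls shown):

export ETCDCTL_API=3
# query the local client endpoint over TLS and report member health
etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint health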

Backing Up the ETCD Data

Back up into the /var/lib/etcd_backup directory.

mkdir -p /var/lib/etcd_backup/
export ETCDCTL_API=3 
etcdctl snapshot save /var/lib/etcd_backup/etcd_$(date "+%Y%m%d%H%M%S").db --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --cacert=/etc/kubernetes/pki/etcd/ca.crt

Sample run:

[root@zxmaster1 manifests]# etcdctl snapshot save /var/lib/etcd_backup/etcd_$(date "+%Y%m%d%H%M%S").db --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --cacert=/etc/kubernetes/pki/etcd/ca.crt
Snapshot saved at /var/lib/etcd_backup/etcd_20220124143326.db
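
To sanity-check the snapshot just taken, etcdctl can print its hash, revision, and size (a quick verification sketch using the file name from the output above):

# read the snapshot file locally; no TLS flags needed
etcdctl snapshot status /var/lib/etcd_backup/etcd_20220124143326.db --write-out=table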

Back up etcd on every Master node in turn!

If the other Master nodes do not have the etcdctl tool installed, copy it over from the primary Master node with scp.

[root@zxmaster1 etcd_backup]# scp /usr/bin/etcdctl 192.168.34.11:/usr/bin/
[email protected]'s password:
etcdctl                                                                                                                                                        100%   20MB 106.7MB/s   00:00
[root@zxmaster1 etcd_backup]# scp /usr/bin/etcdctl 192.168.34.12:/usr/bin/
[email protected]'s password:
etcdctl                                                                                                                                                        100%   20MB 106.3MB/s   00:00
[root@zxmaster1 etcd_backup]#
[root@zxmaster2 ~]# export ETCDCTL_API=3
[root@zxmaster2 ~]# etcdctl snapshot save /var/lib/etcd_backup/etcd_$(date "+%Y%m%d%H%M%S").db --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --cacert=/etc/kubernetes/pki/etcd/ca.crt
Snapshot saved at /var/lib/etcd_backup/etcd_20220124143749.db
[root@zxmaster3 ~]# export ETCDCTL_API=3
[root@zxmaster3 ~]# etcdctl snapshot save /var/lib/etcd_backup/etcd_$(date "+%Y%m%d%H%M%S").db --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --cacert=/etc/kubernetes/pki/etcd/ca.crt
Snapshot saved at /var/lib/etcd_backup/etcd_20220124143811.db
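
For completeness, restoring from one of these snapshots would look roughly like the sketch below. The flags mirror the etcd.yaml manifest shown earlier; /var/lib/etcd_restored is a hypothetical target directory. On a multi-member cluster, every member must be restored with the full --initial-cluster list before etcd is started again:

export ETCDCTL_API=3
# rebuild a data directory from the snapshot; do NOT point it at a live data dir
etcdctl snapshot restore /var/lib/etcd_backup/etcd_20220124143326.db \
  --name=zxmaster1 \
  --initial-cluster=zxmaster1=https://192.168.34.10:2380 \
  --initial-advertise-peer-urls=https://192.168.34.10:2380 \
  --data-dir=/var/lib/etcd_restored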

kubeadm Certificate Renewal

kubeadm-issued certificates are valid for one year by default. Once they expire, the API server becomes unusable and clients start failing with: x509: certificate has expired or is not yet valid.

Default certificate directory: /etc/kubernetes/pki

Checking k8s Certificate Validity

List the expiration time of every certificate in the cluster:

kubeadm alpha certs check-expiration
[root@zxmaster1 ~]# kubeadm alpha certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jan 27, 2022 03:26 UTC   5d                                      no
apiserver                  Jan 27, 2022 03:26 UTC   5d              ca                      no
apiserver-etcd-client      Jan 27, 2022 03:26 UTC   5d              etcd-ca                 no
apiserver-kubelet-client   Jan 27, 2022 03:26 UTC   5d              ca                      no
controller-manager.conf    Jan 27, 2022 03:26 UTC   5d                                      no
etcd-healthcheck-client    Jan 27, 2022 03:26 UTC   5d              etcd-ca                 no
etcd-peer                  Jan 27, 2022 03:26 UTC   5d              etcd-ca                 no
etcd-server                Jan 27, 2022 03:26 UTC   5d              etcd-ca                 no
front-proxy-client         Jan 27, 2022 03:26 UTC   5d              front-proxy-ca          no
scheduler.conf             Jan 27, 2022 03:26 UTC   5d                                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jan 25, 2031 03:26 UTC   9y              no
etcd-ca                 Jan 25, 2031 03:26 UTC   9y              no
front-proxy-ca          Jan 25, 2031 03:26 UTC   9y              no
[root@zxmaster1 ~]#

Of course, you can also check certificate validity with openssl.

Check the CA certificate expiration:

openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -text |grep Not
[root@zxmaster1 ~]# openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -text |grep Not
            Not Before: Jan 27 03:26:45 2021 GMT
            Not After : Jan 25 03:26:45 2031 GMT
[root@zxmaster1 ~]#

Check the cluster (apiserver) certificate expiration:

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep Not 
[root@zxmaster1 ~]# openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep Not
            Not Before: Jan 27 03:26:45 2021 GMT
            Not After : Jan 27 03:26:45 2022 GMT
[root@zxmaster1 ~]#
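
To sweep every certificate under /etc/kubernetes/pki in one pass, a small loop works (a sketch; adjust the paths to your layout):

# print the expiration date of each certificate file, including the etcd ones
for crt in /etc/kubernetes/pki/*.crt /etc/kubernetes/pki/etcd/*.crt; do
  echo "== ${crt}"
  openssl x509 -in "${crt}" -noout -enddate
done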

Backing Up the Certificate Files

Back up the certificate files under /etc/kubernetes/pki.

cp -R /etc/kubernetes/pki /etc/kubernetes/pki_backup

Fetching the Current Cluster Configuration

If the cluster certificates have not expired yet, first export the current cluster configuration.

kubeadm config view > kubeadm.yaml
[root@zxmaster1]# kubeadm config view > kubeadm.yaml
[root@zxmaster1]# ls
kubeadm-config.yaml  kubeadm.yaml  kubernetes-svc.yaml  nginx-pod.yaml  traefik
[root@zxmaster1 test]# cat kubeadm.yaml
apiServer:
  certSANs:
  - 192.168.34.10
  - 192.168.34.11
  - 192.168.34.12
  - 192.168.34.20
  - 192.168.34.21
  - 192.168.34.22
  - 192.168.34.9
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.34.9:6443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/18
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Renewing the Certificates

Renewing all certificates

kubeadm alpha certs renew all --config kubeadm.yaml

Actual output:

[root@zxmaster1 ~]# kubeadm alpha certs renew all --config kubeadm.yaml
W0124 14:47:59.970337     865 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
[root@zxmaster1 ~]#

Check the certificate validity again:

[root@zxmaster1 ~]# kubeadm alpha certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jan 24, 2023 06:48 UTC   364d                                    no
apiserver                  Jan 24, 2023 06:48 UTC   364d            ca                      no
apiserver-etcd-client      Jan 24, 2023 06:48 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Jan 24, 2023 06:48 UTC   364d            ca                      no
controller-manager.conf    Jan 24, 2023 06:48 UTC   364d                                    no
etcd-healthcheck-client    Jan 24, 2023 06:48 UTC   364d            etcd-ca                 no
etcd-peer                  Jan 24, 2023 06:48 UTC   364d            etcd-ca                 no
etcd-server                Jan 24, 2023 06:48 UTC   364d            etcd-ca                 no
front-proxy-client         Jan 24, 2023 06:48 UTC   364d            front-proxy-ca          no
scheduler.conf             Jan 24, 2023 06:48 UTC   364d                                    no

Renewing an individual certificate

There are two ways to regenerate a certificate: create a brand-new key pair, or re-sign using the existing key.
The commands below renew each component's certificate by re-signing with that component's existing key.

kubeadm alpha certs renew etcd-healthcheck-client --config kubeadm.yaml
kubeadm alpha certs renew etcd-peer --config kubeadm.yaml
kubeadm alpha certs renew etcd-server --config kubeadm.yaml
kubeadm alpha certs renew front-proxy-client --config kubeadm.yaml
kubeadm alpha certs renew apiserver-etcd-client --config kubeadm.yaml
kubeadm alpha certs renew apiserver-kubelet-client --config kubeadm.yaml
kubeadm alpha certs renew apiserver --config kubeadm.yaml
kubeadm alpha certs renew all --config kubeadm.yaml

To regenerate a certificate with a brand-new key instead, use the corresponding init phase, for example:

kubeadm init phase certs apiserver --config kubeadm.yaml

Redistributing the Certificate Files

For the root administrator account:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
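
The renewed certificates are only picked up once the control-plane components restart. One common approach on kubeadm clusters is to bounce the static Pods by moving their manifests aside (a sketch; the sleep duration is an assumption):

# kubelet stops static Pods whose manifests disappear, and recreates them when they return
mv /etc/kubernetes/manifests /etc/kubernetes/manifests.bak
sleep 30
mv /etc/kubernetes/manifests.bak /etc/kubernetes/manifests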

Updating User Authorization Files for Namespace Access

See the "User Management Deployment" document.

Creating the K8S Namespace

Create the required namespace:

[root@zxmaster1 ~]# pwd
/etc/k8s-user/yaml/developer
[root@zxmaster1 ~]# vi developer-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: developer
[root@zxmaster1 ~]# kubectl apply -f developer-ns.yaml
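
A quick optional check that the namespace now exists:

kubectl get ns developer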

Signing User Certificates with the CA

Regenerate the K8S client certificates: the K8S CA signs the client (user) certificates.

[root@zxmaster1 ~]# ls /etc/kubernetes/pki/ca.crt 
/etc/kubernetes/pki/ca.crt

Write the signing request files:

Configure the certificate signing policy so the CA tooling knows which usages the issued certificates will carry.

# ca-config.json
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF

The ca-config.json file is used when signing certificates for every user.

8760h is the certificate validity period, i.e. one year.

developer-csr.json does not need to be modified when renewing.

[root@zxmaster1 kalami]# vi developer-csr.json
{
    "CN": "developer",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

Note: the CN field is the username presented to Kubernetes; in this setup it is kept identical to the target namespace name!

Sign the user certificate:

cfssl gencert -ca=/etc/kubernetes/pki/ca.crt -ca-key=/etc/kubernetes/pki/ca.key -config=../ca-config.json -profile=kubernetes ./developer-csr.json | cfssljson -bare developer
[root@zxmaster1 kalami]# cfssl gencert -ca=/etc/kubernetes/pki/ca.crt -ca-key=/etc/kubernetes/pki/ca.key -config=../ca-config.json -profile=kubernetes ./developer-csr.json | cfssljson -bare developer
2022/01/27 16:41:51 [INFO] generate received request
2022/01/27 16:41:51 [INFO] received CSR
2022/01/27 16:41:51 [INFO] generating key: rsa-2048
2022/01/27 16:41:51 [INFO] encoded CSR
2022/01/27 16:41:51 [INFO] signed certificate with serial number 505822937748647806441919066773719156796935684110
2022/01/27 16:41:51 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@zxmaster1 kalami]#

This generates three files: developer.csr, developer-key.pem, and developer.pem.
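
You can confirm the subject and the one-year validity of the issued certificate with openssl (a quick check):

# should show CN=developer, O=k8s and a Not After date one year out
openssl x509 -in developer.pem -noout -subject -dates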

Generating the kubeconfig Authorization File

Set the cluster parameters:

kubectl config set-cluster kubernetes --server=https://192.168.34.9:6443 --certificate-authority=/etc/kubernetes/pki/ca.crt --kubeconfig=developer.kubeconfig  --embed-certs=true
[root@zxmaster1 kalami]# kubectl config set-cluster kubernetes --server=https://192.168.34.9:6443 --certificate-authority=/etc/kubernetes/pki/ca.crt --kubeconfig=developer.kubeconfig  --embed-certs=true
Cluster "kubernetes" set.
[root@zxmaster1 kalami]#

This generates the developer.kubeconfig file.

Generate the context entry:

kubectl config set-context developer-context --cluster=kubernetes  --user=developer --namespace=developer --kubeconfig=developer.kubeconfig
[root@zxmaster1 kalami]# kubectl config set-context developer-context --cluster=kubernetes  --user=developer --namespace=developer --kubeconfig=developer.kubeconfig
Context "developer-context" modified.
[root@zxmaster1 kalami]#

Set the default context:

kubectl config use-context developer-context --kubeconfig=developer.kubeconfig

Set the user credentials:

kubectl config set-credentials developer --client-certificate=./developer.pem  --client-key=./developer-key.pem --embed-certs=true --kubeconfig=developer.kubeconfig

At this point, developer.kubeconfig is ready to use. The client must either set the KUBECONFIG environment variable or pass --kubeconfig=developer.kubeconfig explicitly on every kubectl invocation. In practice, it is more convenient to create a developer user on the target system, copy developer.kubeconfig into that user's $HOME/.kube directory, and rename it to config.
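
Once the RBAC binding below is in place, a quick way to verify the file is to run kubectl against it directly (until the binding exists this will return a Forbidden error):

kubectl --kubeconfig=developer.kubeconfig get pods -n developer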

Distribute the config file to the user:

 scp developer.kubeconfig 192.168.34.21:/home/developer/.kube/config
[root@zxmaster1 kalami]# ls
developer.csr  developer-csr.json  developer-key.pem  developer.kubeconfig  developer-ns.yaml  developer.pem
[root@zxmaster1 kalami]#
[root@zxmaster1 kalami]# scp developer.kubeconfig 192.168.34.21:/home/developer/.kube/config
[email protected]'s password:
developer.kubeconfig                               100% 5670   181.6KB/s   00:00
[root@zxmaster1 kalami]#

RBAC Role Binding

Skip this section if these objects were already created.

Creating the Role

role-developer.yaml

[root@zxmaster1 kalami]# cat role-developer.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: developer
  name: role-developer-dev
rules:
- apiGroups: [""]
  resources: ["pods","pods/log","pods/exec"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["extensions", "apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apps"]
  resources: ["statefulsets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "watch","create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["replicationcontrollers"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apps"]
  resources: ["replicasets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["autoscaling"]
  resources: ["horizontalpodautoscalers"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["batch"]
  resources: ["cronjobs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apps"]
  resources: ["daemonsets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["get", "list", "watch","create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch","create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch"]

Binding the Role

rolebind-developer.yaml

[root@zxmaster1 kalami]# cat rolebind-developer.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-dev-rolebind
  namespace: developer
subjects:
- kind: User
  name: developer
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: role-developer-dev
  apiGroup: rbac.authorization.k8s.io
[root@zxmaster1 kalami]#
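
Apply the RoleBinding, then verify the grant with kubectl auth can-i (a quick check; --as impersonates the developer user):

kubectl apply -f rolebind-developer.yaml
kubectl auth can-i list pods --as=developer -n developer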

Upgrading the Cluster with kubeadm

Official introduction to upgrading kubeadm clusters: Upgrading kubeadm clusters | Kubernetes

Upgrade order:

  • Upgrade the primary control plane node
  • Upgrade the other control plane nodes
  • Upgrade the worker nodes

Checking the Current Cluster Version

Check the kubeadm version with kubeadm version:

[root@zxmaster1 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"9dd794e454ac32d97cde41ae10be801ae98f75df", GitTreeState:"clean", BuildDate:"2021-03-18T01:07:09Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

Check the node and component versions with kubectl get nodes -o wide:

[root@zxmaster1 ~]# kubectl get nodes -o wide
NAME        STATUS   ROLES    AGE    VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
zxmaster1   Ready    master   362d   v1.18.2   192.168.34.10   <none>        CentOS Linux 7 (Core)   5.11.11-1.el7.elrepo.x86_64   docker://19.3.7
zxmaster2   Ready    master   362d   v1.18.2   192.168.34.11   <none>        CentOS Linux 7 (Core)   5.11.11-1.el7.elrepo.x86_64   docker://19.3.7
zxmaster3   Ready    master   362d   v1.18.2   192.168.34.12   <none>        CentOS Linux 7 (Core)   5.11.11-1.el7.elrepo.x86_64   docker://19.3.7
zxworker1   Ready    <none>   362d   v1.18.2   192.168.34.20   <none>        CentOS Linux 7 (Core)   5.11.11-1.el7.elrepo.x86_64   docker://19.3.7
zxworker2   Ready    <none>   362d   v1.18.2   192.168.34.21   <none>        CentOS Linux 7 (Core)   5.11.11-1.el7.elrepo.x86_64   docker://19.3.7
zxworker3   Ready    <none>   287d   v1.18.2   192.168.34.22   <none>        CentOS Linux 7 (Core)   5.11.12-1.el7.elrepo.x86_64   docker://19.3.7

Note: k8s does not support skipping minor versions during an upgrade; you must go one minor release at a time. For example, a kubeadm binary several minor versions ahead of the cluster refuses to even plan the upgrade:

[root@zxmaster1 ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[upgrade/config] FATAL: this version of kubeadm only supports deploying clusters with the control plane version >= 1.20.0. Current version: v1.18.2
To see the stack trace of this error execute with --v=5 or higher

Checking the Versions Available for Upgrade

yum list --showduplicates kubeadm --disableexcludes=kubernetes
[root@zxmaster1 ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes
Loaded plugins: fastestmirror, langpacks
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
Determining fastest mirrors
 * base: mirrors.aliyun.com
 * elrepo: mirrors.tuna.tsinghua.edu.cn
 * epel: mirrors.tuna.tsinghua.edu.cn
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Installed Packages
kubeadm.x86_64                                                                               1.18.2-0                                                                                @kubernetes
Available Packages
kubeadm.x86_64                                                                               1.6.0-0                                                                                 kubernetes
kubeadm.x86_64                                                                               1.6.1-0                                                                                 kubernetes
kubeadm.x86_64                                                                               1.6.2-0                                                                                 kubernetes
kubeadm.x86_64                                                                               1.6.3-0                                                                                 kubernetes                                      
. . .

kubeadm.x86_64                                                                               1.20.2-0                                                                                 kubernetes
kubeadm.x86_64                                                                               1.20.4-0                                                                                 kubernetes
kubeadm.x86_64                                                                               1.20.5-0                                                                                 kubernetes
kubeadm.x86_64                                                                               1.21.0-0                                                                                 kubernetes

Upgrading the Control Plane Nodes

Upgrade the control plane nodes one at a time. First pick a control plane node to upgrade; that node must have the /etc/kubernetes/admin.conf file. A drain/uncordon sketch follows.
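
Per the official guide, you would typically drain a node before upgrading it and uncordon it afterwards. A minimal sketch, using a node name from this cluster:

# evict workloads from the node before the upgrade
kubectl drain zxmaster1 --ignore-daemonsets
# ... upgrade this node as described below ...
# make the node schedulable again
kubectl uncordon zxmaster1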

Removing an old kubeadm package with yum

If a download or installation error occurs during the process, you can uninstall the package with yum remove.

yum remove -y kubeadm-1.19.9-0
[root@zxmaster1 ~]# yum remove -y kubeadm-1.19.9-0
Loaded plugins: fastestmirror, langpacks
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.19.9-0 will be erased
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================================================================
 Package                 Arch                  Version               Repository                     Size
============================================================================================================================
Removing:
 kubeadm                 x86_64                1.19.9-0              @kubernetes                    37 M

Transaction Summary
============================================================================================================================
Remove  1 Package

Installed size: 37 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Erasing    : kubeadm-1.19.9-0.x86_64                                                                                                                                                      1/1
  Verifying  : kubeadm-1.19.9-0.x86_64                                                                                                                                                      1/1

Removed:
  kubeadm.x86_64 0:1.19.9-0

Complete!
[root@zxmaster1 ~]#

Installing the specified kubeadm version with yum

yum install -y kubeadm-1.20.0-0 --disableexcludes=kubernetes
[root@zxmaster1 ~]# yum install -y kubeadm-1.20.0-0 --disableexcludes=kubernetes
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * elrepo: mirrors.tuna.tsinghua.edu.cn
 * epel: mirror.lzu.edu.cn
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.20.0-0 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================================================================
 Package                 Arch                    Version               Repository              Size
============================================================================================================================
Installing:
 kubeadm                  x86_64                1.20.0-0              kubernetes              8.3 M

Transaction Summary
============================================================================================================================
Install  1 Package

Total download size: 8.3 M
Installed size: 37 M
Downloading packages:
91e0f0a3a10ab757acf9611e8b81e1b272d76a5c400544a254d2c34a6ede1c11-kubeadm-1.20.0-0.x86_64.rpm                                                                              | 8.3 MB  00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : kubeadm-1.20.0-0.x86_64                                                                                                                                                      1/1
  Verifying  : kubeadm-1.20.0-0.x86_64                                                                                                                                                      1/1

Installed:
  kubeadm.x86_64 0:1.20.0-0

Complete!

Reference: Upgrading kubeadm clusters | Kubernetes

Verifying the Upgrade Plan

kubeadm upgrade plan
[root@zxmaster1 ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.18.2
[upgrade/versions] kubeadm version: v1.19.9
I0125 10:31:35.569701   31304 version.go:255] remote version is much newer: v1.23.2; falling back to: stable-1.19
[upgrade/versions] Latest stable version: v1.19.16
[upgrade/versions] Latest stable version: v1.19.16
[upgrade/versions] Latest version in the v1.18 series: v1.18.20
[upgrade/versions] Latest version in the v1.18 series: v1.18.20

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     6 x v1.18.2   v1.18.20

Upgrade to the latest version in the v1.18 series:

COMPONENT                 CURRENT   AVAILABLE
kube-apiserver            v1.18.2   v1.18.20
kube-controller-manager   v1.18.2   v1.18.20
kube-scheduler            v1.18.2   v1.18.20
kube-proxy                v1.18.2   v1.18.20
CoreDNS                   1.6.7     1.7.0
etcd                      3.4.3-0   3.4.3-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.18.20

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     6 x v1.18.2   v1.19.16

Upgrade to the latest stable version:

COMPONENT                 CURRENT   AVAILABLE
kube-apiserver            v1.18.2   v1.19.16
kube-controller-manager   v1.18.2   v1.19.16
kube-scheduler            v1.18.2   v1.19.16
kube-proxy                v1.18.2   v1.19.16
CoreDNS                   1.6.7     1.7.0
etcd                      3.4.3-0   3.4.13-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.19.16

Note: Before you can perform this upgrade, you have to update kubeadm to v1.19.16.

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
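
Based on the plan output above and the official upgrade procedure, the remaining steps would look roughly like this sketch (run kubeadm upgrade apply on the first control-plane node only; the version must match what the plan reported):

# on the first control-plane node
kubeadm upgrade apply v1.19.16

# on each remaining control-plane node
kubeadm upgrade node

# then on every node: upgrade kubelet/kubectl and restart the kubelet
yum install -y kubelet-1.19.16-0 kubectl-1.19.16-0 --disableexcludes=kubernetes
systemctl daemon-reload
systemctl restart kubelet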