Three Masters, Two Workers, and One VIP
1. Environment preparation
For Docker deployment, see:
Link: https://blog.csdn.net/qq_40914472/article/details/140693322
Master nodes: 172.16.103.206 (Docker installed), 172.16.103.192 (Docker installed), 172.16.103.193 (Docker installed)
Worker nodes: 172.16.103.196 and 172.16.103.197 (Docker installed)
2. Perform the following steps on all nodes
1. Update the system and install required packages
sudo yum update -y
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
2. Disable swap
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
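To confirm that swap is now off, you can check, for example:
swapon --show    # should print nothing
free -h          # the Swap line should read 0B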
3. Disable the firewall and SELinux
sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo sed -i 's/^SELINUX=.*$/SELINUX=disabled/' /etc/selinux/config
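The sed command above only takes effect after a reboot; to put SELinux into permissive mode immediately you can additionally run, for example:
setenforce 0
getenforce    # should report Permissive (or Disabled after a reboot)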
4. Set the hostnames
Master nodes (run the corresponding command on each node):
hostnamectl set-hostname master
hostnamectl set-hostname master2
hostnamectl set-hostname master3
Worker nodes (run the corresponding command on each node):
hostnamectl set-hostname node1
hostnamectl set-hostname node2
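You can confirm the new name on each node with, for example:
hostnamectl status | grep 'Static hostname'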
5. Configure hostname-to-IP resolution
Add the following entries to /etc/hosts on every node:
[root@localhost pki]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.103.100 api.k8s #virtual IP (VIP)
172.16.103.206 master
172.16.103.197 node1
172.16.103.196 node2
172.16.103.192 master2
172.16.103.193 master3
[root@localhost pki]#
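A quick way to confirm that the names resolve through /etc/hosts (no network traffic involved):
getent hosts api.k8s master master2 master3 node1 node2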
6. Enable IP forwarding and bridge filtering
cat >/etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
sysctl --system
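The two net.bridge.* keys require the br_netfilter kernel module. If sysctl --system complains that they do not exist, load the module first and make it persistent, for example:
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # load at boot
sysctl --system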
7. Configure the Docker cgroup driver
[root@master ~]# cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://docker.registry.cyou",
    "https://docker-cf.registry.cyou",
    "https://dockercf.jsdelivr.fyi",
    "https://docker.jsdelivr.fyi",
    "https://dockertest.jsdelivr.fyi",
    "https://mirror.aliyuncs.com",
    "https://dockerproxy.com",
    "https://mirror.baidubce.com",
    "https://docker.m.daocloud.io",
    "https://docker.nju.edu.cn",
    "https://docker.mirrors.sjtug.sjtu.edu.cn",
    "https://docker.mirrors.ustc.edu.cn",
    "https://mirror.iscas.ac.cn",
    "https://dockerhub.icu",
    "https://docker.rainbond.cc"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "bip": "172.12.0.1/24",
  "storage-driver": "overlay2"
}
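After editing daemon.json, restart Docker and confirm that the cgroup driver is now systemd, for example:
systemctl restart docker
docker info | grep -i 'cgroup driver'   # should print: Cgroup Driver: systemd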
8. Add the Kubernetes yum repository
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
9. Install kubeadm, kubelet, and kubectl
yum install -y --nogpgcheck kubelet-1.23.12 kubeadm-1.23.12 kubectl-1.23.12
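To confirm the expected versions were installed, you can run, for example:
kubeadm version -o short    # v1.23.12
kubelet --version           # Kubernetes v1.23.12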
10. Enable the services at boot
systemctl daemon-reload
systemctl enable --now docker.service
systemctl enable --now kubelet.service
Note: kubelet will restart in a crash loop until the cluster is initialized with kubeadm; this is expected at this stage.
3. Perform the following steps on all master nodes (172.16.103.206, 172.16.103.192, 172.16.103.193) to configure high availability
1. Install keepalived and haproxy
yum install haproxy keepalived -y
2. Configure haproxy
[root@localhost pki]# cat /etc/haproxy/haproxy.cfg
global
    maxconn 2000
    ulimit-n 16384
    log 127.0.0.1 local0 err
    stats timeout 30s
defaults
    log global
    mode http
    option httplog
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    timeout http-request 15s
    timeout http-keep-alive 15s
frontend monitor-in
    bind *:33305
    mode http
    option httplog
    monitor-uri /monitor
frontend k8s-master
    bind 0.0.0.0:16443
    bind 127.0.0.1:16443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    default_backend k8s-master
backend k8s-master
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server master 172.16.103.206:6443 check   #replace with your own master node name and IP
    server master2 172.16.103.192:6443 check  #replace with your own master node name and IP
    server master3 172.16.103.193:6443 check  #replace with your own master node name and IP
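Before starting the service, you can validate the configuration syntax, for example:
haproxy -c -f /etc/haproxy/haproxy.cfg   # should report that the configuration is valid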
3. Configure keepalived
[root@localhost pki]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
#health-check script; its result decides whether the VIP fails over
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER            #MASTER on the primary node, BACKUP on the other masters
    interface ens192        #network interface name on this host
    virtual_router_id 51    #VRRP router ID; unique per VRRP instance
    priority 100            #priority; set 100 on the primary node
    advert_int 1            #VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.103.100/24   #virtual IP (VIP)
    }
    track_script {
        chk_apiserver
    }
}
[root@localhost pki]#
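On master2 and master3 the file is identical except for the VRRP role and priority (and the interface name, if it differs), for example:
state BACKUP     #BACKUP on the secondary masters
priority 99      #e.g. 99 on master2 and 98 on master3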
4. Create the health-check script
[root@localhost pki]# vim /etc/keepalived/check_apiserver.sh
[root@localhost pki]# chmod +x /etc/keepalived/check_apiserver.sh
[root@localhost pki]# cat /etc/keepalived/check_apiserver.sh
#!/bin/bash
err=0
for k in $(seq 1 3); do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
[root@localhost pki]#
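With haproxy running, a manual run of the script should exit 0; if haproxy is down it stops keepalived so the VIP can fail over to another master. For example:
bash /etc/keepalived/check_apiserver.sh; echo $?   # expect 0 while haproxy is up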
5. Start keepalived and haproxy
#reload the systemd configuration
[root@master ~]# systemctl daemon-reload
#start haproxy and enable it at boot
[root@master ~]# systemctl enable --now haproxy
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
#start keepalived and enable it at boot
[root@master ~]# systemctl enable --now keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
6. Check the VIP on the master node (172.16.103.206)
#show the host's IP addresses, including the VIP
[root@master ~]# hostname -I
172.16.103.206 172.16.103.100 192.168.122.1 172.12.0.1 10.244.0.0 10.244.0.1
#test whether port 16443 on the VIP is reachable
[root@master ~]# telnet 172.16.103.100 16443
Trying 172.16.103.100...
Connected to 172.16.103.100.
Escape character is '^]'.
Connection closed by foreign host.
4. Perform the following steps on the master node (172.16.103.206)
1. Initialize the Kubernetes cluster
1. Create the kubeadm.yaml file
Create the kubeadm.yaml configuration file on this node as shown below. Alternatively, you can generate a template with kubeadm config print init-defaults and then edit it.
Fields you need to change for your own environment:
◎ advertiseAddress #the IP of your master node
◎ name #the hostname of your master node
◎ certSANs #the VIP address
◎ controlPlaneEndpoint #the VIP address and port
◎ kubernetesVersion #the Kubernetes version
◎ podSubnet #the pod CIDR
◎ serviceSubnet #the service CIDR
[root@master home]# vim kubeadm.yaml
[root@master home]# cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury   #token used to bootstrap the cluster
  ttl: 24h0m0s                     #token lifetime
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.103.206
  bindPort: 6443
nodeRegistration:                  #information about this node
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 172.16.103.100
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 172.16.103.100:16443   #address used to reach the apiserver (VIP:16443)
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.23.12       #must match the installed Kubernetes version
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.10.1/18
  serviceSubnet: 172.17.0.1/16    #pod and service CIDRs must not overlap with the host network
scheduler: {}
[root@master home]#
2. Migrate the configuration file to the current format
[root@master home]# kubeadm config migrate --old-config kubeadm.yaml --new-config new.yaml
#copy new.yaml to the other master nodes
[root@master home]# scp new.yaml 172.16.103.192:/root
[email protected]'s password:
new.yaml 100% 997 1.7MB/s 00:00
[root@master home]# scp new.yaml 172.16.103.193:/root
[email protected]'s password:
new.yaml 100% 997 44.7KB/s 00:00
3. Initialize the Kubernetes cluster
[root@master home]# kubeadm config images list --config new.yaml
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.12
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.12
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.12
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.12
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
[root@master home]# kubeadm init --config new.yaml --upload-certs
[init] Using Kubernetes version: v1.23.12
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 26.1.3. Latest validated version: 20.10
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [172.17.0.1 172.16.103.206 172.16.103.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [172.16.103.206 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [172.16.103.206 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.536892 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
0ddea741a09a0257a90f4a2e386a3c4a4d65b7f0635007e4434e932ec81d8e1c
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 7t2weq.bjbawausm0jaxury
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 172.16.103.100:16443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:dd55742c3a8c0a68d4cc976a6d9209d85571f260045c708b6612d4b7ac4537e0 \
--control-plane --certificate-key 0ddea741a09a0257a90f4a2e386a3c4a4d65b7f0635007e4434e932ec81d8e1c
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.103.100:16443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:dd55742c3a8c0a68d4cc976a6d9209d85571f260045c708b6612d4b7ac4537e0
4. Note:
After a successful initialization, the output contains two kubeadm join commands:
kubeadm join 172.16.103.100:16443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:dd55742c3a8c0a68d4cc976a6d9209d85571f260045c708b6612d4b7ac4537e0 \
--control-plane --certificate-key 0ddea741a09a0257a90f4a2e386a3c4a4d65b7f0635007e4434e932ec81d8e1c
This command is used later to join the remaining master nodes to the cluster.
kubeadm join 172.16.103.100:16443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:dd55742c3a8c0a68d4cc976a6d9209d85571f260045c708b6612d4b7ac4537e0
This command is used later to join the worker nodes to the cluster.
2. Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
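kubectl can now talk to the new control plane; note that the node will usually report NotReady until the network plugin from the next step is installed:
kubectl get nodes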
3. Install the network plugin
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
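Note: the stock kube-flannel.yml assumes the pod network 10.244.0.0/16. Since this cluster was initialized with podSubnet 172.16.10.1/18, you may need to download the manifest and change the Network field in its net-conf.json before applying it, roughly like this:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
#edit the net-conf.json section so that "Network": "172.16.10.1/18"
kubectl apply -f kube-flannel.yml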
4. Regenerate the kubeadm join command
#generate a new token after the old one expires:
[root@master home]# kubeadm token create --print-join-command
kubeadm join 172.16.103.100:16443 --token ljoxyd.vo1lrl2v8q4l6bpd --discovery-token-ca-cert-hash sha256:dd55742c3a8c0a68d4cc976a6d9209d85571f260045c708b6612d4b7ac4537e0
#a new master node also needs a fresh --certificate-key:
[root@master home]# kubeadm init phase upload-certs --upload-certs
I0808 23:48:53.013818 3875601 version.go:255] remote version is much newer: v1.30.3; falling back to: stable-1.23
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
0effc4f57a7455c59ce45f9be2086e4530d83ba703b845eea289b17a9d1a08cb
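Combining the two outputs gives a join command for an additional master node, for example (using the values generated above):
kubeadm join 172.16.103.100:16443 --token ljoxyd.vo1lrl2v8q4l6bpd \
    --discovery-token-ca-cert-hash sha256:dd55742c3a8c0a68d4cc976a6d9209d85571f260045c708b6612d4b7ac4537e0 \
    --control-plane --certificate-key 0effc4f57a7455c59ce45f9be2086e4530d83ba703b845eea289b17a9d1a08cb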
5. Perform the following steps on the worker nodes (172.16.103.196 and 172.16.103.197)
1. Join the cluster with kubeadm join
Use the kubeadm join command printed when the cluster was initialized on the master node to join the worker nodes. For example:
[root@node2 ~]# kubeadm join 172.16.103.100:16443 --token p7buuf.2tx3bwb61h30mmzs --discovery-token-ca-cert-hash sha256:dd55742c3a8c0a68d4cc976a6d9209d85571f260045c708b6612d4b7ac4537e0
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 26.1.3. Latest validated version: 20.10
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
6. Perform the following steps on the remaining master nodes (172.16.103.192 and 172.16.103.193)
1. Use the control-plane kubeadm join command printed when the cluster was initialized to join these master nodes. For example:
#on a master node
[root@localhost pki]# kubeadm join 172.16.103.100:16443 --token 7t2weq.bjbawausm0jaxury \
> --discovery-token-ca-cert-hash sha256:dd55742c3a8c0a68d4cc976a6d9209d85571f260045c708b6612d4b7ac4537e0 \
> --control-plane --certificate-key 0ddea741a09a0257a90f4a2e386a3c4a4d65b7f0635007e4434e932ec81d8e1c
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 26.1.3. Latest validated version: 20.10
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master2] and IPs [172.16.103.192 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master2] and IPs [172.16.103.192 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master2] and IPs [172.17.0.1 172.16.103.192 172.16.103.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node master2 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
2. Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
7. Verify the cluster status
On a master node, run the following command to verify that all nodes have joined the cluster and are in the Ready state:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 39m v1.23.12
master2 Ready control-plane,master 37m v1.23.12
master3 Ready control-plane,master 31m v1.23.12
node1 Ready <none> 18m v1.23.12
node2 Ready <none> 18m v1.23.12
[root@master ~]#
If every node shows the Ready status, the highly available cluster has been set up successfully.
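You can also confirm that the system pods (apiserver, controller-manager, scheduler and etcd on each master, plus kube-proxy, CoreDNS and the flannel pods) are all Running, for example:
[root@master ~]# kubectl get pods -n kube-system -o wide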