K8s Deployment Guide
1. Installing and Configuring Docker CE
1.1 Docker Overview
1.1.1 What Is Docker?
Docker is an open-source application container engine. A container runs directly on the host operating system and uses a kernel-level sandbox mechanism to present a complete, virtualized operating environment. Containers expose no interfaces to one another, so isolation between container and host, and between containers, is thorough: each container has its own privilege management, an independent network and storage stack, and its own resource controls, which lets many containers coexist safely on one host.
Docker vs. virtual machines
If the physical machine is an apartment building, virtual machines are the individual apartments, and containers are the partitioned rooms inside an apartment.
1.2 Configuring Domestic (China) Mirrors
1.2.1 Prerequisites
1. Docker requires an Ubuntu kernel newer than 3.10; verify that your Ubuntu installation meets this prerequisite:
uname -r
4.18.0-21-generic (the kernel version must be at least 3.10)
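The version check can be scripted. A small sketch; the `kernel_ok` helper is illustrative, not part of any standard tooling, and relies only on POSIX shell plus GNU `sort -V`:

```shell
# kernel_ok: succeed if a kernel release string is at least 3.10.
kernel_ok() {
    ver=${1%%-*}                       # "4.18.0-21-generic" -> "4.18.0"
    min="3.10"
    # version-sort the two values; if the minimum sorts first, ver >= min
    [ "$(printf '%s\n' "$min" "$ver" | sort -V | head -n 1)" = "$min" ]
}

if kernel_ok "$(uname -r)"; then
    echo "kernel is new enough for Docker"
else
    echo "kernel older than 3.10 - upgrade before installing Docker"
fi
```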
2. Install curl and supporting tools
apt-get update && apt-get install -y curl telnet wget man \
apt-transport-https \
ca-certificates \
software-properties-common vim
3. Check the release codename
- Ubuntu 18.10
$ lsb_release -c
Codename: cosmic
4. Back up and review the current apt sources
$ cp /etc/apt/sources.list /etc/apt/sources.list.bak
$ cat /etc/apt/sources.list
1.2.2 Manual (Offline) Docker Installation
- Download
docker-ce_18.06.1~ce~3-0~ubuntu_amd64.deb
- Upload the file to the server to be installed (master).
- Log in to that server, switch to the root account, and install:
dpkg -i docker-ce_18.06.1~ce~3-0~ubuntu_amd64.deb
If you see the error
dpkg: error: dpkg frontend is locked by another process
another process is already using dpkg. Remove the lock:
sudo rm /var/lib/dpkg/lock
and retry.
If instead you see this dependency error:
itcast@master:~/package$ sudo dpkg -i docker-ce_18.06.1~ce~3-0~ubuntu_amd64.deb
[sudo] password for itcast:
Selecting previously unselected package docker-ce.
(Reading database ... 100647 files and directories currently installed.)
Preparing to unpack docker-ce_18.06.1~ce~3-0~ubuntu_amd64.deb ...
Unpacking docker-ce (18.06.1~ce~3-0~ubuntu) ...
dpkg: dependency problems prevent configuration of docker-ce:
docker-ce depends on libltdl7 (>= 2.4.6); however:
Package libltdl7 is not installed.
dpkg: error processing package docker-ce (--install):
dependency problems - leaving unconfigured
Processing triggers for man-db (2.8.4-2) ...
Processing triggers for systemd (239-7ubuntu10) ...
Errors were encountered while processing:
docker-ce
this means docker-ce depends on the system library libltdl7; install it and re-run the dpkg command:
$ apt-get install -y libltdl7
5. Verify the Docker version
$ sudo docker version
[sudo] password for itcast:
Client:
Version: 18.06.1-ce
API version: 1.38
Go version: go1.10.3
Git commit: e68fc7a
Built: Tue Aug 21 17:24:56 2018
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.06.1-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.3
Git commit: e68fc7a
Built: Tue Aug 21 17:23:21 2018
OS/Arch: linux/amd64
Experimental: false
Confirm the version is 18.06.
1.3 Starting Docker CE
1. Enable Docker at boot and start it
sudo systemctl enable docker
sudo systemctl start docker
2. Reboot, log back in, and confirm Docker is running
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3. Pull the Alpine image to warm up Docker
$ sudo docker run -it --rm alpine:latest sh
Inside the container, test the following three commands:
- `date`
- `time`
- `uname -r`
1.4 Creating the docker Group and Adding the Current User
Log in to Linux with your regular user and perform the following steps; the docker group may already exist.
A regular user cannot run docker commands yet:
itcast@master:~$ docker ps
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.38/containers/json: dial unix /var/run/docker.sock: connect: permission denied
itcast@master:~$
Add the current regular user to the docker group:
sudo groupadd docker
sudo usermod -aG docker $USER
exit
Log out, then log back in as the regular user:
itcast@master:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
docker now works without sudo.
1.5 Requesting an Aliyun Image Accelerator
If you do not want to request your own private Aliyun image accelerator, feel free to copy the author's, shown below, and use it directly:
https://ozcouv1b.mirror.aliyuncs.com
The application steps:
- Register an Aliyun account.
- Open the Container Registry console:
https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors
- In the left-hand menu, open Image Center -> Image Accelerator.
- The accelerator address shown on the right is your private accelerator URL; copy it.
- Paste it into a text file for later use.
1.6 Configuring the Docker Image Accelerator
Purpose: faster Docker image downloads.
Enable the accelerator by editing the daemon configuration file /etc/docker/daemon.json.
Create /etc/docker/daemon.json with the following content:
{
"registry-mirrors": ["https://ozcouv1b.mirror.aliyuncs.com"]
}
Restart the Docker service:
# reload all modified configuration files
sudo systemctl daemon-reload
# restart the Docker service
sudo systemctl restart docker
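Before restarting, it is worth confirming that the file you wrote is well-formed JSON, because a syntax error in daemon.json prevents the Docker daemon from starting. A minimal sketch, assuming `python3` is available; writing to a temp file first and copying it into place afterwards is just a precaution:

```shell
# Write daemon.json content to a temp file, validate it, then install it.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
  "registry-mirrors": ["https://ozcouv1b.mirror.aliyuncs.com"]
}
EOF

# python3 -m json.tool exits non-zero on invalid JSON
if python3 -m json.tool "$tmp" > /dev/null; then
    echo "daemon.json is valid JSON"
    # then: sudo cp "$tmp" /etc/docker/daemon.json && sudo systemctl restart docker
else
    echo "syntax error - fix the file before restarting docker"
fi
```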
2. Installing and Deploying Kubernetes
Environment
master --- node1 --- node2
Network layout:
master: 192.168.236.177/24, gateway 192.168.236.2
node1: 192.168.236.178/24, gateway 192.168.236.2
node2: 192.168.236.179/24, gateway 192.168.236.2
2.1 Preparing the k8s Installation Environment
2.1.1 Configuring a Domestic k8s apt Source
1. Create the source file:
sudo touch /etc/apt/sources.list.d/kubernetes.list
2. Make it writable, then add the repository line:
itcast@master:~$ sudo chmod 666 /etc/apt/sources.list.d/kubernetes.list
deb http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial main
3. Run sudo apt update to refresh the package lists. The first run fails with a signature error:
itcast@master:~$ sudo apt update
Get:1 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease [8,993 B]
Err:1 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
Hit:2 http://mirrors.aliyun.com/ubuntu cosmic InRelease
Hit:3 http://mirrors.aliyun.com/ubuntu cosmic-updates InRelease
Hit:4 http://mirrors.aliyun.com/ubuntu cosmic-backports InRelease
Hit:5 http://mirrors.aliyun.com/ubuntu cosmic-security InRelease
Err:6 https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu cosmic InRelease
Could not wait for server fd - select (11: Resource temporarily unavailable) [IP: 202.141.176.110 443]
Reading package lists... Done
W: GPG error: http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
E: The repository 'http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
The key part is:
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
Signature verification failed. Note the key ID after NO_PUBKEY: 6A030B21BA07F4FB.
4. Add the signing key
Run the following command, passing the last 8 hex digits of the NO_PUBKEY value from the error:
gpg --keyserver keyserver.ubuntu.com --recv-keys BA07F4FB
Then export the key and hand it to apt; an OK on the last line means it worked, and installation can proceed:
gpg --export --armor BA07F4FB | sudo apt-key add -
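The 8-digit argument is simply the tail of the NO_PUBKEY value. A sketch of that extraction as pure text processing, using the key ID from the error above:

```shell
# Extract the short key ID apt-key needs from an apt NO_PUBKEY error line.
ERR_LINE="W: GPG error: ... NO_PUBKEY 6A030B21BA07F4FB"

KEY=$(echo "$ERR_LINE" | grep -oE '[0-9A-F]{16}')   # full 16-hex-digit key ID
SHORT=$(echo "$KEY" | tail -c 9)                    # keep only the last 8 digits
echo "gpg --keyserver keyserver.ubuntu.com --recv-keys $SHORT"
```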
5. Run sudo apt update again to refresh the source lists. This time the Aliyun entries fail:
itcast@master:/etc/apt/sources.list.d$ sudo apt-get update
Get:1 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease [9,383 B]
Hit:2 https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu cosmic InRelease
Ign:3 http://mirrors.aliyun.com/ubuntu cosmic InRelease
Ign:4 http://mirrors.aliyun.com/ubuntu cosmic-updates InRelease
Ign:5 http://mirrors.aliyun.com/ubuntu cosmic-backports InRelease
Ign:6 http://mirrors.aliyun.com/ubuntu cosmic-security InRelease
Err:7 http://mirrors.aliyun.com/ubuntu cosmic Release
404 Not Found [IP: 116.211.222.248 80]
Err:8 http://mirrors.aliyun.com/ubuntu cosmic-updates Release
404 Not Found [IP: 116.211.222.248 80]
Err:9 http://mirrors.aliyun.com/ubuntu cosmic-backports Release
404 Not Found [IP: 116.211.222.248 80]
Err:10 http://mirrors.aliyun.com/ubuntu cosmic-security Release
404 Not Found [IP: 116.211.222.248 80]
Reading package lists... Done
E: The repository 'http://mirrors.aliyun.com/ubuntu cosmic Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://mirrors.aliyun.com/ubuntu cosmic-updates Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://mirrors.aliyun.com/ubuntu cosmic-backports Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://mirrors.aliyun.com/ubuntu cosmic-security Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
The cosmic entries now return 404 (that release was dropped from the mirror), so the apt sources were repointed at bionic:
itcast@master:/etc/apt$ sudo vim sources.list
deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
itcast@master:/etc/apt$
Then run sudo apt update once more:
itcast@master:/etc/apt$ sudo apt update
Get:1 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease [9,383 B]
Hit:2 http://mirrors.aliyun.com/ubuntu bionic InRelease
Hit:3 http://mirrors.aliyun.com/ubuntu bionic-security InRelease
Hit:4 http://mirrors.aliyun.com/ubuntu bionic-updates InRelease
Hit:5 http://mirrors.aliyun.com/ubuntu bionic-backports InRelease
Hit:6 http://mirrors.aliyun.com/ubuntu bionic-proposed InRelease
Fetched 9,383 B in 1s (14.2 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
67 packages can be upgraded. Run 'apt list --upgradable' to see them.
No errors or exceptions this time, which means the source configuration succeeded.
2.1.2 Disabling the Firewall, swap, and SELinux
1. Disable the firewall
$ sudo ufw disable
Firewall stopped and disabled on system startup
2. Turn off swap
# turn swap off for the current boot
$ sudo swapoff -a
# comment out the swap entry in /etc/fstab so it stays off after reboot
$ sudo sed -i 's/.*swap.*/#&/' /etc/fstab
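To see what that sed does before touching the real /etc/fstab, you can run it on a sample copy first (the file content below is illustrative):

```shell
# Preview the fstab edit on a throwaway file: every line mentioning swap
# gets commented out ('#&' = '#' followed by the whole matched line).
cat > /tmp/fstab.sample <<'EOF'
UUID=abcd-1234 /         ext4 defaults 0 1
/swapfile      none      swap sw       0 0
EOF

sed -i 's/.*swap.*/#&/' /tmp/fstab.sample
cat /tmp/fstab.sample
```

The root filesystem line is untouched; only the swap line gains a leading `#`.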
3. Disable SELinux
# install the SELinux utilities
$ sudo apt install -y selinux-utils
# put SELinux in permissive mode
$ setenforce 0
# reboot the operating system
$ shutdown -r now
# confirm SELinux is now off
$ sudo getenforce
Disabled    (SELinux is off)
2.2 k8s Kernel Network Settings
(1) Create the file /etc/sysctl.d/k8s.conf with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
(2) Apply the changes:
# load the bridge netfilter module first, then apply the sysctl settings
$ sudo modprobe br_netfilter
$ sudo sysctl -p /etc/sysctl.d/k8s.conf
2.3 Installing k8s
Note: switch to the root user first: $ su
1. Install Kubernetes, currently version v1.13.1:
$ apt update && apt-get install -y kubelet=1.13.1-00 kubernetes-cni=0.6.0-00 kubeadm=1.13.1-00 kubectl=1.13.1-00
2. Enable kubelet at boot, then reboot:
$ sudo systemctl enable kubelet && systemctl start kubelet
$ sudo shutdown -r now
2.4 Verifying k8s
1. Log in to the master host as root.
2. Run:
kubectl get nodes
The output is:
root@master:/home/itcast# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
This "connection refused" error is expected here, because the cluster has not been initialized yet.
3. Check the installed k8s version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
3. Building a Multi-Host Kubernetes Cluster
3.1 Creating Two Node VMs
1. In VMware, create two full clones of the master VM, named UbuntuNode1 and UbuntuNode2.
On each clone, change the hostname and assign a static IP as follows:
- Log in as root.
- Open the cloud-init configuration:
vim /etc/cloud/cloud.cfg
- Change the setting:
preserve_hostname: true
- Edit /etc/hostname so that it contains a single line: node1 (or node2).
3.2 Base Configuration of master and Nodes
3.2.1 IP Address Configuration
master
/etc/netplan/50-cloud-init.yaml
network:
  ethernets:
    ens33:
      addresses: [192.168.236.177/24]
      dhcp4: false
      gateway4: 192.168.236.2
      nameservers:
        addresses: [192.168.236.2]
      optional: true
  version: 2
Apply the new network configuration:
netplan apply
node1
/etc/netplan/50-cloud-init.yaml
network:
  ethernets:
    ens33:
      addresses: [192.168.236.178/24]
      dhcp4: false
      gateway4: 192.168.236.2
      nameservers:
        addresses: [192.168.236.2]
      optional: true
  version: 2
Apply the new network configuration:
netplan apply
node2
/etc/netplan/50-cloud-init.yaml
network:
  ethernets:
    ens33:
      addresses: [192.168.236.179/24]
      dhcp4: false
      gateway4: 192.168.236.2
      nameservers:
        addresses: [192.168.236.2]
      optional: true
  version: 2
Apply the new network configuration:
netplan apply
3.2.2 Editing the hosts File
Note: configure this on master, node1, and node2.
Log in as root, open the hosts file:
vim /etc/hosts
and add the following entries:
192.168.236.177 master
192.168.236.178 node1
192.168.236.179 node2
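A quick way to confirm all three entries landed in /etc/hosts on each machine. The `check_hosts` helper is illustrative and assumes a single space between IP and hostname, as written above:

```shell
# check_hosts: verify the three cluster entries exist in a hosts file.
check_hosts() {
    f=$1
    rc=0
    for entry in "192.168.236.177 master" \
                 "192.168.236.178 node1" \
                 "192.168.236.179 node2"; do
        grep -q "^$entry$" "$f" || { echo "missing: $entry"; rc=1; }
    done
    return $rc
}

check_hosts /etc/hosts && echo "all cluster entries present" || true
```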
3.2.3 Setting the Hostnames
master
Open the hostname file with sudo vim /etc/hostname and set its content to:
master
node1
Open the hostname file with sudo vim /etc/hostname and set its content to:
node1
node2
Open the hostname file with sudo vim /etc/hostname and set its content to:
node2
After all nodes are updated, reboot each machine: shutdown -r now
3.3 Configuring the Master Node
3.3.1 Create a Working Directory
$ mkdir /home/itcast/working
$ cd /home/itcast/working/
3.3.2 Create the kubeadm.conf Configuration File
1. Create a configuration file for kubeadm, the k8s management tool. The following steps are run in /home/itcast/working/.
Using a kubeadm configuration file lets you specify which Docker registry to pull from, which speeds up deployment on an internal network.
Generate the default configuration:
kubeadm config print init-defaults ClusterConfiguration > kubeadm.conf
2. Edit kubeadm.conf and change the following two fields:
imageRepository
kubernetesVersion
vi kubeadm.conf
# change imageRepository: k8s.gcr.io
# to the Aliyun mirror:
imageRepository: registry.cn-beijing.aliyuncs.com/imcto
# change kubernetesVersion: v1.13.0
# to:
kubernetesVersion: v1.13.1
3. Set the API server address in kubeadm.conf; this address will be used frequently later.
localAPIEndpoint:
  advertiseAddress: 192.168.236.177
  bindPort: 6443
Note: 192.168.236.177 is the IP address of the master host.
4. Configure the subnets:
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Here 10.244.0.0/16 and 10.96.0.0/12 are the internal subnets for k8s pods and services. Keep these values if possible; the flannel network configured later depends on the pod subnet.
3.3.3 Pull the Required k8s Images
1. List the images that need to be pulled:
itcast@master:~/working$ kubeadm config images list --config kubeadm.conf
registry.cn-beijing.aliyuncs.com/imcto/kube-apiserver:v1.13.1
registry.cn-beijing.aliyuncs.com/imcto/kube-controller-manager:v1.13.1
registry.cn-beijing.aliyuncs.com/imcto/kube-scheduler:v1.13.1
registry.cn-beijing.aliyuncs.com/imcto/kube-proxy:v1.13.1
registry.cn-beijing.aliyuncs.com/imcto/pause:3.1
registry.cn-beijing.aliyuncs.com/imcto/etcd:3.2.24
registry.cn-beijing.aliyuncs.com/imcto/coredns:1.2.6
2. Pull the images:
# download all images this k8s version depends on
kubeadm config images pull --config ./kubeadm.conf
3.3.4 Initialize the Kubernetes Environment
# initialize and start the cluster
$ sudo kubeadm init --config ./kubeadm.conf
For more kubeadm configuration parameters, see:
kubeadm config print-defaults
A successful start prints a lot of output; the part to remember is at the end:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.236.177:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:1cb486184af849ae88b43ed6196ebc2b8491fbfdec3c4bb4016f2d761e775f27
Follow the official hints above.
1. Run the following commands:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
2. Create and start the system service:
# enable kubelet at boot
$ sudo systemctl enable kubelet
# start the kubelet service
$ sudo systemctl start kubelet
3.3.5 Verify the Kubernetes Startup
1. Check the node list. The master's status is NotReady, which at this stage confirms initialization succeeded (no pod network has been installed yet):
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady master 2m59s v1.13.1
2. Check the cluster component status:
$ kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
At this point there is only the master, no nodes, and the master is NotReady. Before joining the nodes to the master's cluster, we first need to configure the cluster's internal communication network; this guide uses flannel.
3.3.6 Deploy the flannel Network for Cluster-Internal Communication
$ cd $HOME/working
Because the file could not be downloaded directly from this network, it was fetched on a Mac through a terminal proxy and then copied to the master:
$wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
$ export ALL_PROXY=socks5://127.0.0.1:7891
$ curl https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml >>kube-flannel.yml
$ scp kube-flannel.yml [email protected]:/home/itcast/working
Edit the file and make sure the flannel network settings under net-conf.json are correct:
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
This "10.244.0.0/16" must match the podSubnet value in ./kubeadm.conf.
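That consistency requirement can be checked mechanically. A sketch, run here against illustrative fragments of the two files; on the master you would point it at the real kubeadm.conf and kube-flannel.yml:

```shell
# Fragments standing in for the real files (illustrative content).
cat > /tmp/kubeadm.frag <<'EOF'
  podSubnet: 10.244.0.0/16
EOF
cat > /tmp/flannel.frag <<'EOF'
      "Network": "10.244.0.0/16",
EOF

# Pull the pod subnet out of kubeadm.conf and confirm flannel uses the same.
SUBNET=$(grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/[0-9]+' /tmp/kubeadm.frag)
if grep -q "\"Network\": \"$SUBNET\"" /tmp/flannel.frag; then
    echo "podSubnet and flannel Network agree: $SUBNET"
else
    echo "MISMATCH - edit kube-flannel.yml before applying it"
fi
```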
Apply the flannel configuration file; the output looks like this:
itcast@master:~/working$ kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
itcast@master:~/working$
Before installing the flannel network, kubectl get nodes showed:
itcast@master:~/working$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady master 2m59s v1.13.1
After installing the flannel network, it shows:
itcast@master:~/working$ kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready master 58m v1.13.1
The master is now in the Ready state, so the network is configured. Next, the nodes join the cluster.
3.4 Configuring the Nodes
3.4.1 Confirm the Node Environment
1. Confirm swap is off:
swapoff -a
2. Disable SELinux:
apt install -y selinux-utils
setenforce 0
3. Confirm the firewall is disabled:
ufw disable
3.5 Setting Up the Node Hosts for the k8s Cluster
1. Start the k8s background services:
# enable kubelet at boot
$ sudo systemctl enable kubelet
# start the kubelet service
$ sudo systemctl start kubelet
2. Copy /etc/kubernetes/admin.conf from the master to node1 and node2.
On the master terminal:
# copy admin.conf to node1
sudo scp /etc/kubernetes/admin.conf [email protected]:/home/itcast/
# copy admin.conf to node2
sudo scp /etc/kubernetes/admin.conf [email protected]:/home/itcast/
3. On a node1 terminal, set up the base kubectl configuration:
$ mkdir -p $HOME/.kube
$ sudo cp -i $HOME/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
4. On a node2 terminal, do the same:
$ mkdir -p $HOME/.kube
$ sudo cp -i $HOME/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
5. Join node1 and node2 to the master's cluster using the kubeadm join command:
kubeadm join 192.168.236.177:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:1cb486184af849ae88b43ed6196ebc2b8491fbfdec3c4bb4016f2d761e775f27
Note: use the token and discovery hash that kubeadm init generated on the master.
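If the original token has expired (the default TTL is 24 h), you can mint a fresh join command on the master with `kubeadm token create --print-join-command`. The discovery hash itself is just a SHA-256 of the cluster CA's public key; the standard openssl recipe below recomputes it, demonstrated here on a throwaway self-signed certificate (on the master you would use /etc/kubernetes/pki/ca.crt):

```shell
# Generate a throwaway CA cert purely for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
    -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

# Standard recipe: DER-encode the cert's public key and SHA-256 it.
HASH=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')

echo "sha256:$HASH"
```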
6. Apply the flannel network on both node hosts. First copy kube-flannel.yml from the master to each node:
# copy kube-flannel.yml to node1
sudo scp $HOME/working/kube-flannel.yml [email protected]:/home/itcast/
# copy kube-flannel.yml to node2
sudo scp $HOME/working/kube-flannel.yml [email protected]:/home/itcast/
Then apply the flannel network on each node:
itcast@node1:~$ kubectl apply -f kube-flannel.yml
itcast@node2:~$ kubectl apply -f kube-flannel.yml
7. Check that the nodes have joined the k8s cluster (it takes a while for them to become Ready):
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 9h v1.13.1
node1 Ready <none> 6h54m v1.13.1
node2 Ready <none> 6h53m v1.13.1
At one point flannel failed to come up on a node; rebooting that node resolved it.
4. Application Example
4.1 Creating a MySQL Instance
4.1.1 Define the Manifest (mysql-rc.yaml)
apiVersion: v1
kind: ReplicationController        # replication controller (RC)
metadata:
  name: mysql                      # RC name, unique cluster-wide
spec:
  replicas: 1                      # desired number of Pod replicas
  selector:
    app: mysql                     # Pods matching this label belong to the RC
  template:                        # template for creating Pod replicas
    metadata:
      labels:
        app: mysql                 # Pod label, must match the RC selector
    spec:
      containers:                  # container definitions for the Pod
      - name: mysql                # container name
        image: hub.c.163.com/library/mysql   # Docker image for the container
        ports:
        - containerPort: 3306      # port the containerized app listens on
        env:                       # environment variables injected into the container
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
Note: open the file with nano before pasting this content; pasting into an auto-indenting editor introduces lots of stray whitespace.
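Pasted manifests often pick up tabs or stray indentation, and YAML forbids tab indentation entirely. A pre-flight check catches the worst of it; the `yaml_tabs_ok` helper and the sample file are illustrative, so point it at your real mysql-rc.yaml:

```shell
# Scan a manifest for tab characters, which YAML indentation forbids.
yaml_tabs_ok() {
    if grep -q "$(printf '\t')" "$1"; then
        echo "$1: tab characters found - replace them with spaces"
        return 1
    fi
    echo "$1: no tabs"
}

# Demonstration on a small sample manifest.
printf 'metadata:\n  name: mysql\n' > /tmp/sample.yaml
yaml_tabs_ok /tmp/sample.yaml
```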
4.1.2 Load the ReplicationController Manifest
With mysql-rc.yaml created, use kubectl on the master node to publish it to the k8s cluster:
kubectl create -f mysql-rc.yaml
Check the status:
itcast@master:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mysql-22ddp 1/1 Running 0 37m
This deployment was carried out hands-on, following the walkthrough at
https://www.bilibili.com/video/BV1Fx41197hp?t=3711