
[Ops Knowledge Master Series] Kubernetes, the Ops Power Tool, Tutorial 15 (Data Persistence Explained: Persistent Volume + Persistent Volume Claim + Storage Class)

This article continues with more brain-burning material: PV, PVC, and SC. These three resources handle the mounting of storage volumes, and together they decouple workloads from the underlying data storage.

Table of Contents

PV, PVC, and SC Architecture Overview

I. Persistent Volume (PV)

II. Persistent Volume Claim (PVC)

III. Referencing a PVC from a Pod

IV. Deleting a PVC to Verify the PV Reclaim Policy

V. Deploying an NFS Dynamic Storage Class


PV, PVC, and SC Architecture Overview

The traditional approach mounts a Pod's data directly onto NFS. By introducing PV and PVC, we can decouple the Pod from the storage backend.

Suppose we need to migrate data from NFS to Ceph. Deploying both NFS and Ceph is the storage team's job, while the Kubernetes team only has to wire Ceph into the Pods: a Pod declares how much storage it needs through a PVC, the PVC is automatically bound to a matching PV, and the PV in turn mounts the Ceph backend. The Kubernetes team does not need to manage the PV directly; once the PVC is declared, the chain connects automatically.

However, the storage team may not understand PVs either, since a PV is an intermediate object tied both to Ceph and to the PVC. To close this gap we can introduce a dynamic StorageClass (SC) that creates PVs automatically, so a PVC only needs to reference the SC.

I. Persistent Volume (PV)

A PV is a Kubernetes resource object that represents a persistent storage volume in the cluster. A PV can be backed by NFS, iSCSI, local disk, and other storage media, and it can be shared among multiple Pods.

Reference links: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming

1. Write the PV resource manifest

[root@Master231 persistentvolumes]# cat manual-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: koten-linux-pv01
  labels:
    author: koten
spec:
   # Access modes for the PV; common values are "ReadWriteOnce", "ReadOnlyMany", and "ReadWriteMany":
   #   ReadWriteOnce ("RWO"):
   #      The volume can be mounted read-write by a single worker node, though multiple Pods on that node may access it simultaneously.
   #   ReadOnlyMany ("ROX"):
   #      The volume can be mounted read-only by many worker nodes.
   #   ReadWriteMany ("RWX"):
   #      The volume can be mounted read-write by many worker nodes.
   #   ReadWriteOncePod ("RWOP"):
   #      The volume can be mounted read-write by a single Pod.
   #      Use ReadWriteOncePod if you want to ensure that only one Pod in the whole cluster can read or write the PVC.
   #      This is only supported for CSI volumes and Kubernetes 1.22+.
   accessModes:
   - ReadWriteMany
   # Declare the volume type as nfs
   nfs:
     path: /koten/data/kubernetes/pv/linux/pv001
     server: 10.0.0.231
   # Reclaim policy for the volume; common values are "Retain" and "Delete":
   #    Retain:
   #       The "retain" policy allows manual reclamation of the resource.
   #       When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered "Released".
   #       Until an administrator manually reclaims the resource, no other Pod can use it directly.
   #    Delete:
   #       For volume plugins that support this policy, Kubernetes deletes the PV and the data on the backing volume.
   #    Recycle:
   #       The "recycle" policy is officially deprecated; the recommended approach is dynamic provisioning instead.
   #       If supported by the underlying volume plugin, it performs a basic scrub (rm -rf /thevolume/*) and makes the volume available again for a new claim.
   persistentVolumeReclaimPolicy: Retain
   # Declare the storage capacity
   capacity:
     storage: 2Gi

---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: koten-linux-pv02
  labels:
    author: koten
spec:
   accessModes:
   - ReadWriteMany
   nfs:
     path: /koten/data/kubernetes/pv/linux/pv002
     server: 10.0.0.231
   persistentVolumeReclaimPolicy: Retain
   capacity:
     storage: 5Gi

---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: koten-linux-pv03
  labels:
    author: koten
spec:
   accessModes:
   - ReadWriteMany
   nfs:
     path: /koten/data/kubernetes/pv/linux/pv003
     server: 10.0.0.231
   persistentVolumeReclaimPolicy: Retain
   capacity:
     storage: 10Gi

2. Create the PVs

[root@Master231 persistentvolumes]# kubectl apply -f manual-pv.yaml 
persistentvolume/koten-linux-pv01 created
persistentvolume/koten-linux-pv02 created
persistentvolume/koten-linux-pv03 created

3. View the PV resources

[root@Master231 persistentvolumes]# kubectl get pv
NAME               CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
koten-linux-pv01   2Gi        RWX            Retain           Available                                   21s
koten-linux-pv02   5Gi        RWX            Retain           Available                                   21s
koten-linux-pv03   10Gi       RWX            Retain           Available                                   21s

Field descriptions:
		NAME:
			Name of the PV.
		CAPACITY:
			Capacity of the PV.
		ACCESS MODES:
			Access modes of the PV.
		RECLAIM POLICY:
			Reclaim policy of the PV.
		STATUS:
			Status of the PV.
		CLAIM:
			Which PVC is using the PV.
		STORAGECLASS:
			Name of the SC.
		REASON:
			Reason, if the PV is in an error state.
		AGE:
			Time since creation.

4. Create the NFS directories that the PVs reference

[root@Master231 persistentvolumes]# mkdir -pv /koten/data/kubernetes/pv/linux/pv00{1..3}
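Before the PVs can mount these paths, the NFS server must also export them. A minimal sketch, assuming the server at 10.0.0.231 is configured through /etc/exports (written to a temp file here for illustration, and with illustrative export options):

```shell
# Build export entries for the three PV directories; rw/no_root_squash
# are example options, adjust them to your security requirements.
EXPORTS_FILE=$(mktemp)
for d in /koten/data/kubernetes/pv/linux/pv00{1..3}; do
    echo "$d *(rw,no_root_squash)" >> "$EXPORTS_FILE"
done
cat "$EXPORTS_FILE"
# Append these lines to the real /etc/exports, then reload with: exportfs -r
```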

II. Persistent Volume Claim (PVC)

1. Write the PVC resource manifest

[root@Master231 persistentvolumes]# cat manual-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: koten-linux-pvc
spec:
  # Declare the access modes for the claim
  accessModes:
  - ReadWriteMany
  # Declare how much storage the claim requests
  resources:
    limits:
       storage: 4Gi
    requests:
       storage: 3Gi

# Given the PVs above, this PVC will bind to the second PV (koten-linux-pv02), whose 5Gi capacity is the closest match to the request
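The capacity-fit rule can be illustrated outside the cluster: among the Available PVs whose capacity satisfies the request, the binder prefers the smallest one. A sketch using the three PVs defined above (sizes in Gi):

```shell
request=3   # the PVC asks for 3Gi
# Keep only PVs with capacity >= request, then pick the smallest match.
best=$(printf '%s\n' \
        "koten-linux-pv01 2" \
        "koten-linux-pv02 5" \
        "koten-linux-pv03 10" \
      | awk -v req="$request" '$2 >= req' \
      | sort -k2 -n | head -n1 | awk '{print $1}')
echo "$best"   # koten-linux-pv02
```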

2. Create the resource

[root@Master231 persistentvolumes]# kubectl apply -f manual-pvc.yaml 
persistentvolumeclaim/koten-linux-pvc created

3. View the PVC resources

[root@Master231 persistentvolumes]# kubectl get pv,pvc
NAME                                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                     STORAGECLASS   REASON   AGE
persistentvolume/koten-linux-pv01   2Gi        RWX            Retain           Available                                                     6m57s
persistentvolume/koten-linux-pv02   5Gi        RWX            Retain           Bound       default/koten-linux-pvc                           6m57s
persistentvolume/koten-linux-pv03   10Gi       RWX            Retain           Available                                                     6m57s

NAME                                    STATUS   VOLUME             CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/koten-linux-pvc   Bound    koten-linux-pv02   5Gi        RWX                           23s

III. Referencing a PVC from a Pod

1. Write the Deployment resource manifest

[root@Master231 persistentvolumes]# cat deploy-nginx-pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: koten-nginx-pvc
spec:
  replicas: 2
  selector:
    matchExpressions:
    - key: apps
      values: 
      - "nginx"
      operator: In
  template:
    metadata:
      labels:
        apps: nginx
    spec:
      volumes:
      - name: data
        # Declare this volume as a PVC type
        persistentVolumeClaim:
          # The PVC to reference
          claimName: koten-linux-pvc
      containers:
      - name: web
        image: harbor.koten.com/koten-web/nginx:1.25.1-alpine
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html

---

apiVersion: v1
kind: Service
metadata:
  name: koten-linux-nginx
spec:
  type: NodePort
  selector:
    apps: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080

2. Apply the Deployment manifest to create the Pods and Service

[root@Master231 persistentvolumes]# kubectl apply -f deploy-nginx-pvc.yaml 
deployment.apps/koten-nginx-pvc created
service/koten-linux-nginx created

[root@Master231 persistentvolumes]# kubectl get pods,svc
NAME                                   READY   STATUS    RESTARTS   AGE
pod/koten-nginx-pvc-78c69cd9fc-l2mbc   1/1     Running   0          58s
pod/koten-nginx-pvc-78c69cd9fc-nxqn5   1/1     Running   0          58s

NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/koten-linux-nginx   NodePort    10.200.38.44   <none>        80:30080/TCP   58s
service/kubernetes          ClusterIP   10.200.0.1     <none>        443/TCP        9m51s

3. Write data inside a Pod; the data actually lands on the storage volume above

[root@Master231 persistentvolumes]# kubectl exec -it koten-nginx-pvc-78c69cd9fc-l2mbc -- sh
/ # echo "<h1>www.koten.vip</h1>" > /usr/share/nginx/html/index.html
/ # 
[root@Master231 persistentvolumes]# 

4. Write a script to inspect which storage backend a Pod is actually using

[root@Master231 persistentvolumes]# cat /tmp/get-pods-pv-soursh.sh
#!/bin/bash
# Resolve Pod -> PVC -> PV, then print the PV's volume source.

POD_NAME=${1:-koten-nginx-pvc-78c69cd9fc-l2mbc}
# The Pod description lists the PVC it mounts under "ClaimName".
PVC_NAME=`kubectl describe pods ${POD_NAME}  | awk '/ClaimName/{print $2}'`
# The third column of "kubectl get pvc" is the bound PV's name.
PV_NAME=`kubectl get pvc ${PVC_NAME} | awk 'NR==2{print $3}'`

# Print the Source block (type, server, path); the two commented-out
# lines are equivalent alternatives.
#kubectl describe pv ${PV_NAME}  | sed -n '/Source/,/ReadOnly/p'
#kubectl describe pv ${PV_NAME}  | grep Source -A 4
kubectl describe pv ${PV_NAME}  | awk '/Source/,/ReadOnly/'

[root@Master231 persistentvolumes]# kubectl get pods 
NAME                               READY   STATUS    RESTARTS   AGE
koten-nginx-pvc-78c69cd9fc-l2mbc   1/1     Running   0          34m
koten-nginx-pvc-78c69cd9fc-nxqn5   1/1     Running   0          34m
[root@Master231 persistentvolumes]# bash /tmp/get-pods-pv-soursh.sh koten-nginx-pvc-78c69cd9fc-l2mbc
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    10.0.0.231
    Path:      /koten/data/kubernetes/pv/linux/pv002
    ReadOnly:  false
[root@Master231 persistentvolumes]# bash /tmp/get-pods-pv-soursh.sh koten-nginx-pvc-78c69cd9fc-nxqn5 
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    10.0.0.231
    Path:      /koten/data/kubernetes/pv/linux/pv002
    ReadOnly:  false

5. With two replicas behind the Service, the page staying the same no matter how often you refresh confirms that both Pods mount the same volume

IV. Deleting a PVC to Verify the PV Reclaim Policy

Reference link: Change the Reclaim Policy of a PersistentVolume | Kubernetes

There are three PV reclaim policies: Retain, Delete, and Recycle.

Retain
The "retain" policy allows manual reclamation. When the PVC is deleted, the PV still exists and the volume is considered "Released".
Until an administrator manually reclaims it, no other Pod can use the volume directly.
Tip:
(1) When testing on Kubernetes 1.15.12, deleting the PVC removed neither the data on the NFS volume nor the PV itself.

Delete
For volume plugins that support this policy, Kubernetes deletes the PV and the data on the backing volume. Use a dynamic storage class (SC) to actually see this in effect.
Volumes on AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder are deleted.
Tips:
(1) When testing on Kubernetes 1.15.12 without an SC, deleting the PVC did not delete the data on the NFS volume.
(2) When testing on Kubernetes 1.15.12 with an SC, the deletion took effect as expected.

Recycle
The "recycle" policy is officially deprecated; the recommended approach is dynamic provisioning instead, and dynamic storage classes no longer support this type.
If supported by the underlying volume plugin, it performs a basic scrub (rm -rf /thevolume/*) and makes the volume available again for a new claim.
Tip: when testing on Kubernetes 1.15.12, deleting the PVC did delete the data on the NFS volume.

Temporarily change the PV reclaim policy

[root@Master231 persistentvolumes]# kubectl get pv
NAME               CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                     STORAGECLASS   REASON   AGE
koten-linux-pv01   2Gi        RWX            Retain           Available                                                     54m
koten-linux-pv02   5Gi        RWX            Retain           Bound       default/koten-linux-pvc                           54m
koten-linux-pv03   10Gi       RWX            Retain           Available                                                     54m
[root@Master231 persistentvolumes]# 
[root@Master231 persistentvolumes]# kubectl patch pv koten-linux-pv03  -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}'
persistentvolume/koten-linux-pv03 patched
[root@Master231 persistentvolumes]# kubectl get pv
NAME               CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                     STORAGECLASS   REASON   AGE
koten-linux-pv01   2Gi        RWX            Retain           Available                                                     54m
koten-linux-pv02   5Gi        RWX            Retain           Bound       default/koten-linux-pvc                           54m
koten-linux-pv03   10Gi       RWX            Recycle          Available                                                     54m
[root@Master231 persistentvolumes]# 

Tip: changes made on the command line are essentially temporary. Once the resource is deleted and recreated, it reverts to the configuration in the resource manifest.

V. Deploying an NFS Dynamic Storage Class

Creating PVs by hand wastes resources: a workload that only needs a 4Gi PV may end up consuming a 5Gi one. Moreover, since the PV sits between the Kubernetes team and the storage team, it is unclear who should own it. A dynamic storage class solves both problems by creating PVs automatically.

1. Kubernetes has no built-in provisioner for NFS

Reference link: https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner

2. Modify the kube-apiserver settings by adding the --feature-gates=RemoveSelfLink=false flag

[root@Master231 storageclasses]# cat /etc/kubernetes/manifests/kube-apiserver.yaml
......
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=3000-50000
    - --feature-gates=RemoveSelfLink=false    # add this line
......

3. Write the SC resource manifest to define the dynamic storage class; the SC generates PVs

[root@Master231 storageclasses]# cat class.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
# provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME'
provisioner: koten/linux
parameters:
  # archiveOnDelete: "false"
  archiveOnDelete: "true"

4. Write the Deployment manifest, which serves as the resource adapter. A StorageClass provisioner is a Kubernetes plugin that translates the PV attributes defined in the StorageClass into an actual storage configuration, and dynamically creates and manages PVs. When a PVC is created, Kubernetes calls the provisioner to create a PV and bind it to the PVC.

[root@Master231 storageclasses]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: harbor.koten.com/koten-tools/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              # value: fuseim.pri/ifs
              value: koten/linux
            - name: NFS_SERVER
              value: 10.0.0.231
            - name: NFS_PATH
              value: /koten/data/kubernetes/sc
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.231
            # path: /ifs/kubernetes
            path: /koten/data/kubernetes/sc
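One common pitfall: the StorageClass's provisioner field must exactly match the PROVISIONER_NAME env value in this Deployment, otherwise the provisioner ignores the PVC and no PV is ever created. A quick sanity-check sketch, demonstrated here against inline stand-in files containing just the relevant lines (a real check would point at class.yaml and deployment.yaml):

```shell
# Stand-in copies of the relevant lines from the two manifests.
cat > /tmp/check-class.yaml <<'EOF'
provisioner: koten/linux
EOF
cat > /tmp/check-deploy.yaml <<'EOF'
            - name: PROVISIONER_NAME
              value: koten/linux
EOF
# Pull both names and compare them.
sc=$(awk '/^provisioner:/{print $2}' /tmp/check-class.yaml)
dep=$(awk '/PROVISIONER_NAME/{getline; print $2}' /tmp/check-deploy.yaml)
[ "$sc" = "$dep" ] && echo "provisioner names match: $sc"
```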

5. Write the RBAC authorization manifests

[root@Master231 storageclasses]# cat rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

6. Write the PVC manifest, which declares the storage the Pod needs and connects it to the dynamic storage class

[root@Master231 storageclasses]# cat test-claim.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
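Note that the volume.beta.kubernetes.io/storage-class annotation is the legacy form. On current Kubernetes versions the same binding is expressed with the storageClassName field in the spec; an equivalent claim would look like:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  # Modern replacement for the beta annotation above.
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```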

7. Write the Pod manifest, which references the PVC

[root@Master231 storageclasses]# cat test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: harbor.koten.com/koten-linux/alpine:latest
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim

8. Create the shared directory for the SC on the NFS server

[root@Master231 storageclasses]# mkdir -pv /koten/data/kubernetes/sc

9. Create the resources

[root@Master231 storageclasses]# kubectl apply -f .
storageclass.storage.k8s.io/managed-nfs-storage created
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner unchanged
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner unchanged
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner unchanged
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner unchanged
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner unchanged
persistentvolumeclaim/test-claim unchanged
pod/test-pod created

10. Verify the deployment

[root@Master231 storageclasses]# kubectl get pods
NAME                                      READY   STATUS      RESTARTS   AGE
nfs-client-provisioner-5ff76b5bb5-9xdd2   1/1     Running     0          21s
test-pod                                  0/1     Completed   0          21s
[root@Master231 storageclasses]# bash /tmp/get-pods-pv-soursh.sh test-pod
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    10.0.0.231
    Path:      /koten/data/kubernetes/pv/linux/pv001
    ReadOnly:  false
[root@Master231 storageclasses]# ll /koten/data/kubernetes/pv/linux/pv001
total 0
-rw-r--r-- 1 root root 0 Jun 24 21:16 SUCCESS

11. In production it is recommended to set the reclaim policy to Retain

[root@Master231 storageclasses]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
# provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME'
provisioner: koten/linux
# parameters:
  # Note: this only takes effect with "reclaimPolicy: Delete"; if the reclaim
  # policy is "reclaimPolicy: Retain", the parameter is ignored!
  # If set to "false", deleting the data does not create an "archived-*" directory under the storage path.
  # archiveOnDelete: "false"
  # If set to "true", deleting the data creates an "archived-*" directory under the storage path.
#  archiveOnDelete: "true"
# Declare the PV reclaim policy; the default is Delete
reclaimPolicy: Retain
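What archiveOnDelete does can be simulated locally: with reclaimPolicy Delete and archiveOnDelete "true", the provisioner renames the provisioned volume directory with an "archived-" prefix on PVC deletion instead of removing it, so the data survives. A sketch with a hypothetical directory name:

```shell
# Simulate the provisioner's archive step; the directory name is illustrative.
base=$(mktemp -d)
vol="default-test-claim-pvc-0001"   # hypothetical provisioned volume dir
mkdir "$base/$vol"
echo "app data" > "$base/$vol/data.txt"
# archiveOnDelete="true": rename instead of rm -rf, so the data is kept.
mv "$base/$vol" "$base/archived-$vol"
ls "$base"   # archived-default-test-claim-pvc-0001
```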

I am koten, with 10 years of ops experience, continuously sharing practical ops knowledge. Thank you for reading and following!
