
Kingsoft Cloud: Using KFS Storage Volumes


You can use Kingsoft Cloud KFS storage volumes in Kingsoft Cloud Container Service (KCE) Kubernetes clusters.

Currently, Kingsoft Cloud provides two ways to mount KFS in Kubernetes:

Static volumes

A KFS static volume can be used in either of the following two ways:

Directly as a volume

Via a PV/PVC

Dynamic volumes

Prerequisites

Mounting a file system (KFS) requires an existing file system. If you have not created one yet, create a file system first. For details, see Creating a File System and Mount Point.

Notes

Kingsoft Cloud KFS is shared storage. It can serve multiple Pods at the same time, i.e. a single PVC can be used by multiple Pods simultaneously.

Do not delete a file system's mount point before the file system has been unmounted; otherwise the operating system may hang.

To use KFS dynamic volumes in FlexVolume mode: clusters created on or after 2021-02-24 can use the feature directly. For clusters created before 2021-02-24, you must first update the disk-provisioner component in the kube-system namespace; the YAML is provided in the appendix, and an example update command is shown below.
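For example, to update the component in an older cluster, a rough sketch (assuming the appendix YAML has been saved locally as disk-provisioner.yaml, an illustrative file name) is:

# Apply the updated disk-provisioner Deployment (its metadata targets the kube-system namespace)
kubectl apply -f disk-provisioner.yaml
# Confirm the replacement Pod is running
kubectl -n kube-system get pods -l app=disk-provisioner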

Viewing the file system

In the KFS console, view the mount point details of the file system. The examples in this document use:

Server: 10.0.1.xx

Mount path: /cfs-eHhkjGxxxx
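If you want to confirm the mount point is reachable before using it in Kubernetes, a minimal manual check from a cluster node might look like the following sketch (it assumes an NFS client is installed on the node; /mnt/kfs is just an illustrative local path):

sudo mkdir -p /mnt/kfs
# Mount the share with the same NFS v3 options used in the examples below
sudo mount -t nfs -o vers=3,nolock 10.0.1.xx:/cfs-eHhkjGxxxx /mnt/kfs
df -h /mnt/kfs
sudo umount /mnt/kfs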

Static volumes

FlexVolume mode

Using it directly as a volume

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kfs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: "kfs"
          mountPath: "/data"
      volumes:
      - name: "kfs"
        nfs:
          server: "10.0.1.xx"
          path: "/cfs-eHhkjGxxxx"
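To try this out, one possible sketch (assuming the manifest above is saved as kfs-volume-deploy.yaml, an illustrative file name) is to apply it and check that the NFS share is visible inside a Pod:

kubectl apply -f kfs-volume-deploy.yaml
kubectl get pods -l app=nginx
# Check that /data is an NFS mount inside one Pod of the Deployment
kubectl exec deploy/kfs -- df -h /data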

CSI mode

Using a PV/PVC

Create a PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kfs-pv
spec:
  storageClassName: "kfs"
  capacity:
    storage: 100Mi
  accessModes:
  - ReadWriteMany
  mountOptions:
  - nfsvers=3
  nfs:
    server: "10.0.1.xx"
    path: "/cfs-eHhkjGxxxx"

Create a PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kfs-pvc
spec:
  storageClassName: "kfs"
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Mi

Create a Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kfs-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: kfs
          mountPath: "/data"
      volumes:
      - name: kfs
        persistentVolumeClaim:
          claimName: kfs-pvc
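After applying the PV, PVC, and Deployment manifests above, you can confirm that the static volume is bound before the Pods use it. A minimal check, as a sketch:

kubectl get pv kfs-pv     # expect STATUS Bound
kubectl get pvc kfs-pvc   # expect STATUS Bound
kubectl get pods -l app=nginx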

Dynamic volumes

FlexVolume mode

Parameter description

- server: Address of the KFS mount point (required).
- path: KFS directory to mount (required; the subdirectory is created automatically if it does not exist).
- archiveOnDelete: Controls how the KFS subdirectory is handled when the PVC and PV are deleted. If reclaimPolicy is Delete and archiveOnDelete is false, the remote directory and its data are deleted directly; use with caution. If reclaimPolicy is Delete and archiveOnDelete is true, the remote directory is renamed and kept as a backup. If reclaimPolicy is Retain, the remote directory is left untouched. (Optional; defaults to false.)
- storageType: Set to ksc/kfs to use KFS storage (required).

Create a StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ksc-kfs
parameters:
  server: 10.0.1.xx
  path: /cfs-eHhkjGxxxx/test-path
  archiveOnDelete: "false"
  storageType: ksc/kfs
provisioner: ksc/storage

Create a StatefulSet:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: test
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: www
          mountPath: "/data"
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: ksc-kfs
      resources:
        requests:
          storage: 10Gi
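Once the StorageClass and StatefulSet above are applied, the provisioner creates one PVC per replica, named after the claim template and StatefulSet (www-web-0, www-web-1), typically backed by a subdirectory under the configured path. A quick check, as a sketch (the file name is illustrative):

kubectl apply -f kfs-statefulset.yaml
kubectl get pvc   # expect www-web-0 and www-web-1 to reach Bound
kubectl get pv    # the dynamically provisioned volumes appear here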

CSI mode

Note: KFS is supported only by csi-driver 2.0.0 and later. If your component version is too low, submit a ticket so that the backend can upgrade the component.

Parameter description

- server: Address of the KFS mount point.
- share: KFS directory to mount, i.e. the mount path shown in "Prerequisites > Viewing the file system".
- reclaimPolicy: Reclaim policy applied when the persistent volume is deleted. NFS currently supports only Retain: when the PVC object is deleted, the PV still exists and the underlying volume enters the Released state; the user must reclaim the resources manually.
- volumeBindingMode: Controls how the persistent volume is bound.
- mountOptions: Options used when mounting KFS. vers specifies the NFS version; nolock disables local file locking; proto specifies the transport protocol used for the NFS mount; noresvport disables the use of a reserved port.

Create a StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kfsplugin
provisioner: com.ksc.csi.nfsplugin
allowVolumeExpansion: false
parameters:
  server: 10.0.1.xx
  share: /cfs-eHhkjGxxxx
reclaimPolicy: Retain
volumeBindingMode: Immediate
mountOptions:
- vers=3
- nolock
- proto=tcp
- noresvport

Create a PVC:

apiVersion: "v1" kind: "PersistentVolumeClaim" metadata: name: "pvc-kfs" namespace: "default" spec: accessModes: - "ReadWriteMany" resources: requests: storage: "100Mi" storageClassName: "kfsplugin"

Create a Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server-deployment-kfs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: mypvc
          mountPath: /usr/share/nginx/html
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: pvc-kfs
          readOnly: false

Verification

# Check the Pod and PVC status. The binding succeeds when the Pod is Running and the PVC is Bound.
kubectl get pods
kubectl get pvc
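To further confirm that the volume is shared across replicas, a sketch based on the CSI example above (which mounts claim pvc-kfs at /usr/share/nginx/html) is to write a file from one Pod and read it from another. The Pod names below are placeholders; substitute real names from kubectl get pods:

# List the Pods of the example Deployment
kubectl get pods -l app=nginx -o name
# Write a test file from one Pod
kubectl exec <pod-1> -- sh -c 'echo hello-kfs > /usr/share/nginx/html/test.txt'
# Read it back from another Pod; seeing the same content confirms shared storage
kubectl exec <pod-2> -- cat /usr/share/nginx/html/test.txt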

Appendix

The disk-provisioner YAML file is as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: disk-provisioner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: disk-provisioner
  replicas: 1
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        app: disk-provisioner
    spec:
      dnsPolicy: Default
      tolerations:
      # this taint is set by all kubelets running `--cloud-provider=external`
      - key: "node.cloudprovider.kubernetes.io/uninitialized"
        value: "true"
        effect: "NoSchedule"
      containers:
      - image: hub.kce.ksyun.com/ksyun/disk-provisioner:latest
        name: ebs-provisioner
        env:
        - name: OPENAPI_ENDPOINT
          value: "internal.api.ksyun.com"
        - name: OPENAPI_PREFIX
          value: "http"
        volumeMounts:
        - name: kubeconfig
          mountPath: /root/.kube/config
        - name: clusterinfo
          mountPath: /opt/app-agent/arrangement/clusterinfo
      - image: hub.kce.ksyun.com/ksyun/disk-provisioner:latest
        name: ksc-storage-provisioner
        securityContext:
          privileged: true # do mount
        args:
        - --provisioner=ksc/storage
        env:
        - name: OPENAPI_ENDPOINT
          value: "internal.api.ksyun.com"
        - name: OPENAPI_PREFIX
          value: "http"
        volumeMounts:
        - name: kubeconfig
          mountPath: /root/.kube/config
        - name: clusterinfo
          mountPath: /opt/app-agent/arrangement/clusterinfo
      volumes:
      - name: kubeconfig
        hostPath:
          path: /root/.kube/config
      - name: clusterinfo
        hostPath:
          path: /opt/app-agent/arrangement/clusterinfo


