
Deploying GlusterFS and Heketi on a k8s Cluster on Kylin Advanced Server OS V10 (银河麒麟高级服务器操作系统V10)


Preface

This article describes how to deploy GlusterFS and Heketi on top of an already-installed single-node k8s cluster running on Kylin Advanced Server OS V10.

The deployment scripts used here are based on the official gluster project https://github.com/gluster/gluster-kubernetes and its arm64 port https://github.com/hknarutofk/gluster-kubernetes.

 

Prerequisites

A single-node k8s cluster installed on Kylin Advanced Server OS V10: https://blog.csdn.net/m0_46573967/article/details/112935319

 

1. Prepare a dedicated blank disk

On Great Wall Cloud (长城云), allocate an additional cloud disk to the VM running Kylin Advanced Server OS V10. Other virtualization platforms work similarly; on a physical machine, simply attach a disk to a spare interface.

The newly added cloud disk is recognized automatically by the kernel.
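
To confirm that the kernel sees the new disk, you can list the block devices. This is a minimal check, assuming the blank disk shows up as /dev/sda (matching the topology.json used later); the device name may differ on your system:

# List all block devices; the new disk should appear with no partitions and no mountpoint
lsblk
# Show filesystem signatures on the assumed blank disk (should be empty)
lsblk -f /dev/sda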

2. Download the gluster-kubernetes scripts

Switch to the root user and clone the repository:


  
[yeqiang@192-168-110-185 桌面]$ sudo su
[root@192-168-110-185 桌面]# cd ~
[root@192-168-110-185 ~]# git clone --depth=1 https://github.com/hknarutofk/gluster-kubernetes.git
正克隆到 'gluster-kubernetes'...
remote: Enumerating objects: 157, done.
remote: Counting objects: 100% (157/157), done.
remote: Compressing objects: 100% (132/132), done.
remote: Total 157 (delta 21), reused 95 (delta 14), pack-reused 0
接收对象中: 100% (157/157), 659.85 KiB | 7.00 KiB/s, 完成.
处理 delta 中: 100% (21/21), 完成.

3. Deploy single-node GlusterFS and heketi services

Since resources are limited, we deploy a single GlusterFS instance and a single heketi instance directly on the current server node.

Get the node information:


  
[root@192-168-110-185 ~]# kubectl get nodes --show-labels
NAME              STATUS   ROLES    AGE     VERSION   LABELS
192.168.110.185   Ready    master   3h19m   v1.18.6   beta.kubernetes.io/arch=arm64,beta.kubernetes.io/os=linux,kubernetes.io/arch=arm64,kubernetes.io/hostname=192.168.110.185,kubernetes.io/os=linux,kubernetes.io/role=master

Note that the node name happens to be the same as the node's IP address.

Create the topology.json file:


  
[root@192-168-110-185 ~]# cd gluster-kubernetes/deploy/
[root@192-168-110-185 deploy]# vim topology.json

  
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "192.168.110.185"
              ],
              "storage": [
                "192.168.110.185"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sda"
          ]
        }
      ]
    }
  ]
}

Note

The node.hostnames.manage array contains node names, not IP addresses! It is only because our node happens to be named after its IP address that the two look identical.

The node.hostnames.storage array contains the IP address of the target node on which GlusterFS will be installed.

The devices array lists the paths of the disks (blank disks) prepared for GlusterFS on that node.
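
If heketi later fails at the "Adding device" step because the disk still carries an old partition table or filesystem signature, the device can be wiped first. This is an optional and destructive step, sketched here on the assumption that /dev/sda really is the spare blank disk from step 1 and not the system disk:

# Show any existing partition/filesystem signatures on the device
wipefs /dev/sda
# Clear all signatures so heketi can initialize the device.
# WARNING: destructive -- erases everything on /dev/sda.
wipefs -a /dev/sda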

Install glusterfs-fuse on the target node:


  
[root@192-168-110-185 ~]# yum install glusterfs-fuse -y
Last metadata expiration check: 0:11:06 ago on 2021年01月22日 星期五 14时43分51秒.
Dependencies resolved.
================================================================================
 Package                Arch      Version          Repository       Size
================================================================================
Installing:
 glusterfs              aarch64   7.0-4.ky10       ks10-adv-os     3.5 M
Installing dependencies:
 python3-gluster        aarch64   7.0-4.ky10       ks10-adv-os      15 k
 python3-prettytable    noarch    0.7.2-18.ky10    ks10-adv-os      33 k
 rdma-core              aarch64   20.1-6.ky10      ks10-adv-os     494 k

Transaction Summary
================================================================================
Install  4 Packages

Total download size: 4.0 M
Installed size: 23 M
Downloading Packages:
(1/4): python3-gluster-7.0-4.ky10.aarch64.rpm      90 kB/s |  15 kB   00:00
(2/4): python3-prettytable-0.7.2-18.ky10.noarch   116 kB/s |  33 kB   00:00
(3/4): rdma-core-20.1-6.ky10.aarch64.rpm          402 kB/s | 494 kB   00:01
(4/4): glusterfs-7.0-4.ky10.aarch64.rpm           767 kB/s | 3.5 MB   00:04
--------------------------------------------------------------------------------
Total                                             883 kB/s | 4.0 MB   00:04
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                      1/1
  Installing       : rdma-core-20.1-6.ky10.aarch64                        1/4
  Running scriptlet: rdma-core-20.1-6.ky10.aarch64                        1/4
  Installing       : python3-prettytable-0.7.2-18.ky10.noarch             2/4
  Installing       : python3-gluster-7.0-4.ky10.aarch64                   3/4
  Running scriptlet: glusterfs-7.0-4.ky10.aarch64                         4/4
  Installing       : glusterfs-7.0-4.ky10.aarch64                         4/4
警告:/etc/glusterfs/glusterd.vol 已建立为 /etc/glusterfs/glusterd.vol.rpmnew
警告:/etc/glusterfs/glusterfs-logrotate 已建立为 /etc/glusterfs/glusterfs-logrotate.rpmnew
警告:/etc/glusterfs/gsyncd.conf 已建立为 /etc/glusterfs/gsyncd.conf.rpmnew
  Running scriptlet: glusterfs-7.0-4.ky10.aarch64                         4/4
  Verifying        : glusterfs-7.0-4.ky10.aarch64                         1/4
  Verifying        : python3-gluster-7.0-4.ky10.aarch64                   2/4
  Verifying        : python3-prettytable-0.7.2-18.ky10.noarch             3/4
  Verifying        : rdma-core-20.1-6.ky10.aarch64                        4/4

Installed:
  glusterfs-7.0-4.ky10.aarch64               python3-gluster-7.0-4.ky10.aarch64
  python3-prettytable-0.7.2-18.ky10.noarch   rdma-core-20.1-6.ky10.aarch64

Complete!

 

Remove the taint toleration configuration from the upstream project

Because the upstream project deploys GlusterFS on dedicated nodes, taints are set on all GlusterFS nodes. This article installs every component on a single node, so this part of the configuration needs to be removed.

Edit /root/gluster-kubernetes/deploy/kube-templates/glusterfs-daemonset.yaml

Remove the following block at the end of the file:


  
      tolerations:
      - key: glusterfs
        operator: Exists
        effect: NoSchedule

Save the file.

The complete file after modification is as follows:


  
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: glusterfs
  labels:
    glusterfs: daemonset
  annotations:
    description: GlusterFS DaemonSet
    tags: glusterfs
spec:
  selector:
    matchLabels:
      glusterfs: pod
      glusterfs-node: pod
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs: pod
        glusterfs-node: pod
    spec:
      nodeSelector:
        storagenode: glusterfs
      hostNetwork: true
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/hknaruto/glusterfs-gluster_centos-arm64:latest
        imagePullPolicy: Always
        name: glusterfs
        env:
        # alternative for /dev volumeMount to enable access to *all* devices
        - name: HOST_DEV_DIR
          value: "/mnt/host-dev"
        # set GLUSTER_BLOCKD_STATUS_PROBE_ENABLE to "1" so the
        # readiness/liveness probe validate gluster-blockd as well
        - name: GLUSTER_BLOCKD_STATUS_PROBE_ENABLE
          value: "1"
        - name: GB_GLFS_LRU_COUNT
          value: "15"
        - name: TCMU_LOGDIR
          value: "/var/log/glusterfs/gluster-block"
        resources:
          requests:
            memory: 100Mi
            cpu: 100m
        volumeMounts:
        - name: glusterfs-heketi
          mountPath: "/var/lib/heketi"
        - name: glusterfs-run
          mountPath: "/run"
        - name: glusterfs-lvm
          mountPath: "/run/lvm"
        - name: glusterfs-etc
          mountPath: "/etc/glusterfs"
        - name: glusterfs-logs
          mountPath: "/var/log/glusterfs"
        - name: glusterfs-config
          mountPath: "/var/lib/glusterd"
        - name: glusterfs-host-dev
          mountPath: "/mnt/host-dev"
        - name: glusterfs-misc
          mountPath: "/var/lib/misc/glusterfsd"
        - name: glusterfs-block-sys-class
          mountPath: "/sys/class"
        - name: glusterfs-block-sys-module
          mountPath: "/sys/module"
        - name: glusterfs-cgroup
          mountPath: "/sys/fs/cgroup"
          readOnly: true
        - name: glusterfs-ssl
          mountPath: "/etc/ssl"
          readOnly: true
        - name: kernel-modules
          mountPath: "/lib/modules"
          readOnly: true
        securityContext:
          capabilities: {}
          privileged: true
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh readiness; else systemctl status glusterd.service; fi"
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh liveness; else systemctl status glusterd.service; fi"
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
      volumes:
      - name: glusterfs-heketi
        hostPath:
          path: "/var/lib/heketi"
      - name: glusterfs-run
      - name: glusterfs-lvm
        hostPath:
          path: "/run/lvm"
      - name: glusterfs-etc
        hostPath:
          path: "/etc/glusterfs"
      - name: glusterfs-logs
        hostPath:
          path: "/var/log/glusterfs"
      - name: glusterfs-config
        hostPath:
          path: "/var/lib/glusterd"
      - name: glusterfs-host-dev
        hostPath:
          path: "/dev"
      - name: glusterfs-misc
        hostPath:
          path: "/var/lib/misc/glusterfsd"
      - name: glusterfs-block-sys-class
        hostPath:
          path: "/sys/class"
      - name: glusterfs-block-sys-module
        hostPath:
          path: "/sys/module"
      - name: glusterfs-cgroup
        hostPath:
          path: "/sys/fs/cgroup"
      - name: glusterfs-ssl
        hostPath:
          path: "/etc/ssl"
      - name: kernel-modules
        hostPath:
          path: "/lib/modules"

Run the deployment:


  
[root@192-168-110-185 ~]# cd gluster-kubernetes/deploy/
[root@192-168-110-185 deploy]# sh deploy.sh
Using Kubernetes CLI.
Checking status of namespace matching 'default':
default   Active   3h34m
Using namespace "default".
Checking for pre-existing resources...
  GlusterFS pods ...
Checking status of pods matching '--selector=glusterfs=pod':
Timed out waiting for pods matching '--selector=glusterfs=pod'.
not found.
  deploy-heketi pod ...
Checking status of pods matching '--selector=deploy-heketi=pod':
Timed out waiting for pods matching '--selector=deploy-heketi=pod'.
not found.
  heketi pod ...
Checking status of pods matching '--selector=heketi=pod':
Timed out waiting for pods matching '--selector=heketi=pod'.
not found.
  gluster-s3 pod ...
Checking status of pods matching '--selector=glusterfs=s3-pod':
Timed out waiting for pods matching '--selector=glusterfs=s3-pod'.
not found.
Creating initial resources ... /opt/kube/bin/kubectl -n default create -f /root/gluster-kubernetes/deploy/kube-templates/heketi-service-account.yaml 2>&1
serviceaccount/heketi-service-account created
/opt/kube/bin/kubectl -n default create clusterrolebinding heketi-sa-view --clusterrole=edit --serviceaccount=default:heketi-service-account 2>&1
clusterrolebinding.rbac.authorization.k8s.io/heketi-sa-view created
/opt/kube/bin/kubectl -n default label --overwrite clusterrolebinding heketi-sa-view glusterfs=heketi-sa-view heketi=sa-view
clusterrolebinding.rbac.authorization.k8s.io/heketi-sa-view labeled
OK
Marking '192.168.110.185' as a GlusterFS node.
/opt/kube/bin/kubectl -n default label nodes 192.168.110.185 storagenode=glusterfs --overwrite 2>&1
node/192.168.110.185 not labeled
Deploying GlusterFS pods.
sed -e 's/storagenode\: glusterfs/storagenode\: 'glusterfs'/g' /root/gluster-kubernetes/deploy/kube-templates/glusterfs-daemonset.yaml | /opt/kube/bin/kubectl -n default create -f - 2>&1
daemonset.apps/glusterfs created
Waiting for GlusterFS pods to start ...
Checking status of pods matching '--selector=glusterfs=pod':
glusterfs-ndxgd   1/1   Running   0   91s
OK
/opt/kube/bin/kubectl -n default create secret generic heketi-config-secret --from-file=private_key=/dev/null --from-file=./heketi.json --from-file=topology.json=topology.json
secret/heketi-config-secret created
/opt/kube/bin/kubectl -n default label --overwrite secret heketi-config-secret glusterfs=heketi-config-secret heketi=config-secret
secret/heketi-config-secret labeled
sed -e 's/\${HEKETI_EXECUTOR}/kubernetes/' -e 's#\${HEKETI_FSTAB}#/var/lib/heketi/fstab#' -e 's/\${HEKETI_ADMIN_KEY}/admin/' -e 's/\${HEKETI_USER_KEY}/user/' /root/gluster-kubernetes/deploy/kube-templates/deploy-heketi-deployment.yaml | /opt/kube/bin/kubectl -n default create -f - 2>&1
service/deploy-heketi created
deployment.apps/deploy-heketi created
Waiting for deploy-heketi pod to start ...
Checking status of pods matching '--selector=deploy-heketi=pod':
deploy-heketi-59d9fdff68-kpr87   1/1   Running   0   12s
OK
Determining heketi service URL ... OK
/opt/kube/bin/kubectl -n default exec -i deploy-heketi-59d9fdff68-kpr87 -- heketi-cli -s http://localhost:8080 --user admin --secret 'admin' topology load --json=/etc/heketi/topology.json 2>&1
Creating cluster ... ID: 5c8d62cb9f57df17d7a55cc60d5fe5ca
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node 192.168.110.185 ... ID: a326a80bfc84137329503683869d044e
        Adding device /dev/sda ... OK
heketi topology loaded.
/opt/kube/bin/kubectl -n default exec -i deploy-heketi-59d9fdff68-kpr87 -- heketi-cli -s http://localhost:8080 --user admin --secret 'admin' setup-openshift-heketi-storage --help --durability=none >/dev/null 2>&1
/opt/kube/bin/kubectl -n default exec -i deploy-heketi-59d9fdff68-kpr87 -- heketi-cli -s http://localhost:8080 --user admin --secret 'admin' setup-openshift-heketi-storage --listfile=/tmp/heketi-storage.json --durability=none 2>&1
Saving /tmp/heketi-storage.json
/opt/kube/bin/kubectl -n default exec -i deploy-heketi-59d9fdff68-kpr87 -- cat /tmp/heketi-storage.json | sed 's/heketi\/heketi:dev/registry.cn-hangzhou.aliyuncs.com\/hknaruto\/heketi-arm64:v10.2.0/g' | /opt/kube/bin/kubectl -n default create -f - 2>&1
secret/heketi-storage-secret created
endpoints/heketi-storage-endpoints created
service/heketi-storage-endpoints created
job.batch/heketi-storage-copy-job created
Checking status of pods matching '--selector=job-name=heketi-storage-copy-job':
heketi-storage-copy-job-f85d7   0/1   Completed   0   2m24s
/opt/kube/bin/kubectl -n default label --overwrite svc heketi-storage-endpoints glusterfs=heketi-storage-endpoints heketi=storage-endpoints
service/heketi-storage-endpoints labeled
/opt/kube/bin/kubectl -n default delete all,service,jobs,deployment,secret --selector="deploy-heketi" 2>&1
pod "deploy-heketi-59d9fdff68-kpr87" deleted
service "deploy-heketi" deleted
deployment.apps "deploy-heketi" deleted
replicaset.apps "deploy-heketi-59d9fdff68" deleted
job.batch "heketi-storage-copy-job" deleted
secret "heketi-storage-secret" deleted
sed -e 's/\${HEKETI_EXECUTOR}/kubernetes/' -e 's#\${HEKETI_FSTAB}#/var/lib/heketi/fstab#' -e 's/\${HEKETI_ADMIN_KEY}/admin/' -e 's/\${HEKETI_USER_KEY}/user/' /root/gluster-kubernetes/deploy/kube-templates/heketi-deployment.yaml | /opt/kube/bin/kubectl -n default create -f - 2>&1
service/heketi created
deployment.apps/heketi created
Waiting for heketi pod to start ...
Checking status of pods matching '--selector=heketi=pod':
heketi-bc754bf5d-lm2z2   1/1   Running   0   9s
OK
Determining heketi service URL ... OK
heketi is now running and accessible via http://172.20.0.17:8080 . To run
administrative commands you can install 'heketi-cli' and use it as follows:

  # heketi-cli -s http://172.20.0.17:8080 --user admin --secret '<ADMIN_KEY>' cluster list

You can find it at https://github.com/heketi/heketi/releases . Alternatively,
use it from within the heketi pod:

  # /opt/kube/bin/kubectl -n default exec -i heketi-bc754bf5d-lm2z2 -- heketi-cli -s http://localhost:8080 --user admin --secret '<ADMIN_KEY>' cluster list

For dynamic provisioning, create a StorageClass similar to this:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://172.20.0.17:8080"
  restuser: "user"
  restuserkey: "user"

Deployment complete!
[root@192-168-110-185 deploy]#

Create a StorageClass

The StorageClass printed at the end of the deployment log is only a canned example; its parameters must be adapted to the current deployment.

Check the heketi service address:


  
[root@192-168-110-185 deploy]# kubectl get svc | grep heketi
heketi                     ClusterIP   10.68.93.101   <none>   8080/TCP   3m
heketi-storage-endpoints   ClusterIP   10.68.28.192   <none>   1/TCP      5m30s

This gives the heketi service address: http://10.68.93.101:8080
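
As a quick sanity check, heketi exposes a simple /hello endpoint. From a host that can reach the ClusterIP (for example the k8s node itself), something like the following should return a greeting from heketi; the IP is the service address obtained above:

# Basic reachability check against the heketi REST API
curl http://10.68.93.101:8080/hello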

Check the administrator key

The administrator key was supplied as a command-line argument at install time; it can be found in the deploy.sh script:


  
[root@192-168-110-185 deploy]# cat deploy.sh
#!/bin/bash
bash ./gk-deploy -v -g --admin-key=admin --user-key=user --single-node -l /tmp/gk-deploy.log -y

This gives the admin key: admin
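
With the service URL and admin key in hand, heketi can also be queried through heketi-cli inside the heketi pod, as suggested at the end of the deployment log. A sketch, reusing the pod name from the log above (it will differ in your cluster):

# Inspect the loaded topology and cluster state from inside the heketi pod
kubectl -n default exec -it heketi-bc754bf5d-lm2z2 -- \
  heketi-cli -s http://localhost:8080 --user admin --secret 'admin' topology info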

Edit storageclass-singlenode.yaml

[root@192-168-110-185 deploy]# vim storageclass-singlenode.yaml

  
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
  annotations:
    # The cluster already has another default StorageClass, so keep this one non-default (false is also the default)
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.68.93.101:8080"
  restuser: "admin"
  restuserkey: "admin"
  volumetype: none

Notes:

1. This StorageClass is deliberately not the default; adjust this to your situation. If it were made the default, every PVC in the cluster would end up consuming storage on this node.

2. A single-node GlusterFS deployment must specify volumetype: none; otherwise the default volume type requires three nodes and PVCs will fail to bind.

Apply the StorageClass:


  
[root@192-168-110-185 deploy]# kubectl apply -f storageclass-singlenode.yaml
storageclass.storage.k8s.io/glusterfs-storage created
[root@192-168-110-185 deploy]# kubectl get storageclasses
NAME                PROVISIONER               RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
glusterfs-storage   kubernetes.io/glusterfs   Delete          Immediate           false                  7s
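
To verify dynamic provisioning end to end, you can create a small test PVC against this StorageClass. The claim name and size below are arbitrary examples, not part of the original deployment:

# Create a 1Gi test claim bound to the glusterfs-storage StorageClass
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-test-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: glusterfs-storage
  resources:
    requests:
      storage: 1Gi
EOF

# The claim should reach Bound once heketi has created the backing volume
kubectl get pvc glusterfs-test-pvc

# Clean up; the dynamically provisioned PV is removed as well (reclaimPolicy is Delete)
kubectl delete pvc glusterfs-test-pvc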

Summary

With heketi and GlusterFS integrated into k8s, PVs are created and managed automatically in response to PVCs, with no need to provision them by hand in advance.

The arm64 images used in this article were built from the following projects:

https://github.com/hknarutofk/gluster-containers/tree/master/CentOS

https://github.com/hknarutofk/heketi-docker-arm64

 


Reposted from: https://blog.csdn.net/m0_46573967/article/details/112983717