This may indicate that the storage must be wiped and the GlusterFS nodes must be reset


heketi requires at least three nodes by default; pass the `--single-node` flag when running `gk-deploy` to skip this error.

Before retrying, remove the LVM data on the corresponding block device:

```shell
[root@200 deploy]# kubectl exec -it -n default glusterfs-drtp7 -- /bin/bash
[root@106 /]# lvm
lvm> lvs
lvm> pvs
  PV         VG                                  Fmt  Attr PSize  PFree
  /dev/sda   vg_fae256e6b16ea3a62ef1ab1341fb23ed lvm2 a--  99.87g 99.87g
lvm> vgremove vg_fae256e6b16ea3a62ef1ab1341fb23ed
  Volume group "vg_fae256e6b16ea3a62ef1ab1341fb23ed" successfully removed
lvm> pvremove /dev/sda
  Labels on physical volume "/dev/sda" successfully wiped.
```
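If you need to repeat this cleanup, the manual `pvs` lookup above can be scripted. The helper below is a hypothetical sketch (not part of the original post) that extracts the VG name for a given PV from `pvs`-style output, so the `vgremove`/`pvremove` pair can be driven automatically:

```shell
# Hypothetical helper: given a device path, print the VG column from
# pvs-style output piped in on stdin. In a live session you could instead
# run: pvs --noheadings -o vg_name /dev/sda
pv_to_vg() {
  awk -v dev="$1" '$1 == dev {print $2}'
}

# Sample output line copied from the session above:
sample='/dev/sda vg_fae256e6b16ea3a62ef1ab1341fb23ed lvm2 a-- 99.87g 99.87g'
echo "$sample" | pv_to_vg /dev/sda
```

The printed VG name can then be fed to `vgremove`, followed by `pvremove` on the device itself.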

Clean up the resources left over from the previous deployment:

```shell
kubectl delete sa heketi-service-account
kubectl delete clusterrolebinding heketi-sa-view
kubectl delete secret heketi-config-secret
kubectl delete svc deploy-heketi
kubectl delete deploy deploy-heketi
```
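The five delete commands above can be collapsed into one loop. This is just a convenience sketch; `--ignore-not-found` is a standard `kubectl delete` flag that makes the cleanup idempotent, so the loop is safe to rerun:

```shell
# Resource type/name pairs taken from the cleanup steps above.
resources="sa/heketi-service-account clusterrolebinding/heketi-sa-view secret/heketi-config-secret svc/deploy-heketi deploy/deploy-heketi"
for res in $resources; do
  # `echo` included here so the sketch is a dry run; drop it to actually delete.
  echo kubectl delete "$res" --ignore-not-found
done
```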

Redeploy:

```shell
./gk-deploy -g --admin-key=key --user-key=key --single-node
```

Error: WARNING: This metadata update is NOT backed up.

lvm not found: device not cleared

Rebuild the image from the Dockerfile in the repository below:

https://github.com/hknarutofk/gluster-containers

unknown filesystem type 'glusterfs'

Install glusterfs-fuse on the host server:

```shell
yum install -y glusterfs-fuse
```
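The "unknown filesystem type" error occurs because `mount(8)` dispatches unknown filesystem types to an external helper named `/sbin/mount.<type>`, and installing glusterfs-fuse provides `mount.glusterfs`. The check below is a hypothetical sketch (the search directory is parameterized only so it can be exercised outside `/sbin`):

```shell
# Hypothetical check: does a mount helper for the given fs type exist
# in the given directory? $1 = fs type, $2 = directory (normally /sbin).
has_mount_helper() {
  [ -x "$2/mount.$1" ]
}

# Typical use on the host after `yum install -y glusterfs-fuse`:
# has_mount_helper glusterfs /sbin && echo "glusterfs mount helper present"
```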
