Standalone Deployment of GlusterFS + Heketi for Kubernetes / OpenShift Shared Storage

1. Preparation

1.1 Hardware

Hostname        IP address
gfs1            192.168.160.131
gfs2            192.168.160.132
gfs3 / heketi   192.168.160.133
  • Each node has a 20 GB raw disk, /dev/sdb:
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

1.2 Environment preparation

  • By default, SELinux does not allow Pods to write to remote Gluster servers. To allow writes to GlusterFS volumes, set the following SELinux booleans on every node:
sudo setsebool -P virt_sandbox_use_fusefs on
sudo setsebool -P virt_use_fusefs on
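  • As an optional check, the booleans can be confirmed afterwards with getsebool:
getsebool virt_sandbox_use_fusefs virt_use_fusefs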

1.3 Load the required kernel modules

modprobe dm_snapshot
modprobe dm_mirror
modprobe dm_thin_pool
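  • modprobe only loads the modules into the running kernel; one way to make them persist across reboots (a sketch, the drop-in file name is an assumption) is a modules-load.d entry:
cat > /etc/modules-load.d/glusterfs.conf <<EOF
dm_snapshot
dm_mirror
dm_thin_pool
EOF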

2. Install GlusterFS (run on all three storage nodes)

yum -y install glusterfs glusterfs-server glusterfs-fuse
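  • On a stock CentOS 7 system these packages usually come from the Storage SIG repository; if yum cannot find glusterfs-server, enabling that repository first should help (repository package assumed from CentOS Extras):
yum -y install centos-release-gluster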

2.1 Open the basic TCP ports that the GlusterFS peers need in order to communicate with each other and with OpenShift and to serve storage:

firewall-cmd --add-port=24007-24008/tcp --add-port=49152-49664/tcp --add-port=2222/tcp
firewall-cmd --runtime-to-permanent
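  • The open ports can be verified afterwards with:
firewall-cmd --list-ports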

2.2 Enable and start the GlusterFS daemon:

systemctl enable glusterd
systemctl start glusterd
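  • A quick status check (optional):
systemctl status glusterd
gluster --version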

3. Install heketi on one of the GlusterFS VMs (gfs3 here)

yum -y install heketi heketi-client

3.1 Systemd unit file:

  • /usr/lib/systemd/system/heketi.service
[Unit]
Description=Heketi Server

[Service]
Type=simple
WorkingDirectory=/var/lib/heketi
EnvironmentFile=-/etc/heketi/heketi.json
User=heketi
ExecStart=/usr/bin/heketi --config=/etc/heketi/heketi.json
Restart=on-failure
StandardOutput=syslog
StandardError=syslog

[Install]
WantedBy=multi-user.target

3.2 Reload systemd and start heketi

systemctl daemon-reload
systemctl start heketi

3.3 Create an SSH key and distribute it to the nodes

ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
chown heketi:heketi /etc/heketi/heketi_key
for i in gfs1 gfs2 gfs3 ;do ssh-copy-id -i /etc/heketi/heketi_key.pub $i ;done
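  • Passwordless access can be verified from the heketi host before going further, for example:
for i in gfs1 gfs2 gfs3 ;do ssh -i /etc/heketi/heketi_key root@$i hostname ;done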

3.4 Configure heketi to use the SSH executor. Edit /etc/heketi/heketi.json:

      "executor":"ssh",
      "_sshexec_comment":"SSH username and private key file information",
      "sshexec":{
         "keyfile":"/etc/heketi/heketi_key",
         "user":"root",
         "port":"22",
         "fstab":"/etc/fstab"
      },
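  • Since heketi.json must remain valid JSON after editing, a quick syntax check helps, for example:
python -m json.tool /etc/heketi/heketi.json > /dev/null && echo OK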

3.5 heketi listens on port 8080; add a firewall rule for it:

firewall-cmd --add-port=8080/tcp
firewall-cmd --runtime-to-permanent

3.6 Enable heketi and restart it to pick up the configuration:

systemctl enable heketi
systemctl restart heketi

3.7 Verify that heketi is running:
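  • The response below is normally obtained by querying heketi's /hello endpoint (host and port taken from this setup):
curl http://gfs3:8080/hello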

Hello from Heketi

3.8 Define the GlusterFS storage pool topology

  • vim /etc/heketi/topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "gfs1"
              ],
              "storage": [
                "192.168.160.131"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "gfs2"
              ],
              "storage": [
                "192.168.160.132"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "gfs3"
              ],
              "storage": [
                "192.168.160.133"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        }
      ]
    }
  ]
}

3.9 Create the GlusterFS storage pool (load the topology)

export HEKETI_CLI_SERVER=http://gfs3:8080
heketi-cli --server=http://gfs3:8080 topology load --json=/etc/heketi/topology.json
  • Output:
Creating cluster ... ID: d3a3f31dce28e06dbd1099268c4ebe84
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node infra.test.com ... ID: ebfc1e8e2e7668311dc4304bfc1377cb
        Adding device /dev/sdb ... OK
    Creating node node1.test.com ... ID: 0ce162c3b8a65342be1aac96010251ef
        Adding device /dev/sdb ... OK
    Creating node node2.test.com ... ID: 62952de313e71eb5a4bfe5b76224e575
        Adding device /dev/sdb ...  OK
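  • The later heketi-cli calls in this document rely on HEKETI_CLI_SERVER being set; it can be persisted so new shells pick it up, e.g.:
echo 'export HEKETI_CLI_SERVER=http://gfs3:8080' >> ~/.bashrc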

3.10 Inspect the cluster (run on gfs3)

  • heketi-cli topology info
Cluster Id: d3a3f31dce28e06dbd1099268c4ebe84

    File:  true
    Block: true

    Volumes:

    Nodes:

    Node Id: 0ce162c3b8a65342be1aac96010251ef
    State: online
    Cluster Id: d3a3f31dce28e06dbd1099268c4ebe84
    Zone: 1
    Management Hostnames: node1.test.com
    Storage Hostnames: 192.168.160.132
    Devices:
        Id:d6a5f0aba39a35d3d92f678dc9654eaa   Name:/dev/sdb            State:online    Size (GiB):19      Used (GiB):0       Free (GiB):19
            Bricks:

    Node Id: 62952de313e71eb5a4bfe5b76224e575
    State: online
    Cluster Id: d3a3f31dce28e06dbd1099268c4ebe84
    Zone: 1
    Management Hostnames: node2.test.com
    Storage Hostnames: 192.168.160.133
    Devices:
        Id:dfd697f2215d2a304a44c5af44d352da   Name:/dev/sdb            State:online    Size (GiB):19      Used (GiB):0       Free (GiB):19
            Bricks:

    Node Id: ebfc1e8e2e7668311dc4304bfc1377cb
    State: online
    Cluster Id: d3a3f31dce28e06dbd1099268c4ebe84
    Zone: 1
    Management Hostnames: infra.test.com
    Storage Hostnames: 192.168.160.131
    Devices:
        Id:e06b794b0b9f20608158081fbb5b5102   Name:/dev/sdb            State:online    Size (GiB):19      Used (GiB):0       Free (GiB):19
            Bricks:
  • heketi-cli node list
Id:0ce162c3b8a65342be1aac96010251ef Cluster:d3a3f31dce28e06dbd1099268c4ebe84
Id:62952de313e71eb5a4bfe5b76224e575 Cluster:d3a3f31dce28e06dbd1099268c4ebe84
Id:ebfc1e8e2e7668311dc4304bfc1377cb Cluster:d3a3f31dce28e06dbd1099268c4ebe84
  • gluster peer status
Number of Peers: 2

Hostname: gfs2
Uuid: ae6e998a-92c2-4c63-a7c6-c51a3b7e8fcb
State: Peer in Cluster (Connected)
Other names:
gfs2

Hostname: gfs1
Uuid: c8c46558-a8f2-46db-940d-4b19947cf075
State: Peer in Cluster (Connected)

4. Testing

4.1 Create a test volume

  • heketi-cli --json volume create --size 3 --replica 3
{"size":3,"name":"vol_93060cd7698e9e48bd035f26bbfe57af","durability":{"type":"replicate","replicate":{"replica":3},"disperse":{"data":4,"redundancy":2}},"glustervolumeoptions":["",""],"snapshot":{"enable":false,"factor":1},"id":"93060cd7698e9e48bd035f26bbfe57af","cluster":"d3a3f31dce28e06dbd1099268c4ebe84","mount":{"glusterfs":{"hosts":["192.168.160.132","192.168.160.133","192.168.160.131"],"device":"192.168.160.132:vol_93060cd7698e9e48bd035f26bbfe57af","options":{"backup-volfile-servers":"192.168.160.133,192.168.160.131"}}},"blockinfo":{},"bricks":[{"id":"16b8ddb1f2b2d3aa588d4d4a52bb7f6b","path":"/var/lib/heketi/mounts/vg_e06b794b0b9f20608158081fbb5b5102/brick_16b8ddb1f2b2d3aa588d4d4a52bb7f6b/brick","device":"e06b794b0b9f20608158081fbb5b5102","node":"ebfc1e8e2e7668311dc4304bfc1377cb","volume":"93060cd7698e9e48bd035f26bbfe57af","size":3145728},{"id":"9e60ac3b7259c4e8803d4e1f6a235021","path":"/var/lib/heketi/mounts/vg_d6a5f0aba39a35d3d92f678dc9654eaa/brick_9e60ac3b7259c4e8803d4e1f6a235021/brick","device":"d6a5f0aba39a35d3d92f678dc9654eaa","node":"0ce162c3b8a65342be1aac96010251ef","volume":"93060cd7698e9e48bd035f26bbfe57af","size":3145728},{"id":"e3f5ec732d5a8fe4b478af67c9caf85b","path":"/var/lib/heketi/mounts/vg_dfd697f2215d2a304a44c5af44d352da/brick_e3f5ec732d5a8fe4b478af67c9caf85b/brick","device":"dfd697f2215d2a304a44c5af44d352da","node":"62952de313e71eb5a4bfe5b76224e575","volume":"93060cd7698e9e48bd035f26bbfe57af","size":3145728}]}
  • heketi-cli volume list
Id:93060cd7698e9e48bd035f26bbfe57af    Cluster:d3a3f31dce28e06dbd1099268c4ebe84    Name:vol_93060cd7698e9e48bd035f26bbfe57af
  • heketi-cli volume info 93060cd7698e9e48bd035f26bbfe57af
Name: vol_93060cd7698e9e48bd035f26bbfe57af
Size: 3
Volume Id: 93060cd7698e9e48bd035f26bbfe57af
Cluster Id: d3a3f31dce28e06dbd1099268c4ebe84
Mount: 192.168.160.132:vol_93060cd7698e9e48bd035f26bbfe57af
Mount Options: backup-volfile-servers=192.168.160.133,192.168.160.131
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distributed+Replica: 3
  • gluster volume list
vol_93060cd7698e9e48bd035f26bbfe57af
  • gluster volume status
Status of volume: vol_93060cd7698e9e48bd035f26bbfe57af
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.160.132:/var/lib/heketi/mount
s/vg_d6a5f0aba39a35d3d92f678dc9654eaa/brick
_9e60ac3b7259c4e8803d4e1f6a235021/brick     49153     0          Y       30660
Brick 192.168.160.131:/var/lib/heketi/mount
s/vg_e06b794b0b9f20608158081fbb5b5102/brick
_16b8ddb1f2b2d3aa588d4d4a52bb7f6b/brick     49153     0          Y       21979
Brick 192.168.160.133:/var/lib/heketi/mount
s/vg_dfd697f2215d2a304a44c5af44d352da/brick
_e3f5ec732d5a8fe4b478af67c9caf85b/brick     49152     0          Y       61274
Self-heal Daemon on localhost               N/A       N/A        Y       61295
Self-heal Daemon on apps.test.com           N/A       N/A        Y       22000
Self-heal Daemon on 192.168.160.132         N/A       N/A        Y       30681

Task Status of Volume vol_93060cd7698e9e48bd035f26bbfe57af
------------------------------------------------------------------------------
There are no active volume tasks
  • gluster volume info vol_93060cd7698e9e48bd035f26bbfe57af
Volume Name: vol_93060cd7698e9e48bd035f26bbfe57af
Type: Replicate
Volume ID: ca4a9854-a33c-40ab-86c7-0d0d34004454
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.160.132:/var/lib/heketi/mounts/vg_d6a5f0aba39a35d3d92f678dc9654eaa/brick_9e60ac3b7259c4e8803d4e1f6a235021/brick
Brick2: 192.168.160.131:/var/lib/heketi/mounts/vg_e06b794b0b9f20608158081fbb5b5102/brick_16b8ddb1f2b2d3aa588d4d4a52bb7f6b/brick
Brick3: 192.168.160.133:/var/lib/heketi/mounts/vg_dfd697f2215d2a304a44c5af44d352da/brick_e3f5ec732d5a8fe4b478af67c9caf85b/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
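  • Once the round trip is confirmed, the test volume can be removed again through heketi (cleanup step, using the volume id shown above):
heketi-cli volume delete 93060cd7698e9e48bd035f26bbfe57af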

5. Using GlusterFS in OpenShift

5.1 Create a StorageClass in OpenShift:

  • Edit storage-class.yaml: resturl is the heketi URL, and volumetype: replicate:3 sets the number of bricks per replica set (3 is recommended). restauthenabled: "true" only matters when heketi's JWT auth (use_auth) is enabled; in that case the parameters also need restuser and a secret reference (a sketch of the matching secret follows the oc get sc output below), otherwise it can be set to "false".
  • cat storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: null
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://gfs3:8080"
  restauthenabled: "true"
  volumetype: replicate:3
  • Create the StorageClass:
oc create -f storage-class.yaml 
  • Check the StorageClass:
  • oc get sc
NAME             PROVISIONER               AGE
gluster-heketi   kubernetes.io/glusterfs   55m
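  • If heketi's JWT auth is enabled, the StorageClass parameters also need restuser plus a reference to a secret holding the admin key. A minimal sketch (the key value and namespace are placeholders):
oc create secret generic heketi-secret -n default --type="kubernetes.io/glusterfs" --from-literal=key='<heketi-admin-key>'
  • Then add restuser: "admin", secretNamespace: "default", and secretName: "heketi-secret" to the StorageClass parameters.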

5.2 Create a PVC in OpenShift:

  • cat pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: gluster-heketi
  • Create the PVC:
oc create -f pvc.yml
  • Check the PV and PVC:
  • oc get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM           STORAGECLASS     REASON    AGE
persistentvolume/pvc-57362c7f-e6c2-11e9-8634-000c299365cc   1Gi        RWX            Delete           Bound     default/test1   gluster-heketi             57m

NAME                          STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
persistentvolumeclaim/test-pvc   Bound     pvc-57362c7f-e6c2-11e9-8634-000c299365cc  1Gi        RWX            gluster-heketi   57m
  • Mount the provisioned volume on a VM to verify it:
mount -t glusterfs 192.168.160.132:vol_b96d0e18cef937dd56a161ae5fa5b9cb /mnt
  • df -h | grep vol_b96d0e18cef937dd56a161ae5fa5b9cb
192.168.160.132:vol_b96d0e18cef937dd56a161ae5fa5b9cb                                   1014M   43M  972M   5% /mnt
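  • To exercise the claim from inside the cluster, a throwaway pod can mount it (a minimal sketch; the pod name and image are only examples):
oc create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF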

6. Common commands

List cluster nodes: gluster pool list
Check cluster peers (the local host is not listed by default): gluster peer status
List volumes: gluster volume list
Show volume info: gluster volume info <VOLNAME>
Show volume status: gluster volume status <VOLNAME>
Force-start a volume: gluster volume start <VOLNAME> force
List files that need healing: gluster volume heal <VOLNAME> info
Trigger a full heal: gluster volume heal <VOLNAME> full
List files that were healed: gluster volume heal <VOLNAME> info healed
List files that failed to heal: gluster volume heal <VOLNAME> info heal-failed
List split-brain files: gluster volume heal <VOLNAME> info split-brain

6.1 Other common heketi-cli commands

heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv cluster list
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv cluster info <cluster-id>
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv node info <node-id>
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv volume list
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv volume create --size=1 --replica=2
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv volume info <volume-id>
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv volume expand --volume=<volume-id> --expand-size=1
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv volume delete <volume-id>

6.2 Initializing a raw disk

  • Each GlusterFS machine has an unformatted disk of the same size attached. If the disk has already been formatted, reset it with the following command (assuming the disk is /dev/sdb):
pvcreate --metadatasize=128M --dataalignment=256K /dev/sdb
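  • pvcreate will refuse a disk that still carries filesystem or partition signatures; in that case the signatures can be wiped first (destructive, this assumes /dev/sdb holds nothing that matters):
wipefs -a /dev/sdb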

7. GlusterFS cluster troubleshooting

7.1 Volume bricks offline

  • Check the volume status:
gluster volume status <volume_name>
  • A brick whose Online column shows N is offline. Log in to the host that owns the brick and check whether the brick is mounted:
df -h |grep <BRICKNAME>
  • If it is not mounted, remount it from /etc/fstab:
cat /etc/fstab |grep <BRICKNAME> |xargs -i mount {}
  • Restart the offline bricks:
gluster volume start <VOLNAME> force

7.2 Repairing inconsistent files between bricks

  • Check whether any files on the bricks need healing:
gluster volume heal <VOLNAME> info
  • Trigger a full heal:
gluster volume heal <VOLNAME> full

7.3 Repairing split-brain

gluster volume heal <VOLNAME> info
  • If the output contains "Is in split-brain", a split-brain has occurred. Pick one of the following resolution strategies:
1) Use the bigger file as the healing source:
gluster volume heal <VOLNAME> split-brain bigger-file <FILE>
2) Use the copy with the latest mtime as the source:
gluster volume heal <VOLNAME> split-brain latest-mtime <FILE>
3) Use one brick of the replica as the source for a specific file:
gluster volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME> <FILE>
4) Use one brick of the replica as the source for all files:
gluster volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME>

7.4 Replacing a brick

  • Start syncing data from the old brick to the new brick path:
gluster volume replace-brick <VOLNAME> Server1:/home/gfs/r2_0 Server1:/home/gfs/r2_5 start
  • While the data is being migrated, check whether the replacement task has finished:
gluster volume replace-brick <VOLNAME> Server1:/home/gfs/r2_0 Server1:/home/gfs/r2_5 status
  • After the migration finishes, run commit to complete the replacement. gluster volume info will then show the new brick:
gluster volume replace-brick <VOLNAME> Server1:/home/gfs/r2_0 Server1:/home/gfs/r2_5 commit

8. heketi service troubleshooting

  • Error message:
[heketi] ERROR 2018/07/02 09:08:19 /src/github.com/heketi/heketi/apps/glusterfs/app.go:172: Heketi was terminated while performing one or more operations. Server may refuse to start as long as pending operations are present in the db.
The heketi service fails to start. To recover:
1) Export heketi's heketi.db file (its path is set in heketi.json):
heketi db export --dbfile=/var/lib/heketi/heketi.db --jsonfile=/tmp/heketidb1.json
2) Open the exported file (/tmp/heketidb1.json above), search for the pendingoperations entries, and delete everything related to them.
3) Save the edited file (keep the .json suffix) and import it back into the db:
heketi db import --jsonfile=/tmp/succ.json --dbfile=/var/lib/heketi/heketi.db
4) Restart the heketi service:
systemctl start heketi
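  • Before attempting the repair above, it is prudent to keep a copy of the original database, for example:
cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.bak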

9. References

Original article: https://www.cnblogs.com/xiaoqshuo/p/11623779.html
