Kubernetes in Practice (9): Dynamic Storage Management with GlusterFS in a k8s Cluster and Expanding Containerized GlusterFS

1. Preparation

  Install the GlusterFS client on all nodes:

yum install glusterfs glusterfs-fuse -y

  If the GlusterFS management service should not run on every node, label only the nodes that will host it:

[root@k8s-master01 ~]# kubectl label node k8s-node01 storagenode=glusterfs
node/k8s-node01 labeled
[root@k8s-master01 ~]# kubectl label node k8s-node02 storagenode=glusterfs
node/k8s-node02 labeled
[root@k8s-master01 ~]# kubectl label node k8s-master01 storagenode=glusterfs
node/k8s-master01 labeled

2. Create the Containerized GlusterFS Management Service Cluster

  This article deploys GlusterFS in containers; if your organization already has a GlusterFS cluster, it can be used directly.

  GlusterFS is deployed as a DaemonSet, which guarantees that every node that needs the GlusterFS management service runs exactly one instance of it.

  Download the related files:

wget https://github.com/heketi/heketi/releases/download/v7.0.0/heketi-client-v7.0.0.linux.amd64.tar.gz
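  The tarball contains the heketi-cli binary and the Kubernetes templates used in the rest of this article. A minimal sketch of unpacking it (the /root working directory is assumed from the prompts below):

# Unpack the heketi client bundle; it extracts into ./heketi-client
cd /root
tar xf heketi-client-v7.0.0.linux.amd64.tar.gz
cd heketi-client/share/heketi/kubernetes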

  Create the cluster:

[root@k8s-master01 kubernetes]# kubectl create -f glusterfs-daemonset.json
daemonset.extensions/glusterfs created
[root@k8s-master01 kubernetes]# pwd
/root/heketi-client/share/heketi/kubernetes

  Note 1: The default mount method is used here; another disk can be used as the GlusterFS working directory.

  Note 2: The resources are created in the default namespace; change this as needed.
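  If a dedicated namespace is preferred, a minimal sketch (the namespace name glusterfs is only an example; every Heketi object created later must then live in, and be referenced from, the same namespace):

# Deploy the DaemonSet into a dedicated namespace instead of default (example only)
kubectl create namespace glusterfs
kubectl create -f glusterfs-daemonset.json -n glusterfs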

  Check the pods:

[root@k8s-master01 kubernetes]# kubectl get pods -l glusterfs-node=daemonset
NAME              READY     STATUS    RESTARTS   AGE
glusterfs-5npwn   1/1       Running   0          1m
glusterfs-bd5dx   1/1       Running   0          1m...

3. Create the Heketi Service

  Heketi is a framework that exposes a RESTful API for managing GlusterFS volumes. It enables dynamic storage provisioning on cloud platforms such as Kubernetes, OpenShift, and OpenStack, supports managing multiple GlusterFS clusters, and makes it easier for administrators to operate GlusterFS.
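  Once Heketi is running, its REST API can be exercised directly; a minimal sketch, assuming the Heketi address that is exported as HEKETI_CLI_SERVER later in this article (/hello and /clusters are standard Heketi endpoints):

# Liveness check and cluster listing against the Heketi REST API
curl $HEKETI_CLI_SERVER/hello
curl $HEKETI_CLI_SERVER/clusters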

  Create the ServiceAccount object for Heketi:

[root@k8s-master01 kubernetes]# cat heketi-service-account.json
{
  "apiVersion": "v1",
  "kind": "ServiceAccount",
  "metadata": {
    "name": "heketi-service-account"
  }
}
[root@k8s-master01 kubernetes]# kubectl create -f heketi-service-account.json
serviceaccount/heketi-service-account created
[root@k8s-master01 kubernetes]# pwd
/root/heketi-client/share/heketi/kubernetes
[root@k8s-master01 kubernetes]# kubectl get sa
NAME                     SECRETS   AGE
default                  1         13d
heketi-service-account   1         <invalid>

  Create the corresponding RBAC permissions and the Secret for Heketi:

[root@k8s-master01 kubernetes]# kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account
clusterrolebinding.rbac.authorization.k8s.io/heketi-gluster-admin created
[root@k8s-master01 kubernetes]# kubectl create secret generic heketi-config-secret --from-file=./heketi.json
secret/heketi-config-secret created

  Bootstrap the initial Heketi deployment:

[root@k8s-master01 kubernetes]# kubectl create -f heketi-bootstrap.json
secret/heketi-db-backup created
service/heketi created
deployment.extensions/heketi created
[root@k8s-master01 kubernetes]# pwd
/root/heketi-client/share/heketi/kubernetes

  

4. Configure the GlusterFS Cluster

[root@k8s-master01 heketi-client]# cp bin/heketi-cli /usr/local/bin/
[root@k8s-master01 heketi-client]# pwd
/root/heketi-client

[root@k8s-master01 heketi-client]# heketi-cli -v
heketi-cli v7.0.0

  Edit topology-sample.json: manage is the hostname of each node running the GlusterFS management service, storage is that node's IP address, and devices lists the raw block devices on the node.

[root@k8s-master01 kubernetes]# cat topology-sample.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-master01"
              ],
              "storage": [
                "192.168.20.20"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdc",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-node01"
              ],
              "storage": [
                "192.168.20.30"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-node02"
              ],
              "storage": [
                "192.168.20.31"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        }
      ]
    }
  ]
}
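  The devices listed in the topology must be raw, unused block devices; Heketi will typically refuse a disk that still carries a filesystem, partition table, or LVM signature. A quick pre-check on each node (wipefs -a is destructive, so run it only on disks you are certain are disposable):

# Inspect a candidate disk before handing it to Heketi
lsblk /dev/sdb
blkid /dev/sdb          # prints nothing for a clean disk
# wipefs -a /dev/sdb    # DESTRUCTIVE: clears old signatures if the disk was used before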

  Check the ClusterIP of the current Heketi Service:

[root@k8s-master01 kubernetes]# kubectl get svc | grep heketi
deploy-heketi   ClusterIP   10.110.217.153   <none>        8080/TCP   26m
[root@k8s-master01 kubernetes]# export HEKETI_CLI_SERVER=http://10.110.217.153:8080

  Create the GlusterFS cluster:

[root@k8s-master01 kubernetes]# heketi-cli topology load --json=topology-sample.json
Creating cluster ... ID: a058723afae149618337299c84a1eaed
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node k8s-master01 ... ID: 929909065ceedb59c1b9c235fc3298ec
        Adding device /dev/sdc ... OK
    Creating node k8s-node01 ... ID: 37409d82b9ef27f73ccc847853eec429
        Adding device /dev/sdb ... OK
    Creating node k8s-node02 ... ID: e3ab676be27945749bba90efb34f2eb9
        Adding device /dev/sdb ... OK

  Create the Heketi persistent storage volume:

yum install device-mapper* -y
[root@k8s-master01 kubernetes]# heketi-cli setup-openshift-heketi-storage
Saving heketi-storage.json
[root@k8s-master01 kubernetes]# ls
glusterfs-daemonset.json  heketi.json                  heketi-storage.json
heketi-bootstrap.json     heketi-service-account.json  README.md
heketi-deployment.json    heketi-start.sh              topology-sample.json
[root@k8s-master01 kubernetes]# kubectl create -f heketi-storage.json
secret/heketi-storage-secret created
endpoints/heketi-storage-endpoints created
service/heketi-storage-endpoints created
job.batch/heketi-storage-copy-job created

  If the following error occurs:

[root@k8s-master01 kubernetes]# heketi-cli setup-openshift-heketi-storage
Error: /usr/sbin/modprobe failed: 1
  thin: Required device-mapper target(s) not detected in your kernel.
  Run `lvcreate --help' for more information.

  Fix: run modprobe dm_thin_pool on all nodes.
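  To keep the fix across reboots, the module can also be loaded at boot time (a sketch for systemd-based systems such as CentOS 7):

# Load the thin-provisioning target now and on every boot
modprobe dm_thin_pool
echo dm_thin_pool > /etc/modules-load.d/dm_thin_pool.conf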

  Delete the intermediate bootstrap resources:

[root@k8s-master01 kubernetes]# kubectl delete all,service,jobs,deployment,secret --selector="deploy-heketi"
pod "deploy-heketi-59f8dbc97f-5rf6s" deleted
service "deploy-heketi" deleted
service "heketi" deleted
deployment.apps "deploy-heketi" deleted
replicaset.apps "deploy-heketi-59f8dbc97f" deleted
job.batch "heketi-storage-copy-job" deleted
secret "heketi-storage-secret" deleted

  Create the persistent Heketi deployment (other persistence methods can also be used):

[root@k8s-master01 kubernetes]# kubectl create -f heketi-deployment.json
service/heketi created
deployment.extensions/heketi created

  Once the pod is up, the deployment is complete:

[root@k8s-master01 kubernetes]# kubectl get po
NAME                               READY     STATUS      RESTARTS   AGE
glusterfs-5npwn                    1/1       Running     0          3h
glusterfs-8zfzq                    1/1       Running     0          3h
glusterfs-bd5dx                    1/1       Running     0          3h
heketi-5cb5f55d9f-5mtqt            1/1       Running     0          2m

  Check the Service of the newly deployed persistent Heketi and update HEKETI_CLI_SERVER accordingly:

[root@k8s-master01 kubernetes]# kubectl get svc
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
heketi                     ClusterIP   10.111.95.240   <none>        8080/TCP   12h
heketi-storage-endpoints   ClusterIP   10.99.28.153    <none>        1/TCP      12h
kubernetes                 ClusterIP   10.96.0.1       <none>        443/TCP    14d
[root@k8s-master01 kubernetes]# export HEKETI_CLI_SERVER=http://10.111.95.240:8080
[root@k8s-master01 kubernetes]# curl http://10.111.95.240:8080/hello
Hello from Heketi

  View the GlusterFS topology information:

[root@k8s-master01 kubernetes]# heketi-cli topology info

Cluster Id: 5dec5676c731498c2bdf996e110a3e5e

    File:  true
    Block: true

    Volumes:

    Name: heketidbstorage
    Size: 2
    Id: 828dc2dfaa00b7213e831b91c6213ae4
    Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
    Mount: 192.168.20.31:heketidbstorage
    Mount Options: backup-volfile-servers=192.168.20.30,192.168.20.20
    Durability Type: replicate
    Replica: 3
    Snapshot: Disabled

        Bricks:
            Id: 16b7270d7db1b3cfe9656b64c2a3916c
            Path: /var/lib/heketi/mounts/vg_04290ec786dc7752a469b66f5e94458f/brick_16b7270d7db1b3cfe9656b64c2a3916c/brick
            Size (GiB): 2
            Node: fb181b0cef571e9af7d84d2ecf534585
            Device: 04290ec786dc7752a469b66f5e94458f

            Id: 828da093d9d78a2b1c382b13cc4da4a1
            Path: /var/lib/heketi/mounts/vg_80b61df999fcac26ebca6e28c4da8e61/brick_828da093d9d78a2b1c382b13cc4da4a1/brick
            Size (GiB): 2
            Node: d38819746cab7d567ba5f5f4fea45d91
            Device: 80b61df999fcac26ebca6e28c4da8e61

            Id: e8ef0e68ccc3a0416f73bc111cffee61
            Path: /var/lib/heketi/mounts/vg_82af8e5f2fb2e1396f7c9e9f7698a178/brick_e8ef0e68ccc3a0416f73bc111cffee61/brick
            Size (GiB): 2
            Node: 0f00835397868d3591f45432e432ba38
            Device: 82af8e5f2fb2e1396f7c9e9f7698a178

    Nodes:

    Node Id: 0f00835397868d3591f45432e432ba38
    State: online
    Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
    Zone: 1
    Management Hostnames: k8s-node02
    Storage Hostnames: 192.168.20.31
    Devices:
        Id:82af8e5f2fb2e1396f7c9e9f7698a178   Name:/dev/sdb            State:online    Size (GiB):39      Used (GiB):22      Free (GiB):17
            Bricks:
                Id:e8ef0e68ccc3a0416f73bc111cffee61   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_82af8e5f2fb2e1396f7c9e9f7698a178/brick_e8ef0e68ccc3a0416f73bc111cffee61/brick

    Node Id: d38819746cab7d567ba5f5f4fea45d91
    State: online
    Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
    Zone: 1
    Management Hostnames: k8s-node01
    Storage Hostnames: 192.168.20.30
    Devices:
        Id:80b61df999fcac26ebca6e28c4da8e61   Name:/dev/sdb            State:online    Size (GiB):39      Used (GiB):22      Free (GiB):17
            Bricks:
                Id:828da093d9d78a2b1c382b13cc4da4a1   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_80b61df999fcac26ebca6e28c4da8e61/brick_828da093d9d78a2b1c382b13cc4da4a1/brick

    Node Id: fb181b0cef571e9af7d84d2ecf534585
    State: online
    Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
    Zone: 1
    Management Hostnames: k8s-master01
    Storage Hostnames: 192.168.20.20
    Devices:
        Id:04290ec786dc7752a469b66f5e94458f   Name:/dev/sdc            State:online    Size (GiB):39      Used (GiB):22      Free (GiB):17
            Bricks:
                Id:16b7270d7db1b3cfe9656b64c2a3916c   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_04290ec786dc7752a469b66f5e94458f/brick_16b7270d7db1b3cfe9656b64c2a3916c/brick

5. Define a StorageClass

[root@k8s-master01 gfs]# cat storageclass-gfs-heketi.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.111.95.240:8080"
  restauthenabled: "false"
[root@k8s-master01 gfs]# kubectl create -f storageclass-gfs-heketi.yaml
storageclass.storage.k8s.io/gluster-heketi created

  The provisioner parameter must be set to "kubernetes.io/glusterfs".

  The resturl must point to an address of the Heketi service that is reachable from the host where the API Server runs.
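  The provisioner also accepts a volumetype parameter that controls the layout of the GlusterFS volumes it creates (replica count, disperse, or none). A hedged sketch of a variant StorageClass; the file name and class name here are examples only:

# Example: force 3-way replicated volumes for claims using this class
cat <<'EOF' > storageclass-gfs-heketi-rep3.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi-rep3
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.111.95.240:8080"
  restauthenabled: "false"
  volumetype: "replicate:3"
EOF
kubectl create -f storageclass-gfs-heketi-rep3.yaml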

6. Define a PVC and a Test Pod

[root@k8s-master01 gfs]# kubectl create -f pod-use-pvc.yaml
pod/pod-use-pvc created
persistentvolumeclaim/pvc-gluster-heketi created
[root@k8s-master01 gfs]# cat pod-use-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-use-pvc
spec:
  containers:
  - name: pod-use-pvc
    image: busybox
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: gluster-volume
      mountPath: "/pv-data"
      readOnly: false
  volumes:
  - name: gluster-volume
    persistentVolumeClaim:
      claimName: pvc-gluster-heketi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-gluster-heketi
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 1Gi

  As soon as the PVC is created, the system triggers Heketi to perform the corresponding operations: bricks are created on the GlusterFS cluster, then a volume is created and started.
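  The provisioning can be watched while it happens; a quick check (the claim name matches the manifest above):

# Watch the claim go from Pending to Bound while Heketi creates the bricks and volume
kubectl get pvc pvc-gluster-heketi -w
# Heketi's view of the newly created volume
heketi-cli volume list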

  The resulting PV and PVC are shown below:

[root@k8s-master01 gfs]# kubectl get pv,pvc  | grep gluster

persistentvolume/pvc-4a8033e8-e7f7-11e8-9a09-000c293bfe27   1Gi        RWO            Delete           Bound       default/pvc-gluster-heketi                                gluster-heketi                          5m
persistentvolumeclaim/pvc-gluster-heketi   Bound     pvc-4a8033e8-e7f7-11e8-9a09-000c293bfe27   1Gi        RWO            gluster-heketi   5m

7. Test the Data

  Enter the pod and create directories:

[root@k8s-master01 /]# kubectl exec -ti pod-use-pvc -- /bin/sh
/ # cd /pv-data/
/pv-data # mkdir {1..10}
/pv-data # ls
{1..10}
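  Note that busybox's ash does not perform brace expansion, which is why a single directory literally named {1..10} shows up above. To create ten separate directories instead, a loop works:

# Inside the pod: create directories 1 through 10 one by one
for i in $(seq 1 10); do mkdir -p /pv-data/$i; done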

  Mount test on the host:

# View the volume
[root@k8s-master01 /]# heketi-cli topology info

Cluster Id: 5dec5676c731498c2bdf996e110a3e5e

    File:  true
    Block: true

    Volumes:

    Name: vol_56d636b452d31a9d4cb523d752ad0891
    Size: 1
    Id: 56d636b452d31a9d4cb523d752ad0891
    Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
    Mount: 192.168.20.31:vol_56d636b452d31a9d4cb523d752ad0891
    Mount Options: backup-volfile-servers=192.168.20.30,192.168.20.20
    Durability Type: replicate
    Replica: 3
    Snapshot: Enabled
...
# Or check with volume list

[root@k8s-master01 mnt]# heketi-cli volume list
  Id:56d636b452d31a9d4cb523d752ad0891 Cluster:5dec5676c731498c2bdf996e110a3e5e Name:vol_56d636b452d31a9d4cb523d752ad0891
  Id:828dc2dfaa00b7213e831b91c6213ae4 Cluster:5dec5676c731498c2bdf996e110a3e5e Name:heketidbstorage
[root@k8s-master01 mnt]#

  vol_56d636b452d31a9d4cb523d752ad0891 is the volume name, and Mount: 192.168.20.31:vol_56d636b452d31a9d4cb523d752ad0891 is the mount source to use.

  Mount the volume and check the data:

[root@k8s-master01 /]# mount -t glusterfs 192.168.20.31:vol_56d636b452d31a9d4cb523d752ad0891  /mnt/
[root@k8s-master01 /]# cd /mnt/
[root@k8s-master01 mnt]# ls
{1..10}

8. Test with a Deployment

[root@k8s-master01 gfs]# cat nginx-gluster.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-gfs
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-gfs-html
              mountPath: "/usr/share/nginx/html"
            - name: nginx-gfs-conf
              mountPath: "/etc/nginx/conf.d"
      volumes:
      - name: nginx-gfs-html
        persistentVolumeClaim:
          claimName: glusterfs-nginx-html
      - name: nginx-gfs-conf
        persistentVolumeClaim:
          claimName: glusterfs-nginx-conf
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx-html
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 0.5Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx-conf
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 0.1Gi
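  Note that this manifest uses the extensions/v1beta1 Deployment API, which has been removed in Kubernetes 1.16 and later; a hedged sketch of the changes needed for apps/v1 (only the header changes, and spec.selector becomes mandatory and must match the pod template labels):

# apps/v1 equivalent of the manifest header above; the containers/volumes sections are unchanged
#   apiVersion: apps/v1
#   kind: Deployment
#   metadata:
#     name: nginx-gfs
#   spec:
#     replicas: 2
#     selector:
#       matchLabels:
#         name: nginx
#     template:
#       metadata:
#         labels:
#           name: nginx
#       ...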
[root@k8s-master01 gfs]# kubectl get po,pvc,pv | grep nginx
pod/nginx-gfs-77c758ccc-2hwl6          1/1       Running             0          4m
pod/nginx-gfs-77c758ccc-kxzfz          0/1       ContainerCreating   0          3m

persistentvolumeclaim/glusterfs-nginx-conf   Bound     pvc-f40c5d4b-e800-11e8-8a89-000c293ad492   1Gi        RWX            gluster-heketi   2m
persistentvolumeclaim/glusterfs-nginx-html   Bound     pvc-f40914f8-e800-11e8-8a89-000c293ad492   1Gi        RWX            gluster-heketi   2m

persistentvolume/pvc-f40914f8-e800-11e8-8a89-000c293ad492   1Gi        RWX            Delete           Bound       default/glusterfs-nginx-html                              gluster-heketi                          4m
persistentvolume/pvc-f40c5d4b-e800-11e8-8a89-000c293ad492   1Gi        RWX            Delete           Bound       default/glusterfs-nginx-conf                              gluster-heketi                          4m

  Check the mounts inside the pod:

[root@k8s-master01 gfs]# kubectl exec -ti nginx-gfs-77c758ccc-2hwl6 -- df -Th
Filesystem                                         Type            Size  Used Avail Use% Mounted on
overlay                                            overlay          86G  6.6G   80G   8% /
tmpfs                                              tmpfs           7.8G     0  7.8G   0% /dev
tmpfs                                              tmpfs           7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/centos-root                            xfs              86G  6.6G   80G   8% /etc/hosts
shm                                                tmpfs            64M     0   64M   0% /dev/shm
192.168.20.20:vol_b9c68075c6f20438b46db892d15ed45a fuse.glusterfs 1014M   43M  972M   5% /etc/nginx/conf.d
192.168.20.20:vol_32146a51be9f980c14bc86c34f67ebd5 fuse.glusterfs 1014M   43M  972M   5% /usr/share/nginx/html
tmpfs                                              tmpfs           7.8G   12K  7.8G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs

  Mount the html volume and create index.html:

[root@k8s-master01 gfs]# mount -t glusterfs 192.168.20.20:vol_32146a51be9f980c14bc86c34f67ebd5 /mnt/
[root@k8s-master01 gfs]# cd /mnt/
[root@k8s-master01 mnt]# ls
[root@k8s-master01 mnt]# echo "test" > index.html
[root@k8s-master01 mnt]# kubectl exec -ti nginx-gfs-77c758ccc-2hwl6 -- cat /usr/share/nginx/html/index.html
test

  Scale nginx up:

[root@k8s-master01 ~]# kubectl get deploy
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
heketi      1         1         1            1           14h
nginx-gfs   2         2         2            2           23m
[root@k8s-master01 ~]# kubectl scale deploy nginx-gfs --replicas 3
deployment.extensions/nginx-gfs scaled
[root@k8s-master01 ~]# kubectl get po
NAME                               READY     STATUS      RESTARTS   AGE
glusterfs-5npwn                    1/1       Running     0          18h
glusterfs-8zfzq                    1/1       Running     0          17h
glusterfs-bd5dx                    1/1       Running     0          18h
heketi-5cb5f55d9f-5mtqt            1/1       Running     0          14h
nginx-gfs-77c758ccc-2hwl6          1/1       Running     0          11m
nginx-gfs-77c758ccc-6fphl          1/1       Running     0          8m
nginx-gfs-77c758ccc-kxzfz          1/1       Running     0          10m

  Check the file contents in the new pod:

  [root@k8s-master01 ~]# kubectl exec -ti nginx-gfs-77c758ccc-6fphl -- cat /usr/share/nginx/html/index.html
  test

9. Expand GlusterFS

9.1 Add a disk to an existing node

  Building on the nodes above, suppose a disk is added to k8s-node02.

  Check the name and IP of the GlusterFS pod deployed on k8s-node02:

[root@k8s-master01 ~]# kubectl get po -o wide -l glusterfs-node
NAME              READY     STATUS    RESTARTS   AGE       IP              NODE
glusterfs-5npwn   1/1       Running   0          20h       192.168.20.31   k8s-node02
glusterfs-8zfzq   1/1       Running   0          20h       192.168.20.20   k8s-master01
glusterfs-bd5dx   1/1       Running   0          20h       192.168.20.30   k8s-node01

  Confirm the newly added device on k8s-node02:

Disk /dev/sdc: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

  Use heketi-cli to look up the cluster ID and all node IDs:

[root@k8s-master01 ~]# heketi-cli cluster info
Error: Cluster id missing
[root@k8s-master01 ~]# heketi-cli cluster list
Clusters:
Id:5dec5676c731498c2bdf996e110a3e5e [file][block]
[root@k8s-master01 ~]# heketi-cli cluster info 5dec5676c731498c2bdf996e110a3e5e
Cluster id: 5dec5676c731498c2bdf996e110a3e5e
Nodes:
0f00835397868d3591f45432e432ba38
d38819746cab7d567ba5f5f4fea45d91
fb181b0cef571e9af7d84d2ecf534585
Volumes:
32146a51be9f980c14bc86c34f67ebd5
56d636b452d31a9d4cb523d752ad0891
828dc2dfaa00b7213e831b91c6213ae4
b9c68075c6f20438b46db892d15ed45a
Block: true

File: true

  Find the node ID corresponding to k8s-node02:

[root@k8s-master01 ~]# heketi-cli node info 0f00835397868d3591f45432e432ba38
Node Id: 0f00835397868d3591f45432e432ba38
State: online
Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
Zone: 1
Management Hostname: k8s-node02
Storage Hostname: 192.168.20.31
Devices:
Id:82af8e5f2fb2e1396f7c9e9f7698a178   Name:/dev/sdb            State:online    Size (GiB):39      Used (GiB):25      Free (GiB):14      Bricks:4

  Add the disk to node02 in the GlusterFS cluster:

[root@k8s-master01 ~]# heketi-cli device add --name=/dev/sdc --node=0f00835397868d3591f45432e432ba38
Device added successfully

  Check the result:

[root@k8s-master01 ~]# heketi-cli node info 0f00835397868d3591f45432e432ba38
Node Id: 0f00835397868d3591f45432e432ba38
State: online
Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
Zone: 1
Management Hostname: k8s-node02
Storage Hostname: 192.168.20.31
Devices:
Id:5539e74bc2955e7c70b3a20e72c04615   Name:/dev/sdc            State:online    Size (GiB):39      Used (GiB):0       Free (GiB):39      Bricks:0
Id:82af8e5f2fb2e1396f7c9e9f7698a178   Name:/dev/sdb            State:online    Size (GiB):39      Used (GiB):25      Free (GiB):14      Bricks:4
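  With free capacity available, existing volumes can also be grown through Heketi; a hedged sketch (the volume ID is the html volume from section 8, taken from heketi-cli volume list, and the size is only an example):

# Grow an existing Heketi-managed volume by 10 GiB
heketi-cli volume expand --volume=32146a51be9f980c14bc86c34f67ebd5 --expand-size=10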

9.2 Add a new node

  Suppose k8s-master03 (IP 192.168.20.22) is to join the GlusterFS cluster, with its /dev/sdc added to the cluster.

  Apply the label; a GlusterFS pod is then created on the node automatically:

[root@k8s-master01 kubernetes]# kubectl label node k8s-master03 storagenode=glusterfs
node/k8s-master03 labeled
[root@k8s-master01 kubernetes]# kubectl get pod -owide -l glusterfs-node
NAME              READY     STATUS              RESTARTS   AGE       IP              NODE
glusterfs-5npwn   1/1       Running             0          21h       192.168.20.31   k8s-node02
glusterfs-8zfzq   1/1       Running             0          21h       192.168.20.20   k8s-master01
glusterfs-96w74   0/1       ContainerCreating   0          2m        192.168.20.22   k8s-master03
glusterfs-bd5dx   1/1       Running             0          21h       192.168.20.30   k8s-node01

  Run peer probe from any existing GlusterFS pod:

[root@k8s-master01 kubernetes]# kubectl exec -ti glusterfs-5npwn -- gluster peer probe 192.168.20.22
peer probe: success.
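  The peer state can be checked from the same pod to confirm the new member is connected:

# The new peer should report "Peer in Cluster (Connected)"
kubectl exec -ti glusterfs-5npwn -- gluster peer status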

  Register the new node with the heketi-managed GlusterFS cluster:

[root@k8s-master01 kubernetes]# heketi-cli cluster list
Clusters:
Id:5dec5676c731498c2bdf996e110a3e5e [file][block]
[root@k8s-master01 kubernetes]# heketi-cli node add --zone=1 --cluster=5dec5676c731498c2bdf996e110a3e5e --management-host-name=k8s-master03 --storage-host-name=192.168.20.22
Node information:
Id: 150bc8c458a70310c6137e840619758c
State: online
Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
Zone: 1
Management Hostname k8s-master03
Storage Hostname 192.168.20.22

  Add the new node's disk to the cluster:

[root@k8s-master01 kubernetes]# heketi-cli device add --name=/dev/sdc --node=150bc8c458a70310c6137e840619758c
Device added successfully

  Verify:

[root@k8s-master01 kubernetes]# heketi-cli node list
Id:0f00835397868d3591f45432e432ba38    Cluster:5dec5676c731498c2bdf996e110a3e5e
Id:150bc8c458a70310c6137e840619758c    Cluster:5dec5676c731498c2bdf996e110a3e5e
Id:d38819746cab7d567ba5f5f4fea45d91    Cluster:5dec5676c731498c2bdf996e110a3e5e
Id:fb181b0cef571e9af7d84d2ecf534585    Cluster:5dec5676c731498c2bdf996e110a3e5e
[root@k8s-master01 kubernetes]# heketi-cli node info 150bc8c458a70310c6137e840619758c
Node Id: 150bc8c458a70310c6137e840619758c
State: online
Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
Zone: 1
Management Hostname: k8s-master03
Storage Hostname: 192.168.20.22
Devices:
Id:2d5210c19858fb7ea3f805e6f582ecce   Name:/dev/sdc            State:online    Size (GiB):39      Used (GiB):0       Free (GiB):39      Bricks:0 

Original article: https://www.cnblogs.com/dukuan/p/9954094.html
