Using GlusterFS as storage for Kubernetes

Part 1: Install GlusterFS

Installation reference: https://www.cnblogs.com/zhangb8042/p/7801181.html

Environment:

CentOS 7

[root@k8s-m ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.31.250.144 k8s-m
172.31.250.145 node
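
For reference, the install steps from the linked guide reduce to roughly the following, run on both hosts (a sketch assuming the CentOS Storage SIG repo; package names can vary by release):

yum install -y centos-release-gluster
yum install -y glusterfs-server
systemctl enable --now glusterd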

1. Configure the trusted storage pool
[root@k8s-m ~]# gluster peer status
Number of Peers: 1

Hostname: node
Uuid: 550bc83e-e15b-40da-9f63-b468d6c7bdb9
State: Peer in Cluster (Connected)
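
The pool above is formed with a single probe from k8s-m once glusterd is running on both hosts (probing from one side is enough):

[root@k8s-m ~]# gluster peer probe node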

2. Create the brick directory (on both nodes)
mkdir /data

3. Create a replicated GlusterFS volume (force is required here because the bricks sit on the root filesystem)
[root@k8s-m yum.repos.d]# gluster volume create gv0 replica 2 k8s-m:/data node:/data force
volume create: gv0: success: please start the volume to access data

4. Start the volume
[root@k8s-m yum.repos.d]# gluster volume start gv0
volume start: gv0: success

5. Check the volume status
[root@k8s-m ~]# gluster volume status gv0
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick k8s-m:/data                           49152     0          Y       7925
Brick node:/data                            49152     0          Y       18592
Self-heal Daemon on localhost               N/A       N/A        Y       7948
Self-heal Daemon on node                    N/A       N/A        Y       18615

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks
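
The volume can be smoke-tested with a plain FUSE mount before wiring it into Kubernetes (a sketch, assuming the glusterfs-fuse client package is installed):

mount -t glusterfs k8s-m:/gv0 /mnt
echo probe > /mnt/probe.txt
umount /mnt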

Part 2: Kubernetes configuration

1. Configure the Endpoints

[root@k8s-m ~]# cat glusterfs-endpoints.json
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [
        {
          "ip": "172.31.250.144"
        }
      ],
      "ports": [
        {
          "port": 1000
        }
      ]
    },
    {
      "addresses": [
        {
          "ip": "172.31.250.145"
        }
      ],
      "ports": [
        {
          "port": 1000
        }
      ]
    }
  ]
}

#Apply (the port number is arbitrary; it only has to be a valid port)
kubectl apply -f glusterfs-endpoints.json
#Check
[root@k8s-m ~]# kubectl get ep
NAME                ENDPOINTS                                 AGE
glusterfs-cluster   172.31.250.144:1000,172.31.250.145:1000   17m
kubernetes          172.31.250.144:6443                       24m


2. Configure the Service

[root@k8s-m ~]# cat glusterfs-service.json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "ports": [
      {"port": 1000}
    ]
  }
}

#Apply
kubectl apply -f glusterfs-service.json
#Check
[root@k8s-m ~]# kubectl get svc
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
glusterfs-cluster   ClusterIP   10.105.177.109   <none>        1000/TCP   17m
kubernetes          ClusterIP   10.96.0.1        <none>        443/TCP    24m
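
The Service defines no selector, so Kubernetes associates it with the Endpoints object purely by the shared name glusterfs-cluster; describing the Service should show the GlusterFS addresses attached:

kubectl describe svc glusterfs-cluster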

3. Create a test Pod

[root@k8s-m ~]# cat glusterfs-pod.json
{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "glusterfs"
    },
    "spec": {
        "containers": [
            {
                "name": "glusterfs",
                "image": "nginx",
                "volumeMounts": [
                    {
                        "mountPath": "/mnt/glusterfs",
                        "name": "glusterfsvol"
                    }
                ]
            }
        ],
        "volumes": [
            {
                "name": "glusterfsvol",
                "glusterfs": {
                    "endpoints": "glusterfs-cluster",
                    "path": "gv0", #之前创建的glusterfs卷名
                    "readOnly": true
                }
            }
        ]
    }
}

path is the name of the GlusterFS volume created earlier (gv0); note that JSON does not allow comments, so keep the manifest comment-free.

#Apply
kubectl apply -f glusterfs-pod.json
#Check
kubectl get pod
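
To confirm the container really mounted gv0 over GlusterFS (the stock nginx image includes the mount utility), a quick sanity check:

kubectl exec glusterfs -- mount | grep gluster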

4. Create a PV (PersistentVolume)

[root@k8s-m ~]# cat glusterfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-dev-volume
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "gv0"
    readOnly: false

#Apply
kubectl apply -f glusterfs-pv.yaml
#Check
kubectl get pv
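
Before any claim exists the PV should show STATUS Available; it flips to Bound once the PVC in the next step is created. If binding fails, describe shows the reason:

kubectl describe pv gluster-dev-volume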

5. Create a PVC (PersistentVolumeClaim)

[root@k8s-m ~]# cat glusterfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi

#Apply (with no storageClassName set, the claim binds to gluster-dev-volume by matching access mode and capacity)
kubectl apply -f glusterfs-pvc.yaml

#Check
[root@k8s-m ~]# kubectl get pvc
NAME              STATUS   VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
glusterfs-nginx   Bound    gluster-dev-volume   8Gi        RWX                           11m

6. Mount the volume in a Deployment and test

[root@k8s-m ~]# cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dm
spec:
  replicas: 2
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          volumeMounts:
            - name: gluster-dev-volume
              mountPath: "/usr/share/nginx/html"
      volumes:
      - name: gluster-dev-volume
        persistentVolumeClaim:
          claimName: glusterfs-nginx

#Apply
kubectl apply -f nginx-deployment.yaml
#Check
[root@k8s-m ~]# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
glusterfs                  1/1     Running   0          15m
nginx-dm-8df56c754-57kpp   1/1     Running   0          12m
nginx-dm-8df56c754-kgsbf   1/1     Running   0          12m

#Exec into one of the pods to test
[root@k8s-m ~]# kubectl exec -it nginx-dm-8df56c754-kgsbf -- /bin/sh
/ # ls /usr/share/nginx/html/
/ # cd  /usr/share/nginx/html/
/usr/share/nginx/html # touch 111.txt
/usr/share/nginx/html # ls
111.txt

#Check the /data brick directory on the node
[root@node ~]# ll /data/
total 4
-rw-r--r-- 2 root root 0 Jan 10 14:17 111.txt
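
Because gv0 is a two-brick replicated volume, the same file should also appear in the brick on k8s-m:

[root@k8s-m ~]# ll /data/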
