Lab environment
# Bring up 5 VMs (CentOS 7); add four extra disks to each of the four node servers
node1:  192.168.52.149
node2:  192.168.52.132
node3:  192.168.52.128
node4:  192.168.52.135
client: 192.168.52.133
# Set each VM's hostname so the machines are easy to tell apart
hostnamectl set-hostname node1
hostnamectl set-hostname node2
hostnamectl set-hostname node3
hostnamectl set-hostname node4
hostnamectl set-hostname client
Lab procedure
1. On node1, create a script that partitions and auto-mounts the new disks, then push it to the other nodes
[root@node1 ~]# cd /opt/
[root@node1 opt]# vim disk.sh
#!/bin/bash
echo "the disks exist list:"
# "磁盘" is the zh_CN locale label in fdisk output; on an English locale grep for 'Disk /dev/sd[a-z]' instead
fdisk -l | grep '磁盘 /dev/sd[a-z]'
echo "=================================================="
PS3="choose which disk you want to create:"
select VAR in `ls /dev/sd* | grep -o 'sd[b-z]' | uniq` quit
do
case $VAR in
sda)
fdisk -l /dev/sda
break ;;
sd[b-z])
#create a primary partition; the blank lines accept the defaults
#for partition number, first sector and last sector
echo "n
p



w" | fdisk /dev/$VAR
#make filesystem
mkfs.xfs -i size=512 /dev/${VAR}1 &> /dev/null
#mount the filesystem
mkdir -p /data/${VAR}1 &> /dev/null
echo "/dev/${VAR}1 /data/${VAR}1 xfs defaults 0 0" >> /etc/fstab
mount -a &> /dev/null
break ;;
quit)
break ;;
*)
echo "wrong disk,please check again" ;;
esac
done
[root@node1 opt]# chmod +x disk.sh                        ## make it executable
[root@node1 opt]# scp disk.sh root@192.168.52.132:/opt/   ## push to node2
[root@node1 opt]# scp disk.sh root@192.168.52.128:/opt/   ## push to node3
[root@node1 opt]# scp disk.sh root@192.168.52.135:/opt/   ## push to node4
2. Run the "disk.sh" script on each of the four nodes to partition and mount the added disks
[root@node1 opt]# ./disk.sh
## follow the script's prompts to mount each of the four new disks in turn
[root@node1 opt]# df -hT    ## verify the mounts
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda2 xfs 20G 3.3G 17G 17% /
devtmpfs devtmpfs 898M 0 898M 0% /dev
tmpfs tmpfs 912M 0 912M 0% /dev/shm
tmpfs tmpfs 912M 9.0M 903M 1% /run
tmpfs tmpfs 912M 0 912M 0% /sys/fs/cgroup
/dev/sda3 xfs 10G 33M 10G 1% /home
/dev/sda1 xfs 6.0G 174M 5.9G 3% /boot
tmpfs tmpfs 183M 12K 183M 1% /run/user/42
tmpfs tmpfs 183M 0 183M 0% /run/user/0
/dev/sdc1 xfs 20G 33M 20G 1% /data/sdc1
/dev/sdd1 xfs 20G 33M 20G 1% /data/sdd1
/dev/sde1 xfs 20G 33M 20G 1% /data/sde1
/dev/sdb1 xfs 20G 33M 20G 1% /data/sdb1
[root@node1 opt]#
## node1 is shown here; the other nodes are identical
3. Configure hostname resolution on every VM (including client)
[root@node1 opt]# vim /etc/hosts
## append at the end of the file
192.168.52.149 node1
192.168.52.132 node2
192.168.52.128 node3
192.168.52.135 node4
[root@node1 opt]# scp /etc/hosts root@192.168.52.132:/etc/hosts   ## push to node2
root@192.168.52.132's password:
hosts 100% 242 27.2KB/s 00:00
[root@node1 opt]# scp /etc/hosts root@192.168.52.128:/etc/hosts   ## push to node3
root@192.168.52.128's password:
hosts 100% 242 232.7KB/s 00:00
[root@node1 opt]# scp /etc/hosts root@192.168.52.135:/etc/hosts   ## push to node4
root@192.168.52.135's password:
hosts 100% 242 24.3KB/s 00:00
[root@node1 opt]# scp /etc/hosts root@192.168.52.133:/etc/hosts   ## push to client
root@192.168.52.133's password:
hosts 100% 242 192.7KB/s 00:00
4. Configure a local yum repository on each of the four nodes and install the required packages
# the same steps on all four nodes
[root@node1 opt]# mkdir /abc
[root@node1 opt]# mount.cifs //192.168.100.100/tools /abc/    ## mount the share
Password for root@//192.168.100.100/tools:
[root@node1 opt]# cd /etc/yum.repos.d/
[root@node1 yum.repos.d]# ls
CentOS-Base.repo CentOS-Debuginfo.repo CentOS-Media.repo CentOS-Vault.repo
CentOS-CR.repo CentOS-fasttrack.repo CentOS-Sources.repo
[root@node1 yum.repos.d]# mkdir bak
[root@node1 yum.repos.d]# mv CentOS-* bak/    ## back up the stock repo files
[root@node1 yum.repos.d]# ls
bak
[root@node1 yum.repos.d]# vim GLFS.repo       ## create the yum repository
[GLFS]
name=glfs
baseurl=file:///abc/gfsrepo
gpgcheck=0
enabled=1
[root@node1 yum.repos.d]# yum list            ## refresh the package list
[root@node1 yum.repos.d]# yum install glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma -y    ## install
5. Start the glusterd service on each of the four nodes
[root@node1 yum.repos.d]# cd
[root@node1 ~]# systemctl stop firewalld.service     ## stop the firewall
[root@node1 ~]# setenforce 0                         ## put SELinux in permissive mode
[root@node1 ~]#
[root@node1 ~]# systemctl start glusterd.service     ## start the service
[root@node1 ~]# systemctl enable glusterd.service    ## enable it at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/glusterd.service to /usr/lib/systemd/system/glusterd.service.
[root@node1 ~]# ntpdate ntp1.aliyun.com              ## sync the clock
19 Dec 07:05:18 ntpdate[2651]: adjust time server 120.25.115.20 offset -0.000497 sec
[root@node1 ~]#
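A one-shot ntpdate only corrects the clock at that moment; if the node clocks are expected to drift, a periodic resync can be scheduled with cron (a sketch, assuming ntpdate lives at /usr/sbin/ntpdate as on CentOS 7):

```
# root's crontab (crontab -e): resync every 30 minutes
*/30 * * * * /usr/sbin/ntpdate ntp1.aliyun.com > /dev/null 2>&1
```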
6. From any one node, add the other nodes to the trusted pool
## probing from a single node is enough
[root@node1 ~]# gluster peer probe node2    ## add node2
peer probe: success.
[root@node1 ~]# gluster peer probe node3    ## add node3
peer probe: success.
[root@node1 ~]# gluster peer probe node4    ## add node4
peer probe: success.
[root@node1 ~]#
[root@node1 ~]# gluster peer status         ## check the pool status
Number of Peers: 3
Hostname: node2
Uuid: 4f077ec5-212b-4b67-bf3d-46a811bf9aca
State: Peer in Cluster (Connected)
Hostname: node3
Uuid: 1cdb5a37-8a26-4e7a-b7c8-ae092809d220
State: Peer in Cluster (Connected)
Hostname: node4
Uuid: 0b5d9651-808c-4fc9-8e0a-47d75952b267
State: Peer in Cluster (Connected)
[root@node1 ~]#
7. Install the glusterfs client packages on the client host
[root@client ~]# systemctl stop firewalld.service    ## stop the firewall
[root@client ~]# setenforce 0                        ## put SELinux in permissive mode
[root@client ~]#
[root@client ~]# mkdir /abc
[root@client ~]# mount.cifs //192.168.100.100/tools /abc/    ## mount the share
Password for root@//192.168.100.100/tools:
[root@client ~]# cd /etc/yum.repos.d/
[root@client yum.repos.d]# ls
CentOS-Base.repo CentOS-Debuginfo.repo CentOS-Media.repo CentOS-Vault.repo
CentOS-CR.repo CentOS-fasttrack.repo CentOS-Sources.repo
[root@client yum.repos.d]# mkdir bak
[root@client yum.repos.d]# mv CentOS-* bak/    ## back up the stock repo files
[root@client yum.repos.d]# ls
bak
[root@client yum.repos.d]# vim GLFS.repo       ## create the local yum repository
[GLFS]
name=glfs
baseurl=file:///abc/gfsrepo
gpgcheck=0
enabled=1
[root@client yum.repos.d]# yum list            ## refresh the package list
[root@client yum.repos.d]# yum install glusterfs glusterfs-fuse -y    ## install the client packages
Distributed volume
1. Create the volume
[root@node1 ~]# gluster volume create dis-vol node1:/data/sdb1 node2:/data/sdb1 force
volume create: dis-vol: success: please start the volume to access data
2. Show the volume info
[root@node1 ~]# gluster volume info dis-vol
Volume Name: dis-vol
Type: Distribute
Volume ID: 03b2da8d-516a-45a8-9d2e-e6c84d2a389c
Status: Created
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node1:/data/sdb1
Brick2: node2:/data/sdb1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
3. List the volumes
[root@node1 ~]# gluster volume list
dis-vol
4. Start the volume
[root@node1 ~]# gluster volume start dis-vol
volume start: dis-vol: success
5. Check the volume status
[root@node1 ~]# gluster volume status dis-vol
Status of volume: dis-vol
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick node1:/data/sdb1 49152 0 Y 2742
Brick node2:/data/sdb1 49152 0 Y 2663
Task Status of Volume dis-vol
------------------------------------------------------------------------------
There are no active volume tasks
6. Mount it on the client
[root@client ~]# mkdir -p /text/dis                          ## create the mount point
[root@client ~]# mount.glusterfs node1:dis-vol /text/dis/    ## mount
[root@client ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda2 xfs 20G 3.3G 17G 17% /
devtmpfs devtmpfs 898M 0 898M 0% /dev
tmpfs tmpfs 912M 0 912M 0% /dev/shm
tmpfs tmpfs 912M 9.0M 903M 1% /run
tmpfs tmpfs 912M 0 912M 0% /sys/fs/cgroup
/dev/sda5 xfs 10G 37M 10G 1% /home
/dev/sda1 xfs 6.0G 174M 5.9G 3% /boot
tmpfs tmpfs 183M 12K 183M 1% /run/user/42
tmpfs tmpfs 183M 0 183M 0% /run/user/0
//192.168.100.100/tools cifs 311G 93G 218G 30% /abc
node1:dis-vol fuse.glusterfs 40G 65M 40G 1% /text/dis    ## mounted successfully
[root@client ~]#
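The mount above does not survive a reboot. If persistence is wanted, an /etc/fstab entry on the client does the same job (a sketch using the mount point from this step; `_netdev` defers the mount until the network is up):

```
node1:dis-vol  /text/dis  glusterfs  defaults,_netdev  0 0
```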
Striped volume
1. Create the volume
[root@node1 ~]# gluster volume create stripe-vol stripe 2 node1:/data/sdc1 node2:/data/sdc1 force
volume create: stripe-vol: success: please start the volume to access data
2. List the volumes
[root@node1 ~]# gluster volume list
dis-vol
stripe-vol
3. Show the volume info
[root@node1 ~]# gluster volume info stripe-vol
Volume Name: stripe-vol
Type: Stripe
Volume ID: 64a01a34-2cd7-4e76-b960-c5399a1e9158
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node1:/data/sdc1
Brick2: node2:/data/sdc1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
4. Start the volume
[root@node1 ~]# gluster volume start stripe-vol
volume start: stripe-vol: success
5. Mount it on the client
[root@client ~]# mkdir /text/strip                                ## create the mount point
[root@client ~]# mount.glusterfs node1:stripe-vol /text/strip/    ## mount
[root@client ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda2 xfs 20G 3.3G 17G 17% /
devtmpfs devtmpfs 898M 0 898M 0% /dev
tmpfs tmpfs 912M 0 912M 0% /dev/shm
tmpfs tmpfs 912M 9.0M 903M 1% /run
tmpfs tmpfs 912M 0 912M 0% /sys/fs/cgroup
/dev/sda5 xfs 10G 37M 10G 1% /home
/dev/sda1 xfs 6.0G 174M 5.9G 3% /boot
tmpfs tmpfs 183M 12K 183M 1% /run/user/42
tmpfs tmpfs 183M 0 183M 0% /run/user/0
//192.168.100.100/tools cifs 311G 93G 218G 30% /abc
node1:dis-vol fuse.glusterfs 40G 65M 40G 1% /text/dis
node1:stripe-vol fuse.glusterfs 40G 65M 40G 1% /text/strip    ## mounted successfully
[root@client ~]#
Replicated volume
1. Create the volume
[root@node1 ~]# gluster volume create rep-vol replica 2 node3:/data/sdb1 node4:/data/sdb1 force
volume create: rep-vol: success: please start the volume to access data
2. List the volumes
[root@node1 ~]# gluster volume list
dis-vol
rep-vol
stripe-vol
3. Start the volume
[root@node1 ~]# gluster volume start rep-vol
volume start: rep-vol: success
4. Show the volume info
[root@node1 ~]# gluster volume info rep-vol
Volume Name: rep-vol
Type: Replicate
Volume ID: 161ab1f4-f738-43a7-89c9-b07c25e57a5b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node3:/data/sdb1
Brick2: node4:/data/sdb1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@node1 ~]#
5. Mount it on the client
[root@client ~]# mkdir /text/rep
[root@client ~]# mount.glusterfs node3:rep-vol /text/rep/
[root@client ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda2 xfs 20G 3.3G 17G 17% /
devtmpfs devtmpfs 898M 0 898M 0% /dev
tmpfs tmpfs 912M 0 912M 0% /dev/shm
tmpfs tmpfs 912M 9.0M 903M 1% /run
tmpfs tmpfs 912M 0 912M 0% /sys/fs/cgroup
/dev/sda5 xfs 10G 37M 10G 1% /home
/dev/sda1 xfs 6.0G 174M 5.9G 3% /boot
tmpfs tmpfs 183M 12K 183M 1% /run/user/42
tmpfs tmpfs 183M 0 183M 0% /run/user/0
//192.168.100.100/tools cifs 311G 93G 218G 30% /abc
node1:dis-vol fuse.glusterfs 40G 65M 40G 1% /text/dis
node1:stripe-vol fuse.glusterfs 40G 65M 40G 1% /text/strip
node3:rep-vol fuse.glusterfs 20G 33M 20G 1% /text/rep
[root@client ~]#
Distributed-striped volume
1. Create the volume
[root@node1 ~]# gluster volume create dis-stripe stripe 2 node1:/data/sdd1 node2:/data/sdd1 node3:/data/sdd1 node4:/data/sdd1 force
volume create: dis-stripe: success: please start the volume to access data
2. Start the volume
[root@node1 ~]# gluster volume start dis-stripe
volume start: dis-stripe: success
3. Show the volume info
[root@node1 ~]# gluster volume info dis-stripe
Volume Name: dis-stripe
Type: Distributed-Stripe
Volume ID: fb3949d3-591d-4379-8b52-3202bb206762
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node1:/data/sdd1
Brick2: node2:/data/sdd1
Brick3: node3:/data/sdd1
Brick4: node4:/data/sdd1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@node1 ~]#
4. Mount it on the client
[root@client ~]# mkdir /text/dis-stripe
[root@client ~]# mount.glusterfs node2:dis-stripe /text/dis-stripe
[root@client ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda2 xfs 20G 3.3G 17G 17% /
devtmpfs devtmpfs 898M 0 898M 0% /dev
tmpfs tmpfs 912M 0 912M 0% /dev/shm
tmpfs tmpfs 912M 9.0M 903M 1% /run
tmpfs tmpfs 912M 0 912M 0% /sys/fs/cgroup
/dev/sda5 xfs 10G 37M 10G 1% /home
/dev/sda1 xfs 6.0G 174M 5.9G 3% /boot
tmpfs tmpfs 183M 12K 183M 1% /run/user/42
tmpfs tmpfs 183M 0 183M 0% /run/user/0
//192.168.100.100/tools cifs 311G 93G 218G 30% /abc
node1:dis-vol fuse.glusterfs 40G 65M 40G 1% /text/dis
node1:stripe-vol fuse.glusterfs 40G 65M 40G 1% /text/strip
node3:rep-vol fuse.glusterfs 20G 33M 20G 1% /text/rep
node2:dis-stripe fuse.glusterfs 80G 130M 80G 1% /text/dis-stripe
[root@client ~]#
Distributed-replicated volume
1. Create the volume
[root@node1 ~]# gluster volume create die-replica replica 2 node1:/data/sde1 node2:/data/sde1 node3:/data/sde1 node4:/data/sde1 force
volume create: die-replica: success: please start the volume to access data
2. Start the volume
[root@node1 ~]# gluster volume start die-replica
volume start: die-replica: success
3. Show the volume info
[root@node1 ~]# gluster volume info die-replica
Volume Name: die-replica
Type: Distributed-Replicate
Volume ID: 6b5af491-229e-4342-ab28-454551304b38
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node1:/data/sde1
Brick2: node2:/data/sde1
Brick3: node3:/data/sde1
Brick4: node4:/data/sde1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
4. List the volumes
[root@node1 ~]# gluster volume list
die-replica
dis-stripe
dis-vol
rep-vol
stripe-vol
[root@node1 ~]#
5. Mount it on the client
[root@client ~]# mkdir /text/dis-replica
[root@client ~]# mount.glusterfs node4:die-replica /text/dis-replica
[root@client ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda2 xfs 20G 3.3G 17G 17% /
devtmpfs devtmpfs 898M 0 898M 0% /dev
tmpfs tmpfs 912M 0 912M 0% /dev/shm
tmpfs tmpfs 912M 9.0M 903M 1% /run
tmpfs tmpfs 912M 0 912M 0% /sys/fs/cgroup
/dev/sda5 xfs 10G 37M 10G 1% /home
/dev/sda1 xfs 6.0G 174M 5.9G 3% /boot
tmpfs tmpfs 183M 12K 183M 1% /run/user/42
tmpfs tmpfs 183M 0 183M 0% /run/user/0
//192.168.100.100/tools cifs 311G 93G 218G 30% /abc
node1:dis-vol fuse.glusterfs 40G 65M 40G 1% /text/dis
node1:stripe-vol fuse.glusterfs 40G 65M 40G 1% /text/strip
node3:rep-vol fuse.glusterfs 20G 33M 20G 1% /text/rep
node2:dis-stripe fuse.glusterfs 80G 130M 80G 1% /text/dis-stripe
node4:die-replica fuse.glusterfs 40G 65M 40G 1% /text/dis-replica
[root@client ~]#
Data storage test
1. On client, create 5 test files: demo1, demo2, demo3, demo4, demo5
[root@client ~]# dd if=/dev/zero of=/demo1.log bs=1M count=40
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 0.10165 s, 413 MB/s
[root@client ~]# dd if=/dev/zero of=/demo2.log bs=1M count=40
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 0.234021 s, 179 MB/s
[root@client ~]# dd if=/dev/zero of=/demo3.log bs=1M count=40
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 0.267065 s, 157 MB/s
[root@client ~]# dd if=/dev/zero of=/demo4.log bs=1M count=40
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 0.256854 s, 163 MB/s
[root@client ~]# dd if=/dev/zero of=/demo5.log bs=1M count=40
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 0.43331 s, 96.8 MB/s
[root@client ~]#
[root@client ~]# ll -h /
-rw-r--r--. 1 root root 40M Dec 19 07:29 demo1.log
-rw-r--r--. 1 root root 40M Dec 19 07:29 demo2.log
-rw-r--r--. 1 root root 40M Dec 19 07:29 demo3.log
-rw-r--r--. 1 root root 40M Dec 19 07:29 demo4.log
-rw-r--r--. 1 root root 40M Dec 19 07:30 demo5.log
2. Copy them to the mount points of all five volumes
[root@client ~]# cp /demo* /text/dis
[root@client ~]# cp /demo* /text/strip
[root@client ~]# cp /demo* /text/rep
[root@client ~]# cp /demo* /text/dis-stripe
[root@client ~]# cp /demo* /text/dis-replica
[root@client ~]#
3. Inspect the distributed volume: whole files are placed on one brick or the other by hash
[root@node1 ~]# ll -h /data/sdb1/
total 160M
-rw-r--r--. 2 root root 40M Dec 19 07:33 demo1.log
-rw-r--r--. 2 root root 40M Dec 19 07:33 demo2.log
-rw-r--r--. 2 root root 40M Dec 19 07:33 demo3.log
-rw-r--r--. 2 root root 40M Dec 19 07:33 demo4.log
[root@node1 ~]#
[root@node2 ~]# ll -h /data/sdb1/
total 40M
-rw-r--r--. 2 root root 40M Dec 19 07:33 demo5.log
[root@node2 ~]#
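The placement above (demo1–4 on one brick, demo5 on the other) is decided by Gluster's elastic hash of the file name. A toy sketch of the same idea, hashing a name to pick one of the two bricks (`cksum` stands in for Gluster's real hash function, and the brick list is taken from this volume):

```shell
#!/bin/bash
# Toy model of hash-based file placement: NOT Gluster's actual elastic
# hash, just the same idea of mapping a file name to exactly one brick.
bricks=("node1:/data/sdb1" "node2:/data/sdb1")

pick_brick() {
    # checksum the file name and take it modulo the number of bricks
    local sum
    sum=$(printf '%s' "$1" | cksum | awk '{print $1}')
    echo "${bricks[sum % ${#bricks[@]}]}"
}

for f in demo1.log demo2.log demo3.log demo4.log demo5.log; do
    echo "$f -> $(pick_brick "$f")"
done
```

Gluster actually divides a 32-bit hash range among the bricks (stored in directory extended attributes), so adding a brick only remaps part of the range; the modulo here is purely illustrative.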
4. Inspect the striped volume: each of the two nodes stores half of every file
[root@node1 ~]# ll -h /data/sdc1
total 100M
-rw-r--r--. 2 root root 20M Dec 19 07:33 demo1.log
-rw-r--r--. 2 root root 20M Dec 19 07:33 demo2.log
-rw-r--r--. 2 root root 20M Dec 19 07:33 demo3.log
-rw-r--r--. 2 root root 20M Dec 19 07:33 demo4.log
-rw-r--r--. 2 root root 20M Dec 19 07:33 demo5.log
[root@node1 ~]#
[root@node2 ~]# ll -h /data/sdc1
total 100M
-rw-r--r--. 2 root root 20M Dec 19 07:33 demo1.log
-rw-r--r--. 2 root root 20M Dec 19 07:33 demo2.log
-rw-r--r--. 2 root root 20M Dec 19 07:33 demo3.log
-rw-r--r--. 2 root root 20M Dec 19 07:33 demo4.log
-rw-r--r--. 2 root root 20M Dec 19 07:33 demo5.log
[root@node2 ~]#
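Each brick holds 20 MB of every 40 MB file because `stripe 2` spreads the data across both bricks. Gluster actually interleaves fixed-size blocks (128 KB by default) rather than cutting the file in two, but the resulting halving of per-brick size can be sketched with `split`:

```shell
#!/bin/bash
# Simplified striping demo: divide a file into equal chunks, one per
# "brick". Real striping interleaves 128 KB blocks across the bricks.
printf 'AAAABBBB' > /tmp/demo.bin              # 8-byte stand-in for demo1.log
split -b 4 /tmp/demo.bin /tmp/brick-chunk.     # 4 bytes per chunk
wc -c /tmp/brick-chunk.aa /tmp/brick-chunk.ab  # each chunk is half the file
```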
5. Inspect the replicated volume: each of the two nodes holds a complete copy of the data, like a mirror
[root@node3 ~]# ll -h /data/sdb1
total 200M
-rw-r--r--. 2 root root 40M Dec 19 07:33 demo1.log
-rw-r--r--. 2 root root 40M Dec 19 07:33 demo2.log
-rw-r--r--. 2 root root 40M Dec 19 07:33 demo3.log
-rw-r--r--. 2 root root 40M Dec 19 07:33 demo4.log
-rw-r--r--. 2 root root 40M Dec 19 07:33 demo5.log
[root@node3 ~]#
[root@node4 ~]# ll -h /data/sdb1
total 200M
-rw-r--r--. 2 root root 40M Dec 19 07:33 demo1.log
-rw-r--r--. 2 root root 40M Dec 19 07:33 demo2.log
-rw-r--r--. 2 root root 40M Dec 19 07:33 demo3.log
-rw-r--r--. 2 root root 40M Dec 19 07:33 demo4.log
-rw-r--r--. 2 root root 40M Dec 19 07:33 demo5.log
[root@node4 ~]#
6. Inspect the distributed-striped volume: the hash sends some files to the first pair of nodes and the rest to the second pair, with each node in a pair storing half of every file it holds
[root@node1 ~]# ll -h /data/sdd1/
total 80M
-rw-r--r--. 2 root root 20M Dec 19 07:34 demo1.log
-rw-r--r--. 2 root root 20M Dec 19 07:34 demo2.log
-rw-r--r--. 2 root root 20M Dec 19 07:34 demo3.log
-rw-r--r--. 2 root root 20M Dec 19 07:34 demo4.log
[root@node1 ~]#
[root@node2 ~]# ll -h /data/sdd1/
total 80M
-rw-r--r--. 2 root root 20M Dec 19 07:34 demo1.log
-rw-r--r--. 2 root root 20M Dec 19 07:34 demo2.log
-rw-r--r--. 2 root root 20M Dec 19 07:34 demo3.log
-rw-r--r--. 2 root root 20M Dec 19 07:34 demo4.log
[root@node2 ~]#
[root@node3 ~]# ll -h /data/sdd1/
total 20M
-rw-r--r--. 2 root root 20M Dec 19 07:34 demo5.log
[root@node3 ~]#
[root@node4 ~]# ll -h /data/sdd1/
total 20M
-rw-r--r--. 2 root root 20M Dec 19 07:34 demo5.log
[root@node4 ~]#
7. Inspect the distributed-replicated volume: the hash distributes files across the two replica pairs, and the two nodes in each pair hold identical copies
[root@node1 ~]# ll -h /data/sde1/
total 160M
-rw-r--r--. 2 root root 40M Dec 19 07:34 demo1.log
-rw-r--r--. 2 root root 40M Dec 19 07:34 demo2.log
-rw-r--r--. 2 root root 40M Dec 19 07:34 demo3.log
-rw-r--r--. 2 root root 40M Dec 19 07:34 demo4.log
[root@node1 ~]#
[root@node2 ~]# ll -h /data/sde1/
total 160M
-rw-r--r--. 2 root root 40M Dec 19 07:34 demo1.log
-rw-r--r--. 2 root root 40M Dec 19 07:34 demo2.log
-rw-r--r--. 2 root root 40M Dec 19 07:34 demo3.log
-rw-r--r--. 2 root root 40M Dec 19 07:34 demo4.log
[root@node2 ~]#
[root@node3 ~]# ll -h /data/sde1/
total 40M
-rw-r--r--. 2 root root 40M Dec 19 07:34 demo5.log
[root@node3 ~]#
[root@node4 ~]# ll -h /data/sde1/
total 40M
-rw-r--r--. 2 root root 40M Dec 19 07:34 demo5.log
[root@node4 ~]#
Failure test
1. Shut down node2 to simulate a failure
[root@node2 ~]# init 0
2. Check the files on the client
[root@client ~]# ls /text/dis
demo1.log demo2.log demo3.log demo4.log     # demo5 is gone from the distributed volume
[root@client ~]# ls /text/rep
demo1.log demo2.log demo3.log demo4.log demo5.log     # the replicated volume is intact
[root@client ~]# ls /text/dis-replica/      # the distributed-replicated volume is intact
demo1.log demo2.log demo3.log demo4.log demo5.log
[root@client ~]# ls /text/dis-stripe/       # demo1-4 are lost from the distributed-striped volume
demo5.log
[root@client ~]# ls /text/strip/            # all data on the striped volume is lost
3. Delete a volume (power node2 back on first)
[root@node1 ~]# gluster volume list
die-replica
dis-stripe
dis-vol
rep-vol
stripe-vol
[root@node1 ~]# gluster volume stop rep-vol    ## a volume must be stopped before it can be deleted
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: rep-vol: success
[root@node1 ~]# gluster volume delete rep-vol  ## delete the volume
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: rep-vol: success
[root@node1 ~]# gluster volume list
die-replica
dis-stripe
dis-vol
stripe-vol
[root@node1 ~]#
4. Access control
gluster volume set dis-vol auth.reject 192.168.52.133    # deny this host access to / mounting of the volume
gluster volume set dis-vol auth.allow 192.168.52.133     # allow only this host to access / mount the volume
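Note that setting `auth.allow` to a single address restricts mounting to that host alone. Either option can be cleared later with `gluster volume reset`, which removes a previously set option (a sketch; run on any node in the pool):

```
gluster volume reset dis-vol auth.reject
gluster volume reset dis-vol auth.allow
```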
Original source: https://blog.51cto.com/14449541/2462123