I. Host Planning
```
[root@NK01 ~]# cat /etc/centos-release
CentOS Linux release 7.4.1708 (Core)
```
| Role   | IP Address  | Hostname | Software                                  |
| ------ | ----------- | -------- | ----------------------------------------- |
| Server | 20.0.20.101 | NK01     | glusterfs glusterfs-fuse glusterfs-server |
|        | 20.0.20.102 | NK02     |                                           |
|        | 20.0.20.103 | NK03     |                                           |
|        | 20.0.20.104 | NK04     |                                           |
| Client | 20.0.20.105 | NK05     | glusterfs glusterfs-fuse                  |
II. Installing GlusterFS
1. Configure hosts
```
[root@NK01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
20.0.20.101 NK01
20.0.20.102 NK02
20.0.20.103 NK03
20.0.20.104 NK04
20.0.20.105 NK05
```
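The same entries have to resolve on every node. A small push loop, purely illustrative (assumes root SSH access between the nodes):

```
[root@NK01 ~]# for h in NK02 NK03 NK04 NK05; do scp /etc/hosts root@${h}:/etc/hosts; done
```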
2. Configure time synchronization
```
[root@NK01 ~]# timedatectl set-ntp true
[root@NK01 ~]# timedatectl set-timezone "Asia/Shanghai"
```
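It is worth verifying the clock is actually being synchronized; on CentOS 7, `set-ntp true` enables chronyd by default:

```
[root@NK01 ~]# timedatectl status    # look for "NTP synchronized: yes"
[root@NK01 ~]# chronyc sources -v    # the NTP sources chronyd is tracking
```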
3. Install the EPEL repository
```
[root@NK01 ~]# yum -y install epel-release
```
4. Configure the yum repository
```
[root@NK01 ~]# cat /etc/yum.repos.d/gluster.repo
[gluster]
name=gluster
baseurl=https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.0/
gpgcheck=0
enabled=1
```
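An alternative to hand-writing the repo file: the CentOS Storage SIG publishes a release package that sets up an equivalent repository. The exact package name for the 4.0 branch is an assumption here:

```
[root@NK01 ~]# yum -y install centos-release-gluster40   # Storage SIG repo package; name assumed for the 4.0 branch
```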
5. Install GlusterFS and enable it at boot
```
[root@NK01 ~]# yum -y install glusterfs glusterfs-fuse glusterfs-server
[root@NK01 ~]# systemctl start glusterd.service
[root@NK01 ~]# systemctl enable glusterd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/glusterd.service to /usr/lib/systemd/system/glusterd.service.
```
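Before moving on, confirm the management daemon really is up; 24007 is glusterd's well-known management port:

```
[root@NK01 ~]# systemctl is-active glusterd.service
active
[root@NK01 ~]# ss -tlnp | grep 24007   # glusterd should be listening here
```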
III. Configure the GlusterFS Service (run on one node only)
1. Add GlusterFS peers
```
[root@NK01 ~]# gluster peer probe NK02
peer probe: success.
[root@NK01 ~]# gluster peer probe NK03
peer probe: success.
[root@NK01 ~]# gluster peer probe NK04
peer probe: success.
[root@NK01 ~]# gluster pool list
UUID                                    Hostname    State
fc8cad5e-9f38-4e60-9be3-438bcfab74b6    NK02        Connected
45f7ec96-950f-4b05-903c-e843133e15f6    NK03        Connected
1ce58661-db17-435c-8195-23bc8300abd6    NK04        Connected
d6e7827c-c473-4228-8060-3796d198d484    localhost   Connected
```
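`gluster pool list` gives a one-line-per-node summary; for per-peer detail (UUID, hostname, connection state), the companion command is:

```
[root@NK01 ~]# gluster peer status   # detailed view of the same trusted pool membership
```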
2. Create a distributed replicated volume
```
[root@NK01 ~]# mkdir /home/drv1
[root@NK01 ~]# gluster volume create drv1 replica 2 \
> NK01:/home/drv1 \
> NK02:/home/drv1 \
> NK03:/home/drv1 \
> NK04:/home/drv1
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain
Do you still want to continue? (y/n) Y
volume create: drv1: success: please start the volume to access data
```
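As the warning says, replica 2 is prone to split-brain. With a fifth machine available, the same four data bricks could instead form a `replica 3 arbiter 1` volume: each subvolume gets two data bricks plus one metadata-only arbiter brick. A sketch, with the NK05 arbiter paths being hypothetical:

```
# Bricks are grouped in threes: data, data, arbiter.
# The arbiter stores only metadata, so it needs little disk space.
gluster volume create drv1 replica 3 arbiter 1 \
  NK01:/home/drv1 NK02:/home/drv1 NK05:/home/arb1 \
  NK03:/home/drv1 NK04:/home/drv1 NK05:/home/arb2
```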
3. View volume information
```
[root@NK01 ~]# gluster volume info

Volume Name: drv1
Type: Distributed-Replicate
Volume ID: 24b0f09f-2a50-41dc-bff2-fc60d707b1eb
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: NK01:/home/drv1
Brick2: NK02:/home/drv1
Brick3: NK03:/home/drv1
Brick4: NK04:/home/drv1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
```
4. Start the volume
```
[root@NK01 ~]# gluster volume start drv1
volume start: drv1: success
```
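After starting, every brick should come online with its own glusterfsd process, which can be checked with:

```
[root@NK01 ~]# gluster volume status drv1   # each brick should report Online: Y with a TCP port and PID
```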
IV. Mounting on the Client
1. Install GlusterFS
```
[root@NK05 ~]# yum -y install glusterfs glusterfs-fuse
```
2. Mount the GlusterFS volume (create the mountpoint first if needed: `mkdir -p /mnt/drv1`)
```
[root@NK05 ~]# mount -t glusterfs NK01:/drv1 /mnt/drv1
[root@NK05 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   50G  1.3G   49G   3% /
devtmpfs                 1.9G     0  1.9G   0% /dev
tmpfs                    1.9G     0  1.9G   0% /dev/shm
tmpfs                    1.9G  8.6M  1.9G   1% /run
tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1               1014M  145M  870M  15% /boot
/dev/mapper/centos-home  142G   33M  142G   1% /home
tmpfs                    378M     0  378M   0% /run/user/0
NK01:/drv1               283G  2.9G  280G   2% /mnt/drv1
```
3. Mount automatically at boot
```
[root@NK05 ~]# echo 'NK01:/drv1 /mnt/drv1 glusterfs defaults,_netdev 0 0' >> /etc/fstab
```
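One caveat: NK01 in the mount source is only used to fetch the volume layout at mount time, so if NK01 happens to be down, the mount itself fails even though the volume would survive. The FUSE client accepts a fallback list for that first contact; a sketch, assuming this glusterfs-fuse version supports the option:

```
[root@NK05 ~]# mount -t glusterfs -o backup-volfile-servers=NK02:NK03:NK04 NK01:/drv1 /mnt/drv1
# fstab equivalent:
# NK01:/drv1 /mnt/drv1 glusterfs defaults,_netdev,backup-volfile-servers=NK02:NK03:NK04 0 0
```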
V. Read/Write Testing with fio
1. Install fio
```
[root@NK05 ~]# yum -y install fio
```
2. Random read/write test
```
[root@NK05 ~]# fio -filename=/mnt/drv1/test1 -iodepth=64 -ioengine=libaio -direct=1 -rw=randrw -bs=512 -size=2G -numjobs=64 -runtime=30 -group_reporting -name=test-randrw
test-randrw: (g=0): rw=randrw, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=64
...
fio-3.1
Starting 64 processes
test-randrw: Laying out IO file (1 file / 2048MiB)
Jobs: 64 (f=64): [m(64)][100.0%][r=90KiB/s,w=104KiB/s][r=180,w=208 IOPS][eta 00m:00s]
test-randrw: (groupid=0, jobs=64): err= 0: pid=2585: Tue Apr 24 14:59:42 2018
   read: IOPS=270, BW=135KiB/s (139kB/s)(4081KiB/30144msec)
    slat (usec): min=64, max=997348, avg=196866.66, stdev=200571.09
    clat (msec): min=25, max=11006, avg=6492.66, stdev=2199.36
     lat (msec): min=42, max=11017, avg=6689.53, stdev=2206.83
    clat percentiles (msec):
     |  1.00th=[  109],  5.00th=[ 1636], 10.00th=[ 2769], 20.00th=[ 5336],
     | 30.00th=[ 6074], 40.00th=[ 6678], 50.00th=[ 7080], 60.00th=[ 7416],
     | 70.00th=[ 7819], 80.00th=[ 8154], 90.00th=[ 8658], 95.00th=[ 9060],
     | 99.00th=[ 9731], 99.50th=[10000], 99.90th=[10537], 99.95th=[10805],
     | 99.99th=[11073]
   bw (  KiB/s): min=    0, max=   12, per=1.90%, avg= 2.57, stdev= 1.54, samples=2341
   iops        : min=    1, max=   24, avg= 5.14, stdev= 3.08, samples=2341
  write: IOPS=267, BW=134KiB/s (137kB/s)(4035KiB/30144msec)
    slat (usec): min=43, max=442035, avg=37446.58, stdev=57804.41
    clat (usec): min=41, max=10670k, avg=6525462.61, stdev=2085318.90
     lat (msec): min=23, max=10773, avg=6562.91, stdev=2086.55
    clat percentiles (msec):
     |  1.00th=[  359],  5.00th=[ 1653], 10.00th=[ 3104], 20.00th=[ 5336],
     | 30.00th=[ 6208], 40.00th=[ 6678], 50.00th=[ 7013], 60.00th=[ 7416],
     | 70.00th=[ 7752], 80.00th=[ 8087], 90.00th=[ 8557], 95.00th=[ 8926],
     | 99.00th=[ 9731], 99.50th=[10000], 99.90th=[10402], 99.95th=[10671],
     | 99.99th=[10671]
   bw (  KiB/s): min=    0, max=   10, per=1.92%, avg= 2.55, stdev= 1.53, samples=2384
   iops        : min=    1, max=   20, avg= 5.11, stdev= 3.07, samples=2384
  lat (usec)   : 50=0.01%, 100=0.02%, 250=0.04%
  lat (msec)   : 4=0.01%, 10=0.01%, 20=0.02%, 50=0.17%, 100=0.36%
  lat (msec)   : 250=0.15%, 500=0.92%, 750=1.08%, 1000=0.43%, 2000=2.96%
  lat (msec)   : >=2000=93.80%
  cpu          : usr=0.00%, sys=0.01%, ctx=27957, majf=0, minf=2065
  IO depths    : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.3%, 32=12.6%, >=64=75.2%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.5%, >=64=0.0%
     issued rwt: total=8162,8069,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=135KiB/s (139kB/s), 135KiB/s-135KiB/s (139kB/s-139kB/s), io=4081KiB (4179kB), run=30144-30144msec
  WRITE: bw=134KiB/s (137kB/s), 134KiB/s-134KiB/s (137kB/s-137kB/s), io=4035KiB (4131kB), run=30144-30144msec
```
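512-byte random I/O is close to a worst case for a FUSE-mounted network filesystem, which is why the numbers above look so low. For a throughput-oriented figure, a large-block sequential run is the usual companion test; a sketch with parameters chosen purely for illustration:

```
# Hypothetical companion run: 1 MiB sequential writes, 4 jobs, 30 seconds
fio -filename=/mnt/drv1/test2 -iodepth=64 -ioengine=libaio -direct=1 \
    -rw=write -bs=1M -size=2G -numjobs=4 -runtime=30 \
    -group_reporting -name=test-seqwrite
```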
3. Check where the file was written
```
[root@NK01 drv1]# ll -h
total 2.0G
-rw-r--r-- 2 root root 2.0G Apr 24 14:59 test1
[root@NK02 drv1]# ll -h
total 2.0G
-rw-r--r-- 2 root root 2.0G Apr 24 14:59 test1
[root@NK03 drv1]# ll -h
total 0
[root@NK04 drv1]# ll -h
total 0
```
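Exactly one replica pair holds the file: the distribute (DHT) layer hashes the file name to a single subvolume, and the replicate (AFR) layer then mirrors it across that pair's two bricks. From the client, the pathinfo virtual xattr reports which bricks back a given file:

```
[root@NK05 ~]# getfattr -n trusted.glusterfs.pathinfo /mnt/drv1/test1   # lists the backing bricks
```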
VI. Problems Encountered
Problem: IPv6 had been disabled earlier via /etc/sysctl.conf, and after a reboot the glusterfsd service failed to start. The root cause was that rpcbind listens on an IPv6 port by default, so with IPv6 disabled rpcbind itself could not start, and glusterfsd depends on it.
Solutions (either approach works):
- Re-enable IPv6, or
- Comment out the IPv6 listener in the rpcbind socket unit:
```
[root@NK01 ~]# cat /etc/systemd/system/sockets.target.wants/rpcbind.socket
[Unit]
Description=RPCbind Server Activation Socket

[Socket]
ListenStream=/var/run/rpcbind.sock
#ListenStream=[::]:111
ListenStream=0.0.0.0:111
BindIPv6Only=ipv6-only

[Install]
WantedBy=sockets.target
[root@NK01 ~]# systemctl daemon-reload
```
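After `daemon-reload`, the edited socket unit still has to be restarted (or the host rebooted) before glusterd's rpcbind dependency is satisfied; a follow-up along these lines:

```
[root@NK01 ~]# systemctl restart rpcbind.socket rpcbind.service
[root@NK01 ~]# systemctl restart glusterd.service   # brick processes (glusterfsd) should now start
```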
Original article: http://blog.51cto.com/lullaby/2107240