1. Install the dependency libraries with zypper
zypper in bison openssl* libacl* sqlite libxml2*
zypper in libxml++* fuse fuse-devel
zypper in openssl-devel libaio-devel bison bison-devel flex systemtap-sdt-devel readline-devel
cd /home/src/glusterfs-3.8.9
./configure --prefix=/home/rzrk/server/glusterfs
Errors:
configure: error: libxml2 devel libraries not found
Couldn't figure this one out, sadly...
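The usual fix (a sketch; the exact -devel package names on SLES 12 may differ) is to install the development headers rather than only the runtime libraries, then re-run configure:
# zypper in libxml2-devel sqlite3-devel
# ./configure --prefix=/home/rzrk/server/glusterfs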
configure: error: pass --disable-tiering to build without sqlite
./configure --prefix=/home/rzrk/server/glusterfs --disable-tiering # let's build it this way instead
In the end it still wouldn't build, and with no obvious error either.
Check whether the fuse kernel module is loaded:
# lsmod |grep fuse
fuse 95758 3
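If lsmod had printed nothing, loading the module by hand is standard modprobe usage:
# modprobe fuse
# lsmod | grep fuse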
2. The source build won't compile, so install from packages
zypper in glusterfs
zypper in glusterfs-devel
Run lsb_release -a first to check the OS:
LSB Version: n/a
Distributor ID: SUSE LINUX
Description: SUSE Linux Enterprise Server 12 SP1
Release: 12.1
Codename: n/a
I. RPM installation
http://blog.csdn.net/zzulp/article/details/39527441#
--- the best doc ever hahaha, got everything working by following this one
http://blog.csdn.net/liuaigui/article/details/6284551 --- the underlying principles are explained here
1) Add the zypper repo, taken from the official download site:
zypper ar http://download.opensuse.org/repositories/home:/kkeithleatredhat:/SLES12-3.8/SLE_12_SP2/ glusterfs
zypper refresh
zypper in glusterfs-3.8.10 libgfapi0-3.8.10 libgfchangelog0-3.8.10 libgfrpc0-3.8.10 libgfxdr0-3.8.10 libglusterfs0-3.8.10
All of the libraries above must be installed, otherwise there will be problems...
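To confirm the packages really came from the new repo at version 3.8.10, a quick check with standard rpm/zypper queries:
# rpm -qa | grep -i gluster
# zypper info glusterfs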
Project requirements:
A cluster of four nodes, with each pair of nodes mirroring the other.
Keep at it, Zhiqing~~
Four machines; .4 and .18 form the cluster:
172.30.5.4
172.30.5.17
172.30.5.18
172.30.5.19
.4 and .17 mirror each other
.18 and .19 mirror each other; .17 and .19 are the clients
2) Start the service
# service glusterd start
ps -ef |grep glusterd
root 78162 1 0 16:31 ? 00:00:00 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
# netstat -tunlp|grep gluster
tcp 0 0 0.0.0.0:24007 0.0.0.0:* LISTEN 78162/glusterd
Fine, I just wanted to see whether it would start.
- # To have glusterd start at boot (systemd equivalents shown after this list):
- chkconfig glusterd on
- yum install glusterfs{,-server,-fuse,-geo-replication} --- this is how others install it, but that is CentOS, not SUSE:
wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/gluster-epel.repo -O /etc/yum.repos.d/glusterfs.repo
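As noted above, SLES 12 is systemd-based and chkconfig is only a compatibility shim; assuming the package ships a glusterd unit (which the repo packages should), the native equivalents are:
# systemctl enable glusterd
# systemctl start glusterd
# systemctl status glusterd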
3) Servers .4 and .17 back each other up
Managing glusterfs
- $gluster peer probe host|ip
- $gluster peer status # check the status of every host other than this one
- $gluster peer detach host|ip # to remove a host from the storage pool
Before creating a volume, a group of storage servers must first be formed into a storage pool; volumes are then built from the bricks those servers provide.
Once glusterd is running on a server, it can be added to the pool by its hostname or IP address.
Run this on the .4 machine:
gluster peer probe 172.30.5.18
peer probe: failed: Probe returned with Transport endpoint is not connected
Fix for the error: start the gluster service on .18.
# gluster peer probe s3
# gluster peer status
Number of Peers: 1
Hostname: s3
Uuid: 0e0230ea-74e3-48b4-a595-81be72a36309
State: Peer in Cluster (Connected)
# cat /etc/hosts
172.30.5.4 s1
172.30.5.17 s2
172.30.5.18 s3
172.30.5.19 s4
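Before probing by hostname it is worth confirming that the names resolve; getent reads /etc/hosts the same way the resolver does:
# getent hosts s3
172.30.5.18 s3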
1) Create a GlusterFS logical volume (Volume)
Since .4 and .18 are the servers, this only needs to be run on one of them.
# gluster volume create gv0 replica 2 172.30.5.4:/data/gluster 172.30.5.18:/data/gluster
It errored as follows:
volume create: gv0: failed: The brick 172.30.5.4:/data/gluster is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
The error occurs because the brick we created is on the system disk, which gluster disallows by default; in production the bricks should also be kept off the system disk as far as possible, and if they must live there, use force.
# gluster volume create gv0 replica 2 172.30.5.4:/data/gluster 172.30.5.18:/data/gluster force
volume create: gv0: success: please start the volume to access data
启用GlusterFS逻辑卷:
# gluster volume start gv0
volume start: gv0: success
Check it:
# gluster volume info
Mount from a client; let's mount on .17:
# mkdir /gluster
# mount -t glusterfs 172.30.5.4:/gv0 /gluster
# df -h
172.30.5.4:/gv0 80G 4.1G 76G 6% /gluster
Oh no... the client is not what I expected at all.
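What df shows does make sense for a replica-2 volume: usable capacity equals a single brick, since each replica holds a full copy, so 80G here is just the size of the filesystem backing /data/gluster. Comparing the brick filesystems on both servers shows this (the smaller brick caps the volume):
# df -h /data/gluster # run on both 172.30.5.4 and 172.30.5.18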
Delete the volume on .4:
# gluster volume stop gv0
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: gv0: success
# gluster volume delete gv0
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: gv0: success
Redo it on .4:
# gluster volume create gv0 replica 2 172.30.5.4:/home/gluster 172.30.5.18:/home/gluster force
volume create: gv0: success: please start the volume to access data
# gluster volume start gv0
volume start: gv0: success
# gluster volume info
Volume Name: gv0
Type: Replicate
Volume ID: e28cf751-38db-4081-a686-dc218959de97
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 172.30.5.4:/home/gluster
Brick2: 172.30.5.18:/home/gluster
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
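Besides volume info, volume status shows whether each brick process is actually online and which ports it uses (standard gluster CLI):
# gluster volume status gv0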
----------------------------------------------------------- the mount above yielded only 4.5T ----------------------------------------
Remove that volume, then put all four machines into one storage pool. With replica 2 and four bricks, gluster pairs consecutive bricks into replica sets, so listing them in the order s1 s2 s3 s4 gives exactly the required 4-17 and 18-19 mirrors.
# gluster volume create dr-volume replica 2 s1:/home/data_fluster s2:/home/data_fluster s3:/home/data_fluster s4:/home/data_fluster
volume create: dr-volume: success: please start the volume to access data
# gluster volume info
Volume Name: dr-volume
Type: Distributed-Replicate
Volume ID: 578babc5-bd40-45d7-867b-b21fd970be3f
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: s1:/home/data_fluster
Brick2: s2:/home/data_fluster
Brick3: s3:/home/data_fluster
Brick4: s4:/home/data_fluster
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
route add default gw 172.30.5.1
Mount on each of the four clients:
4 mount -t glusterfs 172.30.5.4:/dr-volume /gluster/
17 mount -t glusterfs 172.30.5.4:/dr-volume /gluster_data
18 mount -t glusterfs 172.30.5.18:/dr-volume /gluster_data
19 mount -t glusterfs 172.30.5.18:/dr-volume /gluster_data
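A quick sanity check of the replica layout (a sketch; testfile is just a throwaway name): write through a client mount and the file should show up on both bricks of whichever replica pair it hashes to, s1/s2 or s3/s4:
on client .17: # echo hello > /gluster_data/testfile
on the servers: # ls -l /home/data_fluster/testfile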
Enable at boot: chkconfig glusterd on
Concurrency testing:
http://blog.csdn.net/qiuhan0314/article/details/39672877
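A minimal hand-rolled test (a sketch, not the tool from the link above): run several dd writers in parallel from one client and watch the aggregate throughput:
# for i in 1 2 3 4; do dd if=/dev/zero of=/gluster_data/ddtest.$i bs=1M count=1024 & done; wait
# rm /gluster_data/ddtest.*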
Automatic mounting:
# cat /etc/fstab
UUID=70af5fe1-a9b4-408e-9b81-6c34048e5a10 swap swap defaults 0 0
UUID=b560683d-0afb-45fe-a86a-359c6c0ae104 / xfs defaults 1 1
UUID=5715b418-7bcb-4a37-8b8c-901769a5b3be /home xfs defaults 1 2
172.30.5.4:/dr-volume /gluster/ glusterfs defaults,_netdev 0 0
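The fstab entry can be validated without a reboot (standard mount usage):
# umount /gluster
# mount -a # mounts everything in fstab; an error here means the entry is wrong
# df -h | grep dr-volume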