34补3-4: RHCS with GFS2 and CLVM

04 RHCS with GFS2 and CLVM

Creating a high-availability cluster with shared storage

# On the target server node4 (192.168.1.154): install and configure the iSCSI target

[root@node4 ~]# yum -y install scsi-target-utils

[root@node4 ~]# vim /etc/tgt/targets.conf

Append at the end of the file:

<target iqn.2015-01.com.magedu:node4.t1>
    backing-store /dev/sda4
    initiator-address 192.168.1.0/24
</target>

[root@node4 ~]# fdisk /dev/sda      # create sda4, a 50G partition

[root@node4 ~]# partx -a /dev/sda

[root@node4 ~]# service tgtd start

[root@node4 ~]# tgtadm -L iscsi -m target -o show
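The `initiator-address` directive above already restricts access to the 192.168.1.0/24 subnet. If stronger access control is wanted, tgt's targets.conf also supports CHAP credentials via `incominguser`. A sketch of the extended stanza (the username and password are placeholders, not part of the original setup):

```
<target iqn.2015-01.com.magedu:node4.t1>
    backing-store /dev/sda4
    initiator-address 192.168.1.0/24
    incominguser iscsiuser iscsipass
</target>
```

With CHAP enabled, each initiator would also need matching `node.session.auth.*` settings in its /etc/iscsi/iscsid.conf.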

# Install the initiator software on all three nodes

[root@node1 ~]# yum -y install iscsi-initiator-utils; echo "InitiatorName=`iscsi-iname -p iqn.2015-01.com.magedu`" > /etc/iscsi/initiatorname.iscsi

[root@node2 ~]# yum -y install iscsi-initiator-utils; echo "InitiatorName=`iscsi-iname -p iqn.2015-01.com.magedu`" > /etc/iscsi/initiatorname.iscsi

[root@node3 ~]# yum -y install iscsi-initiator-utils; echo "InitiatorName=`iscsi-iname -p iqn.2015-01.com.magedu`" > /etc/iscsi/initiatorname.iscsi
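The commands above give every node a unique initiator name: `iscsi-iname -p` prints the supplied prefix followed by a colon and a random suffix. As a rough sketch of that shape in plain shell (not using `iscsi-iname` itself, which only ships with iscsi-initiator-utils; the hex suffix format is illustrative):

```shell
# Build a name of the same shape as `iscsi-iname -p iqn.2015-01.com.magedu`:
# the prefix, a colon, then a random suffix unique to this node.
prefix="iqn.2015-01.com.magedu"
suffix=$(od -An -N6 -tx1 /dev/urandom | tr -d ' \n')   # 6 random bytes as hex
line="InitiatorName=${prefix}:${suffix}"
echo "$line"
```

The key point is that the part before the colon must be a stable, registered-style IQN prefix, while the part after it only has to be unique per initiator.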

# Start the services on all three nodes

[root@node1 ~]# service iscsi start; service iscsid start

[root@node2 ~]# service iscsi start; service iscsid start

[root@node3 ~]# service iscsi start; service iscsid start

# Discover the storage device

[root@node1 ~]# iscsiadm -m discovery -t st -p 192.168.1.154

[root@node2 ~]# iscsiadm -m discovery -t st -p 192.168.1.154

[root@node3 ~]# iscsiadm -m discovery -t st -p 192.168.1.154

# Log in to the target

[root@node1 ~]# iscsiadm -m node -T iqn.2015-01.com.magedu:node4.t1 -p 192.168.1.154 -l

[root@node2 ~]# iscsiadm -m node -T iqn.2015-01.com.magedu:node4.t1 -p 192.168.1.154 -l

[root@node3 ~]# iscsiadm -m node -T iqn.2015-01.com.magedu:node4.t1 -p 192.168.1.154 -l
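Whether these sessions come back automatically after a reboot depends on the startup mode recorded for the node during discovery, which is controlled by /etc/iscsi/iscsid.conf. The relevant setting (this is the packaged default on RHEL 6, shown here only for completeness):

```
# /etc/iscsi/iscsid.conf (excerpt): log back in to recorded
# nodes when the iscsi service starts
node.startup = automatic
```

If it were set to `manual`, the `iscsiadm ... -l` login above would have to be repeated after every reboot.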

# Install GFS2 on all three nodes

[root@node1 ~]# yum -y install gfs2-utils

[root@node2 ~]# yum -y install gfs2-utils

[root@node3 ~]# yum -y install gfs2-utils

Load the gfs2 kernel module:

[root@node1 ~]# modprobe gfs2

[root@node1 ~]# lsmod | grep gfs2

gfs2                  548432  0 

dlm                   148231  22 gfs2

[root@node1 ~]# fdisk /dev/sdb      # create two 20G partitions

[root@node1 ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 53.7 GB, 53691549696 bytes

64 heads, 32 sectors/track, 51204 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0xabd7f733

  Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1       20481    20972528   83  Linux

/dev/sdb2           20482       40962    20972544   83  Linux
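fdisk reports partition sizes in 1 KiB blocks, so the block counts above correspond to roughly 20 GiB per partition. A quick check with the numbers from the listing:

```shell
# /dev/sdb1 above is reported as 20972528 one-KiB blocks.
blocks=20972528
gib=$((blocks / 1024 / 1024))    # KiB -> MiB -> GiB, integer division
echo "/dev/sdb1 is about ${gib} GiB"
```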

# Create the GFS2 cluster filesystem

[root@node1 ~]# mkfs.gfs2 -j 2 -p lock_dlm -t tcluster:sdb1 /dev/sdb1

This will destroy any data on /dev/sdb1.

It appears to contain: data

Are you sure you want to proceed? [y/n] y

Device:                    /dev/sdb1

Blocksize:                 4096

Device Size                20.00 GB (5243132 blocks)

Filesystem Size:           20.00 GB (5243131 blocks)

Journals:                  2

Resource Groups:           81

Locking Protocol:          "lock_dlm"

Lock Table:                "tcluster:sdb1"

UUID:                      aebcc094-7b50-3df9-da3c-537894310e47
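The `-t tcluster:sdb1` lock table has two parts: the cluster name and a unique filesystem name. The cluster name must match the `name` attribute in /etc/cluster/cluster.conf on every node, or the mount will be refused. A minimal illustrative fragment (node names and config_version are assumptions for this setup, not taken from the original):

```
<cluster name="tcluster" config_version="1">
    <clusternodes>
        <clusternode name="node1" nodeid="1"/>
        <clusternode name="node2" nodeid="2"/>
        <clusternode name="node3" nodeid="3"/>
    </clusternodes>
</cluster>
```

The second half of the lock table (`sdb1` here) only needs to be unique among the GFS2 filesystems in the same cluster.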

[root@node1 ~]# tunegfs2 /dev/sdb1

[root@node1 ~]# tunegfs2 -l /dev/sdb1

tunegfs2 (May 11 2016 09:59:26)

Filesystem volume name: tcluster:sdb1

Filesystem UUID: aebcc094-7b50-3df9-da3c-537894310e47

Filesystem magic number: 0x1161970

Block size: 4096

Block shift: 12

Root inode: 22

Master inode: 23

Lock Protocol: lock_dlm

Lock table: tcluster:sdb1

# Mount the GFS2 partition on node1

[root@node1 ~]# mkdir -p /cluster/data

[root@node1 ~]# mount -t gfs2 /dev/sdb1 /cluster/data/

[root@node1 ~]# mount

/dev/sda2 on / type ext4 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

tmpfs on /dev/shm type tmpfs (rw)

/dev/sda1 on /boot type ext4 (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

none on /sys/kernel/config type configfs (rw)

/dev/sdb1 on /cluster/data type gfs2 (rw,relatime,hostdata=jid=0)

# Show the journal areas

[root@node1 ~]# gfs2_tool journals /dev/sdb1

journal1 - 128MB

journal0 - 128MB

2 journal(s) found.
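Each journal reserves real capacity out of the device, which is why the filesystem size reported by mkfs.gfs2 is smaller than the device size. The overhead for this filesystem, using the numbers shown above:

```shell
# Journal overhead: every journal reserves space out of the device.
journals=2       # created by mkfs.gfs2 -j 2
journal_mb=128   # default journal size reported by gfs2_tool journals
overhead_mb=$((journals * journal_mb))
echo "${overhead_mb} MB reserved for journals"   # prints "256 MB reserved for journals"
```

This matters when sizing the device: adding journals later with gfs2_jadd consumes free space inside the filesystem.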

# Mount the GFS2 partition on node2

[root@node2 ~]# mkdir -p /cluster/data

[root@node2 ~]# partx -a /dev/sdb

[root@node2 ~]# mount -t gfs2 /dev/sdb1 /cluster/data/

[root@node2 ~]# mount

/dev/sda2 on / type ext4 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

tmpfs on /dev/shm type tmpfs (rw)

/dev/sda1 on /boot type ext4 (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

none on /sys/kernel/config type configfs (rw)

/dev/sdb1 on /cluster/data type gfs2 (rw,relatime,hostdata=jid=1)

# Test 1: copy a file into the filesystem on node2

[root@node2 ~]# cd /cluster/data/

[root@node2 data]# cp /etc/fstab .

[root@node1 ~]# cd /cluster/data/

[root@node1 data]# ls

fstab

Result: a file added to the GFS2 partition on node2 is immediately visible on node1.

# Test 2: on node1, delete part of the file that node2 copied in

[root@node1 data]# vim fstab

Delete the last four lines.

Viewing the file on node2 confirms that its last four lines are gone.

# Mount the GFS2 partition on node3

[root@node3 ~]# mkdir -p /cluster/data

[root@node3 ~]# partx -a /dev/sdb

[root@node3 ~]# mount -t gfs2 /dev/sdb1 /cluster/data/

Too many nodes mounting filesystem, no free journals

# The mount fails: there is no free journal left for a third node

# Fix: add a journal from a node that already has the filesystem mounted

[root@node1 data]# gfs2_jadd -j 1 /dev/sdb1

Filesystem:            /cluster/data

Old Journals           2

New Journals           3
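The rule behind the error is simple: every node that mounts the filesystem concurrently needs its own journal. A sketch of the check and fix, with the journal and node counts from this walkthrough:

```shell
# One journal is required per concurrently mounting node.
journals=2   # created by mkfs.gfs2 -j 2
nodes=3      # node1, node2, node3 all want the mount
if [ "$nodes" -gt "$journals" ]; then
    add=$((nodes - journals))
    cmd="gfs2_jadd -j $add /dev/sdb1"
    echo "$cmd"   # prints "gfs2_jadd -j 1 /dev/sdb1"
fi
```

gfs2_jadd works online, but must be run against a mounted filesystem and needs free space inside it for the new journal.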

# Mount again on node3

[root@node3 ~]# mount -t gfs2 /dev/sdb1 /cluster/data/

[root@node3 ~]# mount

/dev/sda2 on / type ext4 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

tmpfs on /dev/shm type tmpfs (rw)

/dev/sda1 on /boot type ext4 (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

none on /sys/kernel/config type configfs (rw)

/dev/sdb1 on /cluster/data type gfs2 (rw,relatime,hostdata=jid=2)

# Mount succeeds

# Freeze the GFS2 filesystem

[root@node2 ~]# gfs2_tool freeze /cluster/data/

[root@node2 ~]# mount

/dev/sda2 on / type ext4 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

tmpfs on /dev/shm type tmpfs (rw)

/dev/sda1 on /boot type ext4 (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

none on /sys/kernel/config type configfs (rw)

/dev/sdb1 on /cluster/data type gfs2 (rw,relatime,hostdata=jid=1)

# While frozen, the node can still read the filesystem but cannot write to it

# Unfreeze the frozen filesystem

[root@node2 ~]# gfs2_tool unfreeze /cluster/data/

# Install clustered LVM on all three nodes

[root@node1 data]# yum -y install lvm2-cluster

[root@node2 data]# yum -y install lvm2-cluster

[root@node3 data]# yum -y install lvm2-cluster

# Enable the clustering feature of LVM

[root@node1 data]# lvmconf --enable-cluster

[root@node2 data]# lvmconf --enable-cluster

[root@node3 data]# lvmconf --enable-cluster
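On RHEL 6, `lvmconf --enable-cluster` edits /etc/lvm/lvm.conf, switching `locking_type` from the default of 1 (local file locking) to 3 (clustered DLM locking via clvmd). A sketch of the equivalent edit, performed on a throwaway copy of the file rather than the real lvm.conf:

```shell
# Demonstrate what --enable-cluster changes, using a temporary file.
conf=$(mktemp)
echo '    locking_type = 1' > "$conf"          # local locking (default)
sed -i 's/locking_type = 1/locking_type = 3/' "$conf"   # clustered locking
result=$(grep -o 'locking_type = 3' "$conf")
echo "$result"
rm -f "$conf"
```

With `locking_type = 3`, LVM metadata operations go through clvmd, which is why the clvmd service must be started next.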

# Start the clustered LVM service on all three nodes

[root@node1 ~]# service clvmd start

[root@node2 ~]# service clvmd start

[root@node3 ~]# service clvmd start

[root@node1 data]# fdisk /dev/sdb      # change /dev/sdb2's type to 8e (Linux LVM)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to

switch off the mode (command 'c') and change display units to

sectors (command 'u').

Command (m for help): t

Partition number (1-4): 2

Hex code (type L to list codes): 8e

Changed system type of partition 2 to 8e (Linux LVM)

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

The kernel still uses the old table. The new table will be used at

the next reboot or after you run partprobe(8) or kpartx(8)

Syncing disks.

[root@node1 data]# partx -a /dev/sdb

[root@node1 data]# pvcreate /dev/sdb2

[root@node1 data]# vgcreate cvg /dev/sdb2

[root@node1 data]# lvcreate -L 10G -n clv cvg

[root@node1 data]# mkfs.gfs2 -j 3 -t tcluster:clv -p lock_dlm /dev/cvg/clv

This will destroy any data on /dev/cvg/clv.

It appears to contain: symbolic link to `../dm-0'

Are you sure you want to proceed? [y/n] y

Device:                    /dev/cvg/clv

Blocksize:                 4096

Device Size                10.00 GB (2621440 blocks)

Filesystem Size:           10.00 GB (2621438 blocks)

Journals:                  3

Resource Groups:           40

Locking Protocol:          "lock_dlm"

Lock Table:                "tcluster:clv"

UUID:                      1c42a8f1-5d14-5982-891f-3ce0faaa2123

[root@node3 ~]# mount -t gfs2 /dev/cvg/clv /mnt/

[root@node3 ~]# mount

/dev/sda2 on / type ext4 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

tmpfs on /dev/shm type tmpfs (rw)

/dev/sda1 on /boot type ext4 (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

none on /sys/kernel/config type configfs (rw)

/dev/sdb1 on /cluster/data type gfs2 (rw,relatime,hostdata=jid=2)

/dev/mapper/cvg-clv on /mnt type gfs2 (rw,relatime,hostdata=jid=0)
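To make these mounts survive a reboot, entries can go in /etc/fstab and be picked up by the gfs2 init script on RHEL 6 (`chkconfig gfs2 on`). An illustrative fragment for the two mount points from this walkthrough (`noatime` is an optional performance choice, not something the original configured):

```
# /etc/fstab (excerpt) - GFS2 mounts handled by the gfs2 init script
/dev/sdb1      /cluster/data  gfs2  defaults,noatime  0 0
/dev/cvg/clv   /mnt           gfs2  defaults,noatime  0 0
```

The cluster stack (cman, clvmd) must be up before these mounts are attempted, which the init-script ordering handles.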

# Extend the clustered logical volume

# 1) Extend the volume's physical boundary

[root@node1 ~]# lvextend -L +5G /dev/cvg/clv

# 2) Extend the filesystem's logical boundary

[root@node1 ~]# gfs2_grow /dev/cvg/clv
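The two steps are distinct: `lvextend` grows only the block device, and `gfs2_grow` (run on a node where the filesystem is mounted) then expands the filesystem into the new space. A quick check with the sizes used above:

```shell
# Growing a clustered LV is two steps: the device, then the filesystem.
lv_gb=10       # original size from lvcreate -L 10G
extend_gb=5    # added by lvextend -L +5G
new_gb=$((lv_gb + extend_gb))
echo "LV is now ${new_gb}G; run gfs2_grow on a node with it mounted"
```

Until gfs2_grow runs, the extra 5G exists on the device but is invisible to the filesystem.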

Posted: 2024-08-13 14:35:20
