
RHCS

Basic configuration:

172.25.44.250             physical host [rhel7.2]

172.25.44.1      server1.example.com(node1)[rhel6.5]

172.25.44.2      server2.example.com(node2)[rhel6.5]

172.25.44.3      server3.example.com(luci)[rhel6.5]

1. Install the required base services:

node1 node2:

# yum install -y ricci

# /etc/init.d/ricci start                  start ricci

# chkconfig ricci on                       enable at boot
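The original notes skip this, but on RHEL 6 luci authenticates to each node through ricci using the ricci user's password, so set one on node1 and node2 before adding them to the cluster:

# passwd ricci                             set a password for the ricci user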

Luci:

# yum install -y luci

# /etc/init.d/luci start                   start luci

# chkconfig luci on                        enable at boot

2. Access port 8084 of luci with a browser:
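For example, since server3 runs luci here, the management page is reached at the address below (luci serves HTTPS with a self-signed certificate; log in with server3's root credentials):

https://172.25.44.3:8084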

3. Basic configuration on the cluster nodes:
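The notes do not spell this step out; it normally covers time sync, disabling iptables/SELinux/NetworkManager, and host name resolution. A minimal /etc/hosts sketch for this environment (the short aliases are optional additions):

172.25.44.1    server1.example.com    server1
172.25.44.2    server2.example.com    server2
172.25.44.3    server3.example.com    server3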

4. Configure external fencing (on the physical host)

# yum install -y fence-virtd-multicast fence-virtd-libvirt

# systemctl start fence_virtd              start the fence daemon

# mkdir /etc/cluster/

# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1      create the key file

# fence_virtd -c

Module search path [/usr/lib64/fence-virt]:

Available backends:

libvirt 0.1

Available listeners:

multicast 1.2

Listener modules are responsible for accepting requests from fencing clients.

Listener module [multicast]:

The multicast listener module is designed for use environments where the guests and hosts may communicate over a network using multicast.

The multicast address is the address that a client will use to send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:

Using ipv4 as family.

Multicast IP Port [1229]:

Setting a preferred interface causes fence_virtd to listen only on that interface.  Normally, it listens on all interfaces.

In environments where the virtual machines are using the host machine as a gateway, this *must* be set (typically to virbr0).  Set to 'none' for no interface.

Interface [virbr0]:

The key file is the shared key information which is used to authenticate fencing requests.  The contents of this file must be distributed to each physical host and virtual machine within a cluster.

Key File [/etc/cluster/fence_xvm.key]:

Backend modules are responsible for routing requests to the appropriate hypervisor or management layer.

Backend module [libvirt]:

Configuration complete.

=== Begin Configuration ===

fence_virtd {
    listener = "multicast";
    backend = "libvirt";
    module_path = "/usr/lib64/fence-virt";
}

listeners {
    multicast {
        key_file = "/etc/cluster/fence_xvm.key";
        address = "225.0.0.12";
        interface = "virbr0";
        family = "ipv4";
        port = "1229";
    }
}

backends {
    libvirt {
        uri = "qemu:///system";
    }
}

=== End Configuration ===

Replace /etc/fence_virt.conf with the above [y/N]? Y
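After accepting the configuration, restart the daemon so the new /etc/fence_virt.conf is actually used; enabling it at boot is an extra step not shown in the original notes:

# systemctl restart fence_virtd
# systemctl enable fence_virtd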

# scp /etc/cluster/fence_xvm.key root@172.25.44.1:/etc/cluster/      send the key to node1

# scp /etc/cluster/fence_xvm.key root@172.25.44.2:/etc/cluster/      send the key to node2

In the luci web UI, add the UUIDs of node1 and node2 to the fence_xvm fence device; a quick fencing test follows below.
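A quick sanity check that fencing works — a sketch, run from node1; the second command should power-cycle node2:

# fence_xvm -o list                        list the domains/UUIDs the fence host answers for
# fence_node server2.example.com           fence node2 through the configured fence device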

5. Add Apache:

1) Configure the cluster:

2) Configure the resources the service needs:

【1】VIP

【2】HTTPD

3) Create the service group

On the cluster nodes (node1 and node2):

# yum install -y httpd                     install the test service

# vim /var/www/html/index.html             create a test page; each node's page contains its own hostname

『server1.example.com』

『server2.example.com』
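Once the VIP and httpd resources are grouped into a service in luci (the service is called httpd later in these notes), its state can be checked from either node — a short sketch:

# clustat                                  show cluster members and where the httpd service runs
# clusvcadm -e httpd                       start the service if it is still disabled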

6. Testing

# /etc/init.d/httpd stop                   stop the service (on server1 or server2)

Observed result:

The web page served on the VIP switches over to the other node.

# echo c > /proc/sysrq-trigger             crash the kernel

Observed result:

The web page on the VIP fails over to the other node, and the hung virtual machine is fenced and rebooted.
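To watch the failover from the physical host, something like the following works; 172.25.44.100 is only a placeholder for whatever VIP was configured in step 5:

# watch -n 1 curl -s http://172.25.44.100          the returned page shows which node is answering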

7. Storage server (server3)

Add a virtual disk to server3 (it shows up as /dev/vdb below).

# yum install -y scsi-*                    install the iSCSI target packages

# vim /etc/tgt/targets.conf                edit the configuration file

<target iqn.2017-o2.com.example:server.target1>
    backing-store /dev/vdb
    initiator-address 172.25.17.10
    initiator-address 172.25.17.11
</target>

# /etc/init.d/tgtd start                   start the service
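Not in the original notes, but to keep the target available after a reboot:

# chkconfig tgtd on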

# tgt-admin -s                             show all targets and the initiators (remote clients) allowed to use them

Target 1: iqn.2017-o2.com.example:server.target1

System information:

Driver: iscsi

State: ready

I_T nexus information:

LUN information:

LUN: 0

Type: controller

SCSI ID: IET     00010000

SCSI SN: beaf10

Size: 0 MB, Block size: 1

Online: Yes

Removable media: No

Prevent removal: No

Readonly: No

Backing store type: null

Backing store path: None

Backing store flags:

LUN: 1

Type: disk

SCSI ID: IET     00010001

SCSI SN: beaf11

Size: 8590 MB, Block size: 512

Online: Yes

Removable media: No

Prevent removal: No

Readonly: No

Backing store type: rdwr

Backing store path: /dev/vdb

Backing store flags:

Account information:

ACL information:

172.25.17.10

172.25.17.11

On the clients (server1 and server2):

# yum install -y iscsi-*                   install the iSCSI initiator packages

# iscsiadm -m discovery -t st -p 172.25.17.20      discover the remote storage

Starting iscsid:                                          [  OK  ]

172.25.17.20:3260,1 iqn.2017-o2.com.example:server.target1

# iscsiadm -m node -l                      log in to all discovered targets

Logging in to [iface: default, target: iqn.2017-o2.com.example:server.target1, portal: 172.25.17.20,3260] (multiple)

Login to [iface: default, target: iqn.2017-o2.com.example:server.target1, portal: 172.25.17.20,3260] successful.
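With the default node.startup=automatic, enabling the initiator services brings the disk back on its own after a reboot (not shown in the original notes):

# chkconfig iscsid on
# chkconfig iscsi on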

# fdisk -l                                 the 8 GB iSCSI disk appears as /dev/sda

Disk /dev/vda: 21.5 GB, 21474836480 bytes
16 heads, 63 sectors/track, 41610 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0008e924

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *          3        1018      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/vda2            1018       41611    20458496   8e  Linux LVM
Partition 2 does not end on cylinder boundary.

Disk /dev/mapper/VolGroup-lv_root: 19.9 GB, 19906166784 bytes
255 heads, 63 sectors/track, 2420 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup-lv_swap: 1040 MB, 1040187392 bytes
255 heads, 63 sectors/track, 126 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sda: 8589 MB, 8589934592 bytes
64 heads, 32 sectors/track, 8192 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

『server1 only』

# fdisk -cu /dev/sda                       partition the disk

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x76bc3334.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n

Command action

e   extended

p   primary partition (1-4)

p

Partition number (1-4): 1

First sector (2048-16777215, default 2048):

Using default value 2048

Last sector, +sectors or +size{K,M,G} (2048-16777215, default 16777215):

Using default value 16777215

Command (m for help): t

Selected partition 1

Hex code (type L to list codes): 8e

Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/sda: 8589 MB, 8589934592 bytes
64 heads, 32 sectors/track, 8192 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x76bc3334

Device Boot      Start         End     Blocks   Id  System

/dev/sda1            2048    16777215    8387584   8e  Linux LVM

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

On both clients (server1 and server2):

# fdisk -l                                 check that both nodes see the same partition

Device Boot      Start         End      Blocks  Id  System

/dev/sda1               2        8192    8387584   8e  Linux LVM
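If the second node does not show /dev/sda1 yet, re-reading the partition table (or logging the iSCSI session out and back in) usually sorts it out — a sketch:

# partprobe /dev/sda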

# pvcreate /dev/sda1                       【server1】 create the PV

Physical volume "/dev/sda1" successfully created

# pvs                                      verify

PV         VG       Fmt Attr PSize  PFree

/dev/sda1           lvm2 a--   8.00g 8.00g

/dev/vda2  VolGroup lvm2 a--  19.51g   0

# vgcreate clustervg /dev/sda1             【server1】 create the VG

Clustered volume group "clustervg" successfully created

# vgs                                      verify

VG        #PV #LV #SN Attr   VSize  VFree
VolGroup    1   2   0 wz--n- 19.51g     0
clustervg   1   0   0 wz--nc  8.00g  8.00g
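The lowercase 'c' in the clustervg attributes (wz--nc) marks it as a clustered VG, which requires clustered LVM locking on both nodes. If vgcreate refuses with a locking error, the usual fix is (a sketch, run on server1 and server2):

# lvmconf --enable-cluster                 sets locking_type = 3 in /etc/lvm/lvm.conf
# /etc/init.d/clvmd start
# chkconfig clvmd on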

# lvcreate -L 4G -n demo clustervg         【server1】 create the LV

Logical volume "demo" created

# lvs                                      verify

LV      VG        Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
lv_root VolGroup  -wi-ao----  18.54g
lv_swap VolGroup  -wi-ao---- 992.00m
demo    clustervg -wi-a-----   4.00g

# mkfs.ext4 /dev/clustervg/demo            【server1】 format the LV

mke2fs 1.41.12 (17-May-2010)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=0 blocks, Stripe width=0 blocks

262144 inodes, 1048576 blocks

52428 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=1073741824

32 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

# mount /dev/clustervg/demo /mnt           mount 【server1】

# cd /mnt/

# vim index.html                           create a test file 【server1】

# umount /mnt/                             unmount 【server1】

# clusvcadm -d httpd                       disable the service 【server1】

# clusvcadm -e httpd                       enable the service 【server1】

# clusvcadm -r httpd -m server2.example.com        relocate the service from server1 to server2

# lvextend -L +2G /dev/clustervg/demo      extend the LV 【server2】

# resize2fs /dev/clustervg/demo            grow the ext4 filesystem 【server2】

# clusvcadm -d httpd                       disable the service 【server2】

# lvremove /dev/clustervg/demo             remove the LV 【server2】

# lvcreate -L 2G -n demo clustervg         recreate the LV 【server2】

# mkfs.gfs2 -p lock_dlm -j 3 -t haha:mygfs2 /dev/clustervg/demo    reformat as GFS2; -t is clustername:fsname, -j is the number of journals 【server2】

# mount /dev/clustervg/demo /mnt           mount 【server1 & server2】

# vim /mnt/index.html                      write a test page 【server1 or server2】

# umount /mnt/                             unmount 【server1 & server2】

# vim /etc/fstab                           mount automatically at boot 【server1 & server2】

/dev/clustervg/demo     /var/www/html   gfs2   _netdev 0 0

# mount -a                                 mount everything in fstab 【server1 & server2】

# df -h                                    verify 【server1 & server2】

Filesystem                    Size  Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root   19G 1.1G   17G   6% /

tmpfs                         499M   32M 468M   7% /dev/shm

/dev/vda1                     485M   33M 427M   8% /boot

/dev/mapper/clustervg-demo    2.0G 388M  1.7G  19% /var/www/html

# clusvcadm -e httpd                       enable the service 【server2】

# lvextend -L +5G /dev/clustervg/demo      extend the LV 【server2】

# gfs2_grow /dev/clustervg/demo            grow the GFS2 filesystem online 【server2】

# gfs2_tool journals /dev/clustervg/demo   show the number of journals 【server2】

journal2 - 128MB

journal1 - 128MB

journal0 - 128MB

3 journal(s) found.

# gfs2_jadd -j 3 /dev/clustervg/demo       add three more journals 【server2】

Filesystem:            /var/www/html

Old Journals           3

New Journals           6

# gfs2_tool journals /dev/clustervg/demo   verify

journal2 - 128MB

journal3 - 128MB

journal1 - 128MB

journal5 - 128MB

journal4 - 128MB

journal0 - 128MB

6 journal(s) found.

# gfs2_tool sb /dev/clustervg/demo table haha:mygfs2      change the lock table name (only with the filesystem unmounted) 【server1 & server2】

You shouldn't change any of these values if the filesystem is mounted.

Are you sure? [y/n] y

current lock table name = "haha:mygfs2"

new lock table name = "haha:mygfs2"

Done

# gfs2_tool sb /dev/clustervg/demo all     show all superblock fields 【server1 & server2】

mh_magic = 0x01161970

mh_type = 1

mh_format = 100

sb_fs_format = 1801

sb_multihost_format = 1900

sb_bsize = 4096

sb_bsize_shift = 12

no_formal_ino = 2

no_addr = 23

no_formal_ino = 1

no_addr = 22

sb_lockproto = lock_dlm

sb_locktable = haha:mygfs2

uuid = ef017251-3b40-8c81-37ce-523de9cf40e6
