RHEL6 Pacemaker + DRBD Configuration Guide

System environment: RHEL6 x86_64, with iptables and SELinux disabled

Hosts: 192.168.122.119 server19.example.com

192.168.122.25 server25.example.com (note: the nodes' clocks must be synchronized)

192.168.122.1 desktop36.example.com

Required package: drbd-8.4.3.tar.gz

Yum repository configuration:

[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=ftp://192.168.122.1/pub/yum
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[HighAvailability]
name=Instructor Server Repository
baseurl=ftp://192.168.122.1/pub/yum/HighAvailability
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
enabled=1

[LoadBalancer]
name=Instructor Server Repository
baseurl=ftp://192.168.122.1/pub/yum/LoadBalancer
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
enabled=1

[ResilientStorage]
name=Instructor Server Repository
baseurl=ftp://192.168.122.1/pub/yum/ResilientStorage
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
enabled=1

[ScalableFileSystem]
name=Instructor Server Repository
baseurl=ftp://192.168.122.1/pub/yum/ScalableFileSystem
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
enabled=1
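After writing this repo file under /etc/yum.repos.d/ on both nodes (the file name itself is your choice), a quick sanity check that all five repositories resolve:

[root@server19 ~]# yum clean all

[root@server19 ~]# yum repolist    (should list rhel-source, HighAvailability, LoadBalancer, ResilientStorage and ScalableFileSystem)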

#Configuring pacemaker

Perform the following step on both server19 and server25:

[root@server19 ~]# yum install corosync pacemaker -y

Perform the following steps on both server19 and server25:

[root@server19 ~]# cd /etc/corosync/

[root@server19 corosync]# corosync-keygen    (generating the key requires continuous keyboard input for entropy)

[root@server19 corosync]# cp corosync.conf.example corosync.conf

[root@server19 corosync]# vim corosync.conf

# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.122.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
                ttl: 1
        }
}

logging {
        fileline: off
        to_stderr: yes
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}

service {
        ver: 0
        name: pacemaker
        use_mgmtd: yes
}

[root@server19 corosync]# scp corosync.conf authkey root@192.168.122.25:/etc/corosync/

Perform the following step on both server19 and server25:

[root@server19 corosync]# /etc/init.d/corosync start

Tailing the log at this point with tail -f /var/log/cluster/corosync.log shows the following error:

Jul 27 02:31:31 [1461] server19.example.com pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.

The fix is as follows:

[root@server19 corosync]# crm    (note: from RHEL 6.4 onwards the crm command is no longer shipped in the pacemaker package; install crmsh separately)

crm(live)# configure

crm(live)configure# property stonith-enabled=false

crm(live)configure# commit

crm(live)configure# quit

[root@server19 corosync]# crm_verify -L    (checks the configuration for errors)

Now run crm_mon to enter the monitoring view; if both hosts show as Online, the configuration succeeded.
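As an additional non-interactive check (a minimal sketch; both commands ship with the packages installed above):

[root@server19 ~]# corosync-cfgtool -s    (ring 0 should report "no faults")

[root@server19 ~]# crm_mon -1    (one-shot status output instead of the interactive monitor)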

The following configuration only needs to be done on one node; all changes replicate automatically to the other node.

#Adding a virtual IP

[root@server19 corosync]# crm

crm(live)# configure

crm(live)configure# primitive vip ocf:heartbeat:IPaddr2 params ip=192.168.122.178 cidr_netmask=32 op monitor interval=30s

crm(live)configure# commit

crm(live)configure# quit
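To confirm the VIP is actually bound on the active node (a quick check; the interface name eth0 is an assumption for this environment):

[root@server19 ~]# ip addr show eth0 | grep 192.168.122.178    (the VIP should appear as a secondary address)

[root@server19 ~]# crm status    (vip should be reported as Started)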

#Ignoring quorum checks

In a two-node cluster, losing one node also loses quorum, and the default policy would stop all resources; setting the policy to ignore lets the surviving node keep the services running.

[root@server19 corosync]# crm

crm(live)# configure

crm(live)configure# property no-quorum-policy=ignore

crm(live)configure# commit

crm(live)configure# quit

#Adding the apache service

1. Perform the following steps on both server19 and server25:

[root@server19 corosync]# vim /etc/httpd/conf/httpd.conf

<Location /server-status>

SetHandler server-status

Order deny,allow

Deny from all

Allow from 127.0.0.1

</Location>

[root@server19 corosync]# echo `hostname` > /var/www/html/index.html

2. Perform the following steps on server19 or server25:

[root@server19 corosync]# crm

crm(live)# configure

crm(live)configure# primitive apache ocf:heartbeat:apache params configfile=/etc/httpd/conf/httpd.conf op monitor interval=1min

crm(live)configure# commit

crm(live)configure# quit

Running crm_mon at this point may show vip and apache running on different nodes:

The fix (binding apache and vip together):

[root@server19 corosync]# crm

crm(live)# configure

crm(live)configure# colocation apache-with-vip inf: apache vip

crm(live)configure# commit

crm(live)configure# quit

Now browsing to 192.168.122.178 returns the page served by server19.
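A quick failover test (a sketch using crmsh's standby mode; the curl checks are illustrative):

[root@server19 ~]# curl http://192.168.122.178    (returns server19.example.com)

[root@server19 ~]# crm node standby server19.example.com    (pushes all resources off server19)

[root@server19 ~]# curl http://192.168.122.178    (should now return server25.example.com)

[root@server19 ~]# crm node online server19.example.com    (brings server19 back into the cluster)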

#Configuring the active/standby preference

[root@server19 corosync]# crm

crm(live)# configure

crm(live)configure# location master-node apache 10: server19.example.com

crm(live)configure# commit

crm(live)configure# quit

#Configuring fencing

Perform the following steps on desktop36:

[root@desktop36 ~]# yum list fence*

[root@desktop36 ~]# yum install fence-virtd.x86_64 fence-virtd-libvirt.x86_64 fence-virtd-multicast.x86_64 fence-virt-0.2.3-9.el6.x86_64 -y

[root@desktop36 ~]# fence_virtd -c

Module search path [/usr/lib64/fence-virt]:

Available backends:
        libvirt 0.1
Available listeners:
        multicast 1.1

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]:

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:

Using ipv4 as family.

Multicast IP Port [1229]:

Setting a preferred interface causes fence_virtd to listen only
on that interface. Normally, it listens on the default network
interface. In environments where the virtual machines are
using the host machine as a gateway, this *must* be set
(typically to virbr0).

Set to 'none' for no interface.

Interface [none]: virbr0

The key file is the shared key information which is used to
authenticate fencing requests. The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]:

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [checkpoint]: libvirt

The libvirt backend module is designed for single desktops or
servers. Do not use in environments where virtual machines
may be migrated between hosts.

Libvirt URI [qemu:///system]:

Configuration complete.

=== Begin Configuration ===
backends {
        libvirt {
                uri = "qemu:///system";
        }
}

listeners {
        multicast {
                interface = "virbr0";
                port = "1229";
                family = "ipv4";
                address = "225.0.0.12";
                key_file = "/etc/cluster/fence_xvm.key";
        }
}

fence_virtd {
        module_path = "/usr/lib64/fence-virt";
        backend = "libvirt";
        listener = "multicast";
}

=== End Configuration ===

Replace /etc/fence_virt.conf with the above [y/N]? y

Note: except for "Interface" (enter the interface the VMs use to communicate with the host) and "Backend module" (enter libvirt), every prompt can be accepted at its default by pressing Enter.

[root@desktop36 ~]# mkdir /etc/cluster

[root@desktop36 ~]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1

Perform the following steps on both server19 and server25:

[root@server19 corosync]# mkdir /etc/cluster

[root@server19 corosync]# yum install fence-virt-0.2.3-9.el6.x86_64 -y

Perform the following steps on desktop36:

[root@desktop36 ~]# scp /etc/cluster/fence_xvm.key root@192.168.122.119:/etc/cluster/

[root@desktop36 ~]# scp /etc/cluster/fence_xvm.key root@192.168.122.25:/etc/cluster/

[root@desktop36 ~]# /etc/init.d/fence_virtd start

[root@desktop36 ~]# netstat -anuple | grep fence

udp    0    0 0.0.0.0:1229    0.0.0.0:*    0    823705    6320/fence_virtd

Note: seeing port 1229 listening confirms that fence_virtd started successfully.
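Before wiring fencing into the cluster, it is worth confirming that the nodes can reach fence_virtd (a sketch; vm1 and vm2 are the libvirt domain names assumed by the host map configured below):

[root@server19 ~]# fence_xvm -o list    (should list vm1 and vm2 with their UUIDs and state)

[root@server19 ~]# fence_xvm -H vm2 -o reboot    (optional destructive test: power-cycles the server25 guest)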

Perform the following steps on server19 or server25:

[root@server19 corosync]# crm

crm(live)# configure

crm(live)configure# cib new stonith

crm(stonith)configure# quit

[root@server19 corosync]# crm

crm(live)# configure

crm(live)configure# primitive vmfence stonith:fence_xvm params pcmk_host_map="server19.example.com:vm1 server25.example.com:vm2" op monitor interval=30s

crm(live)configure# property stonith-enabled=true

crm(live)configure# commit

crm(live)configure# quit

Test: cut server19's network, or run echo c > /proc/sysrq-trigger to simulate a kernel crash, then check that the services are taken over and that server19 is power-cycled; see the sketch below.
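A minimal sketch of that crash test (the sysrq write is standard; the crm_mon result described is the behavior a working fencing setup should produce):

[root@server19 ~]# echo c > /proc/sysrq-trigger    (hard-crashes the kernel on server19)

[root@server25 ~]# crm_mon -1    (vip and apache should move to server25; server19 shows offline, then is power-cycled by vmfence and rejoins)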

#Configuring drbd

Add a virtual disk of the same size to each of server19 and server25.

Perform the following steps on both server19 and server25:

[root@server19 kernel]# yum install kernel-devel make -y

[root@server19 kernel]# tar zxf drbd-8.4.3.tar.gz

[root@server19 kernel]# cd drbd-8.4.3

[root@server19 drbd-8.4.3]# ./configure --enable-spec --with-km

At this point the following problems may appear:

(1)configure: error: no acceptable C compiler found in $PATH

(2)configure: error: Cannot build utils without flex, either install flex or pass the --without-utils option.

(3)configure: WARNING: No rpmbuild found, building RPM packages is disabled.

(4)configure: WARNING: Cannot build man pages without xsltproc. You may safely ignore this warning when building from a tarball.

(5)configure: WARNING: Cannot update buildtag without git. You may safely ignore this warning when building from a tarball.

The fixes are as follows:

(1) [root@server19 drbd-8.4.3]# yum install gcc -y

(2) [root@server19 drbd-8.4.3]# yum install flex -y

(3) [root@server19 drbd-8.4.3]# yum install rpm-build -y

(4) [root@server19 drbd-8.4.3]# yum install libxslt -y

(5) [root@server19 drbd-8.4.3]# yum install git -y

[root@server19 kernel]# mkdir -p ~/rpmbuild/SOURCES

[root@server19 kernel]# cp drbd-8.4.3.tar.gz ~/rpmbuild/SOURCES/

[root@server19 drbd-8.4.3]# rpmbuild -bb drbd.spec

[root@server19 drbd-8.4.3]# rpmbuild -bb drbd-km.spec

[root@server19 drbd-8.4.3]# cd ~/rpmbuild/RPMS/x86_64/

[root@server19 x86_64]# rpm -ivh *

[root@server19 x86_64]# scp ~/rpmbuild/RPMS/x86_64/* root@192.168.122.25:/root/kernel/

Perform the following step on server25:

[root@server25 kernel]# rpm -ivh *

Perform the following steps on both server19 and server25:

[root@server19 ~]# fdisk -cu /dev/vda

Create the partition (usually just one) with partition type Linux LVM.

[root@server19 ~]# pvcreate /dev/vda1

[root@server19 ~]# vgcreate koenvg /dev/vda1

[root@server19 ~]# lvcreate -L 1G -n koenlv koenvg

Perform the following steps on both server19 and server25:

[root@server19 ~]# cd /etc/drbd.d/

[root@server19 drbd.d]# vim drbd.res

resource koen {
        meta-disk internal;
        device /dev/drbd1;
        syncer {
                verify-alg sha1;
        }
        net {
                allow-two-primaries;
        }
        on server19.example.com {
                disk /dev/mapper/koenvg-koenlv;
                address 192.168.122.119:7789;
        }
        on server25.example.com {
                disk /dev/mapper/koenvg-koenlv;
                address 192.168.122.25:7789;
        }
}

[root@server19 drbd.d]# scp drbd.res root@192.168.122.25:/etc/drbd.d/
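To validate the resource file before creating metadata (a small sanity check; drbdadm parses the configuration and prints the resource as it understands it):

[root@server19 drbd.d]# drbdadm dump koen    (syntax-checks /etc/drbd.d/drbd.res and prints the parsed resource)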

Perform the following steps on both server19 and server25:

[root@server19 drbd.d]# drbdadm create-md koen

[root@server19 drbd.d]# /etc/init.d/drbd start

Perform the following step on server19:

[root@server19 drbd.d]# drbdsetup /dev/drbd1 primary --force

(This command makes server19 the primary node and starts the initial sync.)

You can run watch cat /proc/drbd to monitor sync progress; once the sync completes, continue with the configuration and create the filesystem.
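If you prefer to block until the initial sync finishes instead of watching, a minimal sketch that polls the ds: field reported in /proc/drbd:

until grep -q 'ds:UpToDate/UpToDate' /proc/drbd; do
        sleep 10    # still syncing; check again in 10 seconds
done
echo "drbd resource koen is in sync"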

[root@server19 drbd.d]# mkfs.ext4 /dev/drbd1

[root@server19 drbd.d]# mount /dev/drbd1 /var/www/html/

Note: /dev/drbd1 must not be mounted on both hosts at the same time. It can only be mounted on the node whose state is primary; the other node's state is secondary at that point.

Test: on server19, mount /dev/drbd1 on /var/www/html/ and edit some files under /var/www/html/. Then unmount it (umount /var/www/html/) and run drbdadm secondary koen. On server25, run drbdadm primary koen to make it the primary node, mount /dev/drbd1 there, and finally check whether the contents of /var/www/html/ have been synchronized (the sketch below repeats this sequence as commands).
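A sketch of that role-switch test (drbd-test.html is an illustrative file name):

[root@server19 ~]# mount /dev/drbd1 /var/www/html/

[root@server19 ~]# echo test > /var/www/html/drbd-test.html

[root@server19 ~]# umount /var/www/html/

[root@server19 ~]# drbdadm secondary koen

[root@server25 ~]# drbdadm primary koen

[root@server25 ~]# mount /dev/drbd1 /var/www/html/

[root@server25 ~]# cat /var/www/html/drbd-test.html    (should print "test")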

Appendix: growing the DRBD device

Perform the following step on both server19 and server25:

[root@server19 ~]# lvextend -L +1000M /dev/mapper/koenvg-koenlv

Perform the following step on both server19 and server25:

[root@server19 ~]# drbdadm resize koen

Perform the following steps on the primary node:

[root@server19 ~]# mount /dev/drbd1 /var/www/html/

[root@server19 ~]# resize2fs /dev/drbd1

#Integrating pacemaker with drbd

Perform the following steps on server19 or server25:

[root@server19 ~]# crm

crm(live)# configure

crm(live)configure# primitive webdata ocf:linbit:drbd params drbd_resource=koen op monitor interval=60s

crm(live)configure# ms webdataclone webdata meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

crm(live)configure# primitive webfs ocf:heartbeat:Filesystem params device="/dev/drbd1" directory="/var/www/html" fstype=ext4

crm(live)configure# group webgroup vip apache webfs

crm(live)configure# colocation apache-on-webdata inf: webgroup webdataclone:Master

crm(live)configure# order apache-after-webdata inf: webdataclone:promote webgroup:start

crm(live)configure# commit

crm(live)configure# quit
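After the commit, the expected end state can be verified as follows (a sketch of checks, not guaranteed output):

[root@server19 ~]# crm_mon -1    (webdataclone should show one Master and one Slave; webgroup with vip, apache and webfs runs on the Master node)

[root@server19 ~]# cat /proc/drbd    (the Master node should report ro:Primary/Secondary)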

Appendix: using iSCSI storage

Perform the following steps on desktop36:

[root@desktop36 ~]# yum install scsi-target-utils.x86_64 -y

[root@desktop36 ~]# vim /etc/tgt/targets.conf

<target iqn.2013-07.com.example:server.target1>

backing-store /dev/vg_desktop36/iscsi-test

initiator-address 192.168.122.112

initiator-address 192.168.122.234

</target>

[root@desktop36 ~]# /etc/init.d/tgtd start

Perform the following steps on both server19 and server25:

[root@server19 ~]# iscsiadm -m discovery -t st -p 192.168.122.1

[root@server19 ~]# iscsiadm -m node -l

Partition the iSCSI device with fdisk -cu and create a filesystem on it (see the sketch after the note below).

Note: this operation only needs to be done on one node; the other node picks up the changes automatically.
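A sketch of the partition-and-format step, assuming the iSCSI LUN shows up as /dev/sda (which the crm configuration below also assumes):

[root@server19 ~]# fdisk -cu /dev/sda    (create a single partition, /dev/sda1)

[root@server19 ~]# mkfs.ext4 /dev/sda1    (format it to match the Filesystem resource below)

[root@server25 ~]# partprobe /dev/sda    (on the other node, re-read the partition table; some setups log out of and back into the target instead)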

Perform the following steps on server19 or server25:

[root@server19 ~]# crm

crm(live)# configure

crm(live)configure# primitive iscsi ocf:heartbeat:Filesystem params device=/dev/sda1 directory=/var/www/html fstype=ext4 op monitor interval=30s

crm(live)configure# colocation apache-with-iscsi inf: apache iscsi

crm(live)configure# commit

crm(live)configure# quit
