Lab environment: VMware Workstation
OS: CentOS 5.8 x86_64
Edit the settings of the two virtual machines to add an extra NIC to each for heartbeat detection, plus an extra 4 GB disk; the two disks must be the same size.
The two machines are set up as follows:
centos1.mypharma.com 192.168.150.100, heartbeat link: 10.10.10.2 (VMnet2 segment)
centos2.mypharma.com 192.168.150.101, heartbeat link: 10.10.10.3 (VMnet2 segment)
The Heartbeat VIP is 192.168.150.128
1. Preparation before the experiment
① The contents of drbd1's hosts file:
[root@centos1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.150.100 centos1.mypharma.com
192.168.150.101 centos2.mypharma.com
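As a quick optional sanity check, name resolution between the two nodes can be verified, for example from centos1:
[root@centos1 ~]# ping -c 2 centos2.mypharma.com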
② drbd1's hostname:
[root@centos1 ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=centos1.mypharma.com
③ Disable iptables and SELinux
[root@centos1 ~]# setenforce 0
setenforce: SELinux is disabled
[root@centos1 ~]# service iptables stop
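To make both changes persistent across reboots (a small optional addition; the paths below are the standard CentOS 5 ones):
[root@centos1 ~]# chkconfig iptables off
[root@centos1 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config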
④ Check the disks
[root@centos1 ~]# fdisk -l
Disk /dev/sda: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 25 200781 83 Linux
/dev/sda2 26 1057 8289540 82 Linux swap / Solaris
/dev/sda3 1058 10443 75393045 83 Linux
Disk /dev/sdb: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
Perform the same operations on drbd2.
2. Installing DRBD
drbd1
yum -y install drbd83 kmod-drbd83
modprobe drbd
[root@centos1 ~]# lsmod | grep drbd
drbd 321608 0
drbd2
yum -y install drbd83 kmod-drbd83
modprobe drbd
[root@centos2 ~]# lsmod | grep drbd
drbd 321608 0
If the module is listed as shown above, DRBD has been installed successfully.
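The installed packages and the module version can also be confirmed, for example (cat /proc/drbd prints the module version once modprobe has succeeded):
[root@centos1 ~]# rpm -qa | grep drbd
[root@centos1 ~]# cat /proc/drbd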
The drbd.conf configuration file on the two machines is as follows (the configuration is identical on both):
[root@centos1 ~]# cat /etc/drbd.conf
global {
usage-count no;
}
common {
syncer { rate 30M; }
}
resource r0 {
protocol C;
handlers {
pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
# fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
# split-brain "/usr/lib/drbd/notify-split-brain.sh root";
# out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
# before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
# after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
}
startup {
# wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
wfc-timeout 120;
degr-wfc-timeout 120;
}
disk {
# on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
# no-disk-drain no-md-flushes max-bio-bvecs
on-io-error detach;
}
net {
# sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
# max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
# after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
max-buffers 2048;
cram-hmac-alg "sha1";
shared-secret "123456";
#allow-two-primaries;
}
syncer {
rate 30M;
# rate after al-extents use-rle cpu-mask verify-alg csums-alg
}
on centos1.mypharma.com {
device /dev/drbd0;
disk /dev/sdb;
address 10.10.10.2:7788;
meta-disk internal;
}
on centos2.mypharma.com {
device /dev/drbd0;
disk /dev/sdb;
address 10.10.10.3:7788;
meta-disk internal;
}
}
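Before creating the metadata, drbdadm can be asked to parse and print the resource definition, which catches configuration errors early (a quick optional check, run on both nodes):
[root@centos1 ~]# drbdadm dump r0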
Create the DRBD metadata
[root@centos1 ~]# drbdadm create-md r0
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
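Run the same drbdadm create-md r0 on centos2 as well. Once the drbd service is running on both nodes, the connection state can be checked, for example:
[root@centos1 ~]# service drbd start
[root@centos1 ~]# cat /proc/drbd
At this point both nodes normally report Secondary/Secondary with Inconsistent data, which is expected before one side is promoted.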
Make centos1 the DRBD primary node with the following commands:
[root@centos1 ~]# drbdsetup /dev/drbd0 primary -o
[root@centos1 ~]# drbdadm primary r0
[root@centos1 ~]# mkfs.ext3 /dev/drbd0
[root@centos1 ~]# mkdir -p /drbd
[root@centos1 ~]# mount /dev/drbd0 /drbd
[root@centos1 ~]# chkconfig drbd on
On centos2:
[root@centos2 ~]# mkdir -p /drbd
[root@centos2 ~]# chkconfig drbd on
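The initial full sync of /dev/sdb can then be watched from either node, for example:
[root@centos1 ~]# watch -n 1 cat /proc/drbd
When the sync finishes, the status line should show cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate.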
3. Installing and deploying Heartbeat
Install Heartbeat with yum on both machines; run the following command on each of them:
yum -y install heartbeat
① Edit /etc/ha.d/ha.cf
drbd1
[root@centos1 ~]# cat /etc/ha.d/ha.cf
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 15
ucast eth1 10.10.10.3
auto_failback off
node centos1.mypharma.com centos2.mypharma.com
drbd2
[root@centos2 ~]# cat /etc/ha.d/ha.cf
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 15
ucast eth1 10.10.10.2
auto_failback off
node centos1.mypharma.com centos2.mypharma.com
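Because ucast over eth1 is used, it is worth confirming that each node can reach the other across the dedicated heartbeat link, for example from centos1:
[root@centos1 ~]# ping -c 3 10.10.10.3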
② Edit the node authentication file authkeys
drbd1
[root@centos1 ~]# cat /etc/ha.d/authkeys
auth 1
1 crc
[root@centos1 ~]# chmod 600 /etc/ha.d/authkeys
drbd2
[root@centos2 ~]# cat /etc/ha.d/authkeys
auth 1
1 crc
[root@centos2 ~]# chmod 600 /etc/ha.d/authkeys
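Note that crc only provides an integrity check with no real authentication; if the heartbeat link is not an isolated, dedicated network, an sha1 key can be used instead, for example (the secret string below is just a placeholder):
auth 1
1 sha1 SomeSharedSecret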
③ Edit the cluster resource file /etc/ha.d/haresources
drbd1
[root@centos1 ~]# cat /etc/ha.d/haresources
centos1.mypharma.com IPaddr::192.168.150.128/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/drbd::ext3 killnfsd
drbd2
[root@centos2 ~]# cat /etc/ha.d/haresources
centos1.mypharma.com IPaddr::192.168.150.128/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/drbd::ext3 killnfsd
④ Edit /etc/ha.d/resource.d/killnfsd
drbd1
[root@centos1 ~]# cat /etc/ha.d/resource.d/killnfsd
killall -9 nfsd;/etc/init.d/nfs restart;exit 0
[root@centos1 ~]# chmod +x /etc/ha.d/resource.d/killnfsd
drbd2
[root@centos2 ~]# cat /etc/ha.d/resource.d/killnfsd
killall -9 nfsd;/etc/init.d/nfs restart;exit 0
[root@centos2 ~]# chmod +x /etc/ha.d/resource.d/killnfsd
⑤ Configure /etc/exports for the NFS service on both the primary and the standby machine; its contents are:
/drbd 192.168.150.0/255.255.255.0(rw,sync,no_root_squash,no_all_squash)
service portmap start
chkconfig portmap on
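Optionally, the NFS service itself can also be started and enabled at boot on both nodes (the killnfsd resource script restarts it during failover in any case) and the export table refreshed:
service nfs start
chkconfig nfs on
exportfs -rv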
On both machines, start DRBD and Heartbeat and set them to start automatically at boot:
service drbd start
chkconfig drbd on
service heartbeat start
chkconfig heartbeat on
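A simple way to verify the whole setup (a sketch, assuming an NFS client somewhere in the 192.168.150.0/24 network): mount the export through the VIP, stop Heartbeat on centos1, and check that centos2 takes over the VIP, the DRBD primary role and the /drbd mount.
On the client: mount -t nfs 192.168.150.128:/drbd /mnt
Then on centos1: service heartbeat stop
And on centos2, after the failover:
[root@centos2 ~]# ifconfig
[root@centos2 ~]# cat /proc/drbd
[root@centos2 ~]# df -h /drbd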