############################################################
##### Server architecture ##
############################################################
Server vm_test1:
NIC: eth0 192.168.1.213 - public-facing network
NIC: eth1 192.168.2.213 - dedicated link to the other server
Two disks
Server vm_test2:
NIC: eth0 192.168.1.214 - public-facing network
NIC: eth1 192.168.2.214 - dedicated link to the other server
Two disks
VIP through which MySQL serves clients: 192.168.1.215
############################################################
##### Preliminary setup ############
############################################################
1 Network configuration (required on both servers)
Omitted
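The network step is only noted above; as a sketch, a minimal static configuration for the inter-node interface on CentOS 6 might look like the following (addresses taken from the architecture section; the file lives at /etc/sysconfig/network-scripts/ifcfg-eth1, and options such as HWADDR are left out):

```shell
# Sketch of /etc/sysconfig/network-scripts/ifcfg-eth1 on vm_test1
# (vm_test2 would use IPADDR=192.168.2.214). ifcfg files are plain
# shell variable assignments read by the network init script.
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.2.213
NETMASK=255.255.255.0
```

Apply the change with `service network restart`.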
2 Configure the hostnames
vm_test1:
Change it for the running system (takes effect immediately):
[root@vm_test1 ~]# hostname vm_test1
Change the config file (so it survives a reboot):
[root@vm_test1 ~]# vi /etc/sysconfig/network
HOSTNAME=vm_test1
Update the hosts file:
[root@vm_test1 ~]# vi /etc/hosts
192.168.2.213 vm_test1
192.168.2.214 vm_test2
vm_test2:
Change it for the running system (takes effect immediately):
[root@vm_test2 ~]# hostname vm_test2
Change the config file (so it survives a reboot):
[root@vm_test2 ~]# vi /etc/sysconfig/network
HOSTNAME=vm_test2
Update the hosts file:
[root@vm_test2 ~]# vi /etc/hosts
192.168.2.213 vm_test1
192.168.2.214 vm_test2
3 Set up time synchronization (same on both servers)
vm_test1:
[root@vm_test1 ~]# yum clean all
[root@vm_test1 ~]# yum -y install ntp
[root@vm_test1 ~]# ntpdate us.pool.ntp.org; hwclock --systohc
vm_test2:
[root@vm_test2 ~]# yum clean all
[root@vm_test2 ~]# yum -y install ntp
[root@vm_test2 ~]# ntpdate us.pool.ntp.org; hwclock --systohc
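A single ntpdate run only corrects the clock once; one way to keep the nodes aligned over time (a sketch, assuming cron is preferred over running the ntpd daemon) is a root crontab entry:

```shell
# root's crontab (edit with: crontab -e) - hypothetical schedule:
# re-sync nightly and push the result to the hardware clock,
# the same two commands as the manual step above.
# 0 2 * * * /usr/sbin/ntpdate us.pool.ntp.org && /sbin/hwclock --systohc
```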
4 Disable SELinux (same on both servers)
vm_test1:
[root@vm_test1 ~]# vi /etc/selinux/config
SELINUX=disabled
vm_test2:
[root@vm_test2 ~]# vi /etc/selinux/config
SELINUX=disabled
5 Disable the firewall (firewall rules have not been tested yet, so turn it off for now)
vm_test1:
[root@vm_test1 ~]# service iptables stop
[root@vm_test1 ~]# chkconfig iptables off
vm_test2:
[root@vm_test2 ~]# service iptables stop
[root@vm_test2 ~]# chkconfig iptables off
6 Set up passwordless SSH between the nodes
vm_test1:
[root@vm_test1 ~]# ssh-keygen -t rsa -f /root/.ssh/id_rsa -P ''
[root@vm_test1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@vm_test2
vm_test2:
[root@vm_test2 ~]# ssh-keygen -t rsa -f /root/.ssh/id_rsa -P ''
[root@vm_test2 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@vm_test1
vm_test1:
[root@vm_test1 ~]# ssh root@vm_test2 -t "ifconfig eth1"
eth1 Link encap:Ethernet HWaddr 00:16:3E:4A:D0:C1
inet addr:192.168.2.214 Bcast:192.168.2.255 Mask:255.255.255.0
vm_test2:
[root@vm_test2 ~]# ssh root@vm_test1 -t "ifconfig eth1"
eth1 Link encap:Ethernet HWaddr 00:16:3E:AE:8C:3A
inet addr:192.168.2.213 Bcast:192.168.2.255 Mask:255.255.255.0 # passwordless login works
7 Reboot both servers (not needed if SELinux was already disabled)
#################################################################################
## Configure the second disk as an LVM logical volume for easy future expansion (identical on both servers) #
#################################################################################
[root@vm_test1 ~]# fdisk /dev/xvdb
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-382, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-382, default 382): +1G
Command (m for help): p
Disk /dev/xvdb: 3145 MB, 3145728000 bytes (some lines omitted)
......
Disk identifier: 0x67d350c9
Device Boot Start End Blocks Id System
/dev/xvdb1 1 132 1060258+ 83 Linux
Command (m for help): t
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)
Command (m for help): p
Disk /dev/xvdb: 3145 MB, 3145728000 bytes
......
Disk identifier: 0x67d350c9
Device Boot Start End Blocks Id System
/dev/xvdb1 1 132 1060258+ 8e Linux LVM
Command (m for help): w
The partition table has been altered!
[root@vm_test1 ~]# partx -a /dev/xvdb # re-read the new partition table
[root@vm_test1 ~]# pvcreate /dev/xvdb1 # create the PV
[root@vm_test1 ~]# vgcreate -s 16M data_group /dev/xvdb1 # create the VG, named data_group
[root@vm_test1 ~]# lvcreate -L 500M -n mysql_data data_group # create a 500M LV named mysql_data to hold the MySQL data
[root@vm_test1 ~]# mkfs.ext4 /dev/data_group/mysql_data # format it as ext4
[root@vm_test1 ~]# mkdir -p /data/mysql # create the directory that will hold the MySQL data
[root@vm_test1 ~]# mount /dev/data_group/mysql_data /data/mysql # mount it as a test
[root@vm_test1 ~]# cd /data/mysql/
[root@vm_test1 mysql]# touch aa.txt # being able to write confirms it works
[root@vm_test1 mysql]# cd
[root@vm_test1 ~]# umount /data/mysql # unmount it again
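The LVM layout exists precisely so the volume can grow later. Once this LV backs a DRBD device (next section), growing takes an extra step beyond plain lvextend; a hedged outline (shown for orientation only, not run here) assuming both backing LVs are grown first:

```shell
# Hypothetical future expansion under DRBD (sketch, run with care):
#   lvextend -L +500M /dev/data_group/mysql_data   # on BOTH nodes
#   drbdadm resize mysql_data                      # on the primary, after both LVs grew
#   resize2fs /dev/drbd0                           # on the primary; grows ext4 online
```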
############################################################################################
# Installing and configuring DRBD ##
############################################################################################
1 Install
vm_test1:
[root@vm_test1 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm
[root@vm_test1 ~]# yum -y --enablerepo=elrepo install drbd83-utils kmod-drbd83
vm_test2:
[root@vm_test2 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm
[root@vm_test2 ~]# yum -y --enablerepo=elrepo install drbd83-utils kmod-drbd83
2 Load the kernel module and verify it is available
vm_test1:
[root@vm_test1 ~]# modprobe drbd
[root@vm_test1 ~]# lsmod | grep drbd
drbd 332493 0
vm_test2:
[root@vm_test2 ~]# modprobe drbd
[root@vm_test2 ~]# lsmod | grep drbd
drbd 332493 0
3 Edit the config files and add the resource
vm_test1:
[root@vm_test1 ~]# cp /etc/drbd.conf /etc/drbd.conf.bak # back up the main config file
[root@vm_test1 ~]# vi /etc/drbd.conf # the defaults are fine, leave this file alone
[root@vm_test1 ~]# vi /etc/drbd.d/global_common.conf # edit the global config file
global {
usage-count no; # do not take part in DRBD's usage statistics
# minor-count dialog-refresh disable-ip-verification
}
common {
protocol C;
syncer {
rate 1000M; # sync bandwidth limit
}
}
# Add a resource config file
[root@vm_test1 ~]# vi /etc/drbd.d/mysql_data.res
resource mysql_data {
net {
cram-hmac-alg sha1; # hash algorithm used for peer authentication
shared-secret "yumao123456"; # shared secret
}
on vm_test1 { # each host section starts with "on" followed by the hostname; the {} holds that host's settings
device /dev/drbd0; # the DRBD device (/dev/drbdX) this resource exposes
disk /dev/data_group/mysql_data; # the backing device /dev/drbd0 sits on
address 192.168.2.213:7789;
meta-disk internal;
}
on vm_test2 {
device /dev/drbd0;
disk /dev/data_group/mysql_data;
address 192.168.2.214:7789;
meta-disk internal;
}
}
vm_test2:
Same as on vm_test1
4 Create the resource metadata
vm_test1:
[root@vm_test1 ~]# drbdadm create-md mysql_data
If this fails with "exited with code 40", first overwrite the start of the backing device:
[root@vm_test1 ~]# dd if=/dev/zero of=/dev/data_group/mysql_data bs=1M count=5 (# run this on one node only)
[root@vm_test1 ~]# drbdadm create-md mysql_data # then create the metadata again
[root@vm_test1 ~]# drbdadm up mysql_data # if this reports the error below, run the two commands that follow
0: Failure: (124) Device is attached to a disk (use detach first)
[root@vm_test1 ~]# drbdadm detach mysql_data # the first create attempt failed, so detach the resource
[root@vm_test1 ~]# drbdadm attach mysql_data # and attach it again
vm_test2:
Same as on vm_test1
5 Promote the primary node
vm_test1:
[root@vm_test1 ~]# drbdadm -- --overwrite-data-of-peer primary mysql_data (# run only on the node that is to become primary)
[root@vm_test1 ~]# drbdadm primary --force mysql_data (# alternative syntax; run only on the primary)
[root@vm_test1 ~]# cat /proc/drbd (# watch the sync progress between the two nodes)
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by [email protected], 2014-11-24 14:51:37
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
ns:21120 nr:0 dw:0 dr:21784 al:0 bm:1 lo:0 pe:2 ua:0 ap:0 ep:1 wo:f oos:503140
[>....................] sync'ed: 4.7% (503140/524236)K
finish: 0:28:26 speed: 284 (272) K/sec
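When scripting the promotion it helps to wait for the initial sync to finish; a small hypothetical helper (not part of the setup itself, and assuming the 8.3-style "sync'ed: NN.N%" line shown above) can pull the percentage out of /proc/drbd text:

```shell
# Hypothetical helper: print the sync percentage found in /proc/drbd
# style output read from stdin (prints nothing once syncing is done).
drbd_sync_pct() {
  grep -o "sync'ed: *[0-9.]*%" | grep -o '[0-9.]*'
}

# On a live node one could poll it, e.g.:
#   while [ -n "$(drbd_sync_pct < /proc/drbd)" ]; do sleep 10; done
```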
6 Test mounting it (the following runs only on the primary node)
vm_test1:
[root@vm_test1 ~]# mkfs.ext4 /dev/drbd0 # format
[root@vm_test1 ~]# mount /dev/drbd0 /data/mysql # mount
[root@vm_test1 ~]# cd /data/mysql/
[root@vm_test1 mysql]# ls
lost+found
[root@vm_test1 mysql]# touch 111.txt
[root@vm_test1 mysql]# cd
[root@vm_test1 ~]# umount /data/mysql # unmount
7 Stop the drbd service and hand its startup over to Pacemaker (run on both servers)
vm_test1:
[root@vm_test1 ~]# chkconfig drbd off # disable drbd at boot; Pacemaker will start it instead
[root@vm_test1 ~]# chkconfig --list drbd
drbd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
[root@vm_test1 ~]# service drbd stop
Stopping all DRBD resources: .
vm_test2:
Same as on vm_test1
#############################################################################
# Installing and configuring Corosync + Pacemaker #
#############################################################################
1 Install Corosync (on both servers)
vm_test1:
[root@vm_test1 ~]# yum -y install corosync
vm_test2:
Same as on vm_test1
2 Configure Corosync
vm_test1:
[root@vm_test1 ~]# cd /etc/corosync/
[root@vm_test1 corosync]# cp corosync.conf.example corosync.conf
[root@vm_test1 corosync]# vi /etc/corosync/corosync.conf # edit the config file (strip the explanatory inline comments below before using this in production)
# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {
version: 2
secauth: on
threads: 0
interface {
ringnumber: 0
bindnetaddr: 192.168.1.0 # network address to bind to
mcastaddr: 226.56.3.1 # multicast address
mcastport: 5405 # multicast port
ttl: 1
}
}
logging {
fileline: off
to_stderr: no
to_logfile: yes
logfile: /var/log/cluster/corosync.log
to_syslog: no
debug: off
timestamp: on
logger_subsys {
subsys: AMF
debug: off
}
}
amf {
mode: disabled
}
service {
ver: 0
name: pacemaker # hand resource management over to pacemaker
}
aisexec {
user: root
group: root
}
3 Copy the config file from vm_test1 to vm_test2
vm_test1:
[root@vm_test1 ~]# scp -p /etc/corosync/corosync.conf root@vm_test2:/etc/corosync/
4 Generate the authkey
vm_test1:
[root@vm_test1 ~]# cd /etc/corosync/
[root@vm_test1 corosync]# corosync-keygen
5 Copy the key from vm_test1 to vm_test2
vm_test1:
[root@vm_test1 corosync]# scp /etc/corosync/authkey root@vm_test2:/etc/corosync/ # make sure the key's permissions stay 400
6 Start the service (on both nodes)
vm_test1:
[root@vm_test1 ~]# service corosync start
vm_test2:
Same as on vm_test1
7 Install Pacemaker (on both nodes)
vm_test1:
[root@vm_test1 ~]# yum -y install pacemaker
vm_test2:
Same as on vm_test1
8 Install crmsh (on both nodes)
vm_test1:
[root@vm_test1 ~]# yum install -y python-dateutil python-lxml
[root@vm_test1 ~]# tar zxvf pssh-2.3.1.tar.gz
[root@vm_test1 ~]# cd pssh-2.3.1
[root@vm_test1 pssh-2.3.1]# python setup.py install
[root@vm_test1 pssh-2.3.1]# cd ../
[root@vm_test1 ~]# tar zxvf PyYAML-3.11.tar.gz
[root@vm_test1 ~]# cd PyYAML-3.11
[root@vm_test1 PyYAML-3.11]# python setup.py install
[root@vm_test1 PyYAML-3.11]# cd ../
[root@vm_test1 rpm]# rpm -ivh crmsh-2.1-1.6.x86_64.rpm --nodeps
vm_test2:
Same as on vm_test1
9 Restart corosync (on both nodes)
vm_test1:
[root@vm_test1 ~]# /etc/init.d/corosync restart
vm_test2:
Same as on vm_test1
10 Check the startup state
vm_test1:
[root@vm_test1 ~]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log # check that the corosync engine started cleanly
[root@vm_test1 ~]# grep TOTEM /var/log/cluster/corosync.log # check that the initial membership notifications went out
[root@vm_test1 ~]# grep ERROR: /var/log/cluster/corosync.log # check for errors during startup
[root@vm_test1 ~]# grep pcmk_startup /var/log/cluster/corosync.log # check that pacemaker started correctly
[root@vm_test1 ~]# crm status # show the cluster status
Last updated: Sat Apr 18 23:06:51 2015
Last change: Sat Apr 18 22:33:12 2015
Stack: classic openais (with plugin)
Current DC: vm_test1 - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
0 Resources configured
Online: [ vm_test1 vm_test2 ] # both nodes are online; the DC is vm_test1
####################################################################
### Installing MySQL (identical on both servers) #
####################################################################
vm_test1:
[root@vm_test1 ~]# rpm --import http://yum.mariadb.org/RPM-GPG-KEY-MariaDB
[root@vm_test1 ~]# vim /etc/yum.repos.d/MariaDB.repo
# MariaDB 10.0 CentOS repository list - created 2014-03-15 08:00 UTC
# http://mariadb.org/mariadb/repositories/
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.0/centos6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
[root@vm_test1 ~]# yum clean all
[root@vm_test1 ~]# yum -y install MariaDB-server MariaDB-client # install MariaDB
[root@vm_test1 ~]# id mysql
uid=498(mysql) gid=498(mysql) groups=498(mysql) # make sure the mysql uid/gid match on both servers
[root@vm_test1 ~]# chkconfig mysql off # disable the mysql service at boot
vm_test2:
Same as on vm_test1
#########################################################################
# Start drbd once, mount it on the primary, and create a test database #####
#########################################################################
1 MySQL configuration and initialization
vm_test1:
[root@vm_test1 ~]# service drbd start
vm_test2:
[root@vm_test2 ~]# service drbd start
vm_test1:
[root@vm_test1 ~]# drbdadm primary mysql_data
[root@vm_test1 ~]# mount /dev/drbd0 /data/mysql
vm_test1:
[root@vm_test1 ~]# chown -R mysql:mysql /data/mysql # give the directory to the mysql user and group
vm_test2:
[root@vm_test2 ~]# chown -R mysql:mysql /data/mysql
vm_test1:
[root@vm_test1 ~]# mv /var/lib/mysql/* /data/mysql/
[root@vm_test1 ~]# rm -rf /var/lib/mysql
[root@vm_test1 ~]# ln -s /data/mysql /var/lib/ # create the symlink
[root@vm_test1 ~]# service mysql start # start the mysql service
[root@vm_test1 ~]# mysql_secure_installation # initialize mysql
2 Create a database for testing
vm_test1:
[root@vm_test1 ~]# mysql -u root -p
Enter password:
MariaDB [(none)]> create database mysql_testdb;
#####################################################################################
# DRBD switchover: set up and start mysql on the secondary and check that the database created on the primary is there #
#####################################################################################
1 Manually move the resource from vm_test1 to vm_test2
vm_test1:
[root@vm_test1 ~]# service mysql stop # stop mysql on the primary
[root@vm_test1 ~]# umount /data/mysql # unmount the drbd filesystem
[root@vm_test1 ~]# drbdadm secondary mysql_data # demote to drbd secondary
2 Set up mysql on vm_test2
vm_test2:
[root@vm_test2 ~]# drbdadm primary mysql_data # promote to primary
[root@vm_test2 ~]# mount /dev/drbd0 /data/mysql # mount
[root@vm_test2 ~]# rm -rf /var/lib/mysql # remove the default mysql data directory
[root@vm_test2 ~]# ln -s /data/mysql /var/lib/ # create the symlink
[root@vm_test2 ~]# service mysql start # start the mysql service
Starting MySQL. SUCCESS!
[root@vm_test2 ~]# mysql -u root -p # log in to mysql
Enter password:
MariaDB [(none)]> show databases;
+---------------------+
| Database |
+---------------------+
| #mysql50#lost+found |
| information_schema |
| mysql |
| mysql_testdb |
| performance_schema |
+---------------------+ # mysql_testdb is there, so the data really is replicated
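The same check can be scripted; a hypothetical helper (not part of the original setup) greps a database name out of the table that `show databases` prints:

```shell
# Hypothetical helper: succeed (exit 0) when the named database appears
# in "show databases" table output read from stdin.
has_database() {
  grep -q "| *$1 *|"
}

# On a live node this could be driven by the client, e.g.:
#   mysql -u root -p -t -e 'show databases;' | has_database mysql_testdb
```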
############################################################################
#### Stop all resources, then configure them through crm ########
############################################################################
vm_test2:
[root@vm_test2 ~]# service mysql stop
Shutting down MySQL.. SUCCESS!
[root@vm_test2 ~]# umount /data/mysql
[root@vm_test2 ~]# service drbd stop
Stopping all DRBD resources: .
vm_test1:
[root@vm_test1 ~]# service drbd stop
Stopping all DRBD resources: .
# Note: mysql on vm_test1 was already stopped above
############################################################################
#### Configuring the resources with crm ########
############################################################################
The crm resource configuration only needs to be done on one node; here we use vm_test1.
1: Disable STONITH, ignore quorum, and set the default resource stickiness
[root@vm_test1 rpm]# crm
crm(live)# configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# rsc_defaults resource-stickiness=100
crm(live)configure# show
node vm_test1
node vm_test2
property cib-bootstrap-options: \
dc-version=1.1.11-97629de \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes=2 \
stonith-enabled=false \
no-quorum-policy=ignore
rsc_defaults rsc-options: \
resource-stickiness=100
crm(live)configure# verify
crm(live)configure# commit
2: Add the drbd resource
crm(live)configure# primitive mysql_drbd ocf:linbit:drbd params drbd_resource=mysql_data op start timeout=240 op stop timeout=100 op monitor role=Master interval=20 timeout=30 op monitor role=Slave interval=30 timeout=30
crm(live)configure# master ms_mysql_drbd mysql_drbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
crm(live)configure# verify
crm(live)configure# commit
3: Add the filesystem mount resource
crm(live)configure# primitive drbd_fs ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/data/mysql fstype=ext4 op monitor interval=30s timeout=40s op start timeout=60 op stop timeout=60 on-fail=restart
crm(live)configure# verify
4: Colocate the filesystem resource with the drbd master and order the filesystem mount after the drbd promotion. # Note the space after the colon
crm(live)configure# colocation drbd_with_mysqlfs inf: drbd_fs ms_mysql_drbd:Master
crm(live)configure# order fs_after_ms_mysql_drbd mandatory: ms_mysql_drbd:promote drbd_fs:start
crm(live)configure# verify
crm(live)configure# commit
5: Add the mysql resource, colocate it with the filesystem, and start it after the filesystem
crm(live)configure# primitive mysqld lsb:mysql
crm(live)configure# colocation mysql_with_fs inf: mysqld drbd_fs
crm(live)configure# order mysqld_after_fs mandatory: drbd_fs mysqld
crm(live)configure# verify
crm(live)configure# commit
6: Add the VIP resource
crm(live)configure# primitive mysql_ip ocf:heartbeat:IPaddr params ip=192.168.1.215 op monitor interval=30 timeout=20 on-fail=restart
crm(live)configure# colocation mysql_ip_with_mysqld inf: mysql_ip mysqld
crm(live)configure# order mysql_ip_after_mysqld mandatory: mysqld mysql_ip
crm(live)configure# verify
crm(live)configure# commit
7: Check the running state
[root@vm_test1 ~]# crm status
Last updated: Wed Apr 22 22:36:49 2015
Last change: Wed Apr 22 22:36:48 2015
Stack: classic openais (with plugin)
Current DC: vm_test2 - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
5 Resources configured
Online: [ vm_test1 vm_test2 ]
Master/Slave Set: ms_mysql_drbd [mysql_drbd]
Masters: [ vm_test2 ]
Slaves: [ vm_test1 ]
drbd_fs (ocf::heartbeat:Filesystem): Started vm_test2
mysqld (lsb:mysql): Started vm_test2
mysql_ip (ocf::heartbeat:IPaddr): Started vm_test2
8: Put vm_test2 in standby and check the state again
[root@vm_test1 ~]# crm node standby vm_test2
[root@vm_test1 ~]# crm status
Last updated: Wed Apr 22 22:38:08 2015
Last change: Wed Apr 22 22:37:59 2015
Stack: classic openais (with plugin)
Current DC: vm_test2 - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
5 Resources configured
Node vm_test2: standby
Online: [ vm_test1 ]
Master/Slave Set: ms_mysql_drbd [mysql_drbd]
Masters: [ vm_test1 ]
Stopped: [ vm_test2 ]
drbd_fs (ocf::heartbeat:Filesystem): Started vm_test1
mysqld (lsb:mysql): Started vm_test1
mysql_ip (ocf::heartbeat:IPaddr): Started vm_test1