I: Corosync + Pacemaker
Pacemaker is the most widely used CRM (cluster resource manager). It started as the resource manager inside Heartbeat v3 and was later split out as a standalone project; Corosync + Pacemaker is the most popular high-availability cluster stack.
II: DRBD
DRBD (Distributed Replicated Block Device) consists of a kernel module and accompanying user-space scripts and is used to build highly available clusters. It mirrors an entire block device over the network; you can think of it as RAID 1 over the network.
III: Lab topology
Two nodes, node3 (172.16.16.3) and node4 (172.16.16.4), run a DRBD-backed MySQL instance that clients reach through the VIP 172.16.16.8.
IV: Lab environment (CentOS 6.5, x86_64)
drbd-8.4.3-33.el6.x86_64.rpm
drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm
crmsh-1.2.6-4.el6.x86_64.rpm
mariadb-5.5.36-linux-x86_64.tar.gz
corosync.x86_64-1.4.1-17.el6
V: Configuration
1) Configure hostname resolution between the nodes
Configure node3
[root@node3 ~]# uname -n
node3
[root@node3 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.0.1  server.magelinux.com server
172.16.16.1 node1
172.16.16.2 node2
172.16.16.3 node3
172.16.16.4 node4
172.16.16.5 node5
Configure node4
[root@node4 ~]# uname -n
node4
[root@node4 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.0.1  server.magelinux.com server
172.16.16.1 node1
172.16.16.2 node2
172.16.16.3 node3
172.16.16.4 node4
172.16.16.5 node5
Synchronize time on all nodes
[root@node3 ~]# ntpdate 172.16.0.1
[root@node4 ~]# ntpdate 172.16.0.1      # 172.16.0.1 is the time server
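To keep the clocks in sync afterwards, a periodic job can be added on both nodes (a sketch; this step is not part of the original procedure):
[root@node3 ~]# (crontab -l 2>/dev/null; echo '*/10 * * * * /usr/sbin/ntpdate 172.16.0.1 &>/dev/null') | crontab -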
Set up SSH key trust between the nodes
On node4:
[root@node4 ~]# ssh-keygen -t rsa -f /root/.ssh/id_rsa -P ''
[root@node4 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node3      # send the key to node3
On node3:
[root@node3 ~]# ssh-keygen -t rsa -f /root/.ssh/id_rsa -P ''
[root@node3 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node4      # send the key to node4
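A quick check that the trust works, for example:
[root@node3 ~]# ssh node4 'uname -n'    # should print node4 without prompting for a password
node4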
2) Install and configure Corosync
node3
[root@node3 ~]# yum install corosync
node4
[root@node4 ~]# yum install corosync
Configure Corosync on node3
[root@node3 ~]# cd /etc/corosync/
[root@node3 corosync]# cp corosync.conf.example corosync.conf
[root@node3 corosync]# vim corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
        version: 2
        secauth: on                     # enable authentication
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 172.16.0.0 # network used for heartbeat traffic
                mcastaddr: 226.94.16.1  # multicast address for heartbeat messages
                mcastport: 5405
                ttl: 1
        }
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log  # log file location
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}

service {
        ver: 0
        name: pacemaker
}

aisexec {
        user: root
        group: root
}
Generate the authentication key
The first two commands below are only there to save time: /dev/random gathers entropy from keyboard activity, and if the entropy pool is too small you would have to keep hammering the keyboard until the key is complete, so we temporarily point /dev/random at /dev/urandom.
[root@node3 corosync]# mv /dev/{random,random.bak}
[root@node3 corosync]# ln -s /dev/urandom /dev/random
[root@node3 corosync]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Writing corosync key to /etc/corosync/authkey.
Check that the key file was generated
[root@node3 corosync]# ll
total 24
-r-------- 1 root root 128 Oct 11 19:52 authkey    # this is the key file
-rw-r--r-- 1 root root 537 Oct 11 19:49 corosync.conf
-rw-r--r-- 1 root root 445 Nov 22 2013 corosync.conf.example
-rw-r--r-- 1 root root 1084 Nov 22 2013 corosync.conf.example.udpu
drwxr-xr-x 2 root root 4096 Nov 22 2013 service.d
drwxr-xr-x 2 root root 4096 Nov 22 2013 uidgid.d
Copy the configuration file and the key to node4
[root@node3 corosync]# scp authkey corosync.conf node4:/etc/corosync/
authkey                                       100%  128     0.1KB/s   00:00
corosync.conf                                 100%  537     0.5KB/s   00:00
3) Install and configure Pacemaker and crmsh
[root@node3 ~]# yum install pacemaker
First obtain the crmsh-1.2.6-4.el6.x86_64.rpm package
[root@node3 ~]# rpm -ivh crmsh-1.2.6-4.el6.x86_64.rpm
error: Failed dependencies:
        pssh is needed by crmsh-1.2.6-4.el6.x86_64
        python-dateutil is needed by crmsh-1.2.6-4.el6.x86_64
        python-lxml is needed by crmsh-1.2.6-4.el6.x86_64
Resolve the dependencies: install the two packages available from the repositories, then install crmsh with --nodeps (pssh is not available from the repositories used here).
[root@node3 ~]# yum install python-dateutil python-lxml
[root@node3 ~]# rpm -ivh crmsh-1.2.6-4.el6.x86_64.rpm --nodeps
Preparing...                ########################################### [100%]
   1:crmsh                  ########################################### [100%]
4) Install Pacemaker and crmsh on node4 the same way as on node3
Start Corosync (which starts Pacemaker as a plugin, ver: 0) on node3 and node4
[root@node3 ~]# service corosync start
[root@node4 ~]# service corosync start
Check whether the Corosync engine started successfully
[root@node3 ~]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Oct 11 20:19:35 corosync [MAIN  ] Corosync Cluster Engine ('1.4.1'): started and ready to provide service.     # started and ready
Oct 11 20:19:35 corosync [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'
Check that the initial membership notifications were sent out correctly
[root@node3 ~]# grep TOTEM /var/log/cluster/corosync.log
Oct 11 20:19:35 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
Oct 11 20:19:35 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Oct 11 20:19:35 corosync [TOTEM ] The network interface [172.16.16.3] is now up.
Check that Pacemaker started properly
[root@node3 ~]# grep pcmk_startup /var/log/cluster/corosync.log
Oct 11 20:19:35 corosync [pcmk  ] info: pcmk_startup: CRM: Initialized
Oct 11 20:19:35 corosync [pcmk  ] Logging: Initialized pcmk_startup
Oct 11 20:19:35 corosync [pcmk  ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
Oct 11 20:19:35 corosync [pcmk  ] info: pcmk_startup: Service: 9
Oct 11 20:19:35 corosync [pcmk  ] info: pcmk_startup: Local hostname: node3
Check the cluster status
[root@node3 ~]# crm status
Last updated: Sat Oct 11 20:29:24 2014
Last change: Sat Oct 11 20:19:35 2014 via crmd on node4
Stack: classic openais (with plugin)
Current DC: node4 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
0 Resources configured

Online: [ node3 node4 ]         # both nodes are online
5) Install DRBD
First obtain the DRBD packages drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm and drbd-8.4.3-33.el6.x86_64.rpm
[root@node3 ~]# rpm -ivh drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm   # install the kmdl (kernel module) package first
[root@node3 ~]# rpm -ivh drbd-8.4.3-33.el6.x86_64.rpm
6) Install on node4 the same way as on node3
7) Configure DRBD
[root@node3 ~]# cat /etc/drbd.d/global_common.conf
global {
        usage-count no;         # whether LINBIT may collect anonymous DRBD usage statistics; we opt out here
        # minor-count dialog-refresh disable-ip-verification
}

common {
        handlers {
                # pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                # pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                # local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        }
        startup {
                # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        }
        options {
                # cpu-mask on-no-data-accessible
        }
        disk {
                on-io-error detach;             # add this line: detach the backing device on I/O errors
        }
        net {
                cram-hmac-alg "sha1";
                shared-secret "mydrbdlab";      # add the peer authentication algorithm and shared secret
        }
}
Define the resource, in /etc/drbd.d/web.res on node3:
resource web {
        device /dev/drbd0;
        disk /dev/sda3;                 # /dev/sda3 is a partition created in advance; node4 must have an identical one
        on node3 {                      # the "on" names must match the node hostnames
                address 172.16.16.3:7788;
                meta-disk internal;
        }
        on node4 {
                address 172.16.16.4:7788;
                meta-disk internal;
        }
}
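If the backing partition does not exist yet, it has to be created on both nodes first; a rough sketch (device name and size are only examples, not from the original transcript):
[root@node3 ~]# fdisk /dev/sda                  # n -> new primary partition (e.g. +10G), then w to write
[root@node3 ~]# partx -a /dev/sda               # ask the kernel to re-read the partition table
[root@node3 ~]# grep sda3 /proc/partitions      # confirm the partition is visible; repeat the same steps on node4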
Copy the DRBD configuration files to node4
[root@node3 drbd.d]# scp global_common.conf web.res node4:/etc/drbd.d/
Initialize the resource on node3 and node4
[root@node3 drbd.d]# drbdadm create-md web
[root@node4 drbd.d]# drbdadm create-md web
Start DRBD
[root@node3 drbd.d]# service drbd start     # both sides need to be started at the same time
[root@node4 drbd.d]# service drbd start
Check the state on both sides
Initially both sides are in the Secondary state
[root@node3 drbd.d]# drbd-overview
  0:web/0  Connected Secondary/Secondary Inconsistent/Inconsistent C r-----
[root@node4 drbd.d]# drbd-overview
  0:web/0  Connected Secondary/Secondary Inconsistent/Inconsistent C r-----
Promote node3 to primary
[root@node3 drbd.d]# drbdadm -- --overwrite-data-of-peer primary web
[root@node3 drbd.d]# drbd-overview
  0:web/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@node4 drbd.d]# drbd-overview
  0:web/0  Connected Secondary/Secondary Inconsistent/Inconsistent C r-----
[root@node4 drbd.d]# drbd-overview
  0:web/0  Connected Secondary/Primary UpToDate/UpToDate C r-----
As you can see, node3 has gone from Secondary to Primary, i.e. node3 is now the primary node.
Format and mount the device
[root@node3 ~]# mke2fs -t ext4 /dev/drbd0   # format on the primary node, then mount
[root@node3 ~]# mount /dev/drbd0 /mnt
Make node4 the primary node
On node3:
[root@node3 ~]# umount /mnt
[root@node3 ~]# drbdadm secondary web
[root@node3 ~]# drbd-overview
  0:web/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----
node3 has gone from Primary back to Secondary.
On node4:
[root@node4 ~]# drbdadm primary web
[root@node4 ~]# drbd-overview
  0:web/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@node4 ~]# mount /dev/drbd0 /mnt
[root@node4 ~]# cd /mnt
[root@node4 mnt]# ls
lost+found
node4 has become the Primary and the device can be mounted on node4.
This shows that DRBD is working as expected.
8) Install MySQL (MariaDB)
Create the mysql user and group on node3
[root@node3 ~]# groupadd -g 3306 mysql
[root@node3 ~]# useradd -u 3306 -g mysql -s /sbin/nologin -M mysql
[root@node3 ~]# id mysql
uid=3306(mysql) gid=3306(mysql) groups=3306(mysql)
Create the same user and group on node4
Install MySQL on node3
First obtain the MariaDB tarball mariadb-5.5.36-linux-x86_64.tar.gz
[root@node3 ~]# tar xf mariadb-5.5.36-linux-x86_64.tar.gz -C /usr/local/
[root@node3 ~]# cd /usr/local/
[root@node3 local]# ln -sv mariadb-5.5.36-linux-x86_64 mysql
`mysql' -> `mariadb-5.5.36-linux-x86_64'
[root@node3 local]# cd mysql
[root@node3 mysql]# chown root.mysql ./*
[root@node3 mysql]# ll
total 212
drwxr-xr-x  2 root mysql   4096 Oct 11 21:54 bin
-rw-r--r--  1 root mysql  17987 Feb 24  2014 COPYING
-rw-r--r--  1 root mysql  26545 Feb 24  2014 COPYING.LESSER
drwxr-xr-x  3 root mysql   4096 Oct 11 21:54 data
drwxr-xr-x  2 root mysql   4096 Oct 11 21:55 docs
drwxr-xr-x  3 root mysql   4096 Oct 11 21:55 include
-rw-r--r--  1 root mysql   8694 Feb 24  2014 INSTALL-BINARY
drwxr-xr-x  3 root mysql   4096 Oct 11 21:55 lib
drwxr-xr-x  4 root mysql   4096 Oct 11 21:54 man
drwxr-xr-x 11 root mysql   4096 Oct 11 21:55 mysql-test
-rw-r--r--  1 root mysql 108813 Feb 24  2014 README
drwxr-xr-x  2 root mysql   4096 Oct 11 21:55 scripts
drwxr-xr-x 27 root mysql   4096 Oct 11 21:55 share
drwxr-xr-x  4 root mysql   4096 Oct 11 21:55 sql-bench
drwxr-xr-x  4 root mysql   4096 Oct 11 21:54 support-files
Provide a configuration file
[root@node3 mysql]# cp support-files/my-large.cnf /etc/my.cnf
cp: overwrite `/etc/my.cnf'? y
[root@node3 mysql]# vim /etc/my.cnf
Add one line in the [mysqld] section:
datadir = /mydata/data
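After the edit, the [mysqld] section of /etc/my.cnf looks roughly like this (abridged; only the datadir line is new, the other values come from my-large.cnf):
[mysqld]
port            = 3306
socket          = /tmp/mysql.sock
datadir         = /mydata/data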
Mount the DRBD device at /mydata/data (node3 must be the DRBD Primary again at this point; if node4 still holds the device, unmount it there, demote node4 and promote node3 as shown earlier)
[root@node3 mysql]# mkdir -pv /mydata/data
[root@node3 mysql]# mount /dev/drbd0 /mydata/data
[root@node3 mysql]# chown -R mysql.mysql /mydata
Initialize MySQL
[root@node3 mysql]# scripts/mysql_install_db --datadir=/mydata/data/ --basedir=/usr/local/mysql --user=mysql
Provide an init script for MySQL
[root@node3 mysql]# cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld
[root@node3 mysql]# chmod +x /etc/init.d/mysqld
[root@node3 mysql]# service mysqld start
Starting MySQL....                                         [  OK  ]
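Because the cluster will take over control of mysqld later on, the service is usually also prevented from starting at boot; this step is not in the original transcript, but would look like:
[root@node3 mysql]# chkconfig --add mysqld
[root@node3 mysql]# chkconfig mysqld off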
Provide the MySQL client
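The client binaries live in /usr/local/mysql/bin; one way to put them on the PATH (a sketch, the exact step is not shown in the original) is:
[root@node3 mysql]# echo 'export PATH=/usr/local/mysql/bin:$PATH' > /etc/profile.d/mysql.sh
[root@node3 mysql]# source /etc/profile.d/mysql.sh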
Install MySQL on node4 the same way as on node3 (binaries and user/group are needed; the data directory does not have to be initialized again, since it already lives on the DRBD device, and the config file and init script are copied over below).
Make node4 the primary node
On node3, release the device and demote; on node4, promote:
[root@node3 ~]# umount /mydata/data
[root@node3 ~]# drbdadm secondary web
[root@node3 ~]# drbd-overview
  0:web/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----
[root@node4 ~]# drbdadm primary web
[root@node4 ~]# drbd-overview
  0:web/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
Mount the DRBD device on node4
[root@node4 mysql]# mkdir -pv /mydata/data
[root@node4 mysql]# mount /dev/drbd0 /mydata/data
[root@node4 mysql]# chown -R mysql.mysql /mydata
Copy the MySQL configuration file and init script from node3 to the corresponding locations on node4.
[root@node3 mysql]# scp /etc/my.cnf node4:/etc/
my.cnf                                        100% 4924     4.8KB/s   00:00
[root@node3 mysql]# scp /etc/init.d/mysqld node4:/etc/init.d/
mysqld                                        100%   12KB  11.6KB/s   00:00
Test whether MySQL starts on node4
[root@node4 ~]# service mysqld start
Starting MySQL... [ OK ]
OK, it starts successfully.
That completes the MySQL setup.
9) Configure the cluster resources with crmsh
Before defining resources in crmsh, the manually started DRBD service has to be stopped.
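The mysqld service started by hand earlier should likewise be stopped and the DRBD device unmounted on whichever node is still using them, so that only the cluster controls these resources from now on; assuming node4 is the currently active node, for example:
[root@node4 ~]# service mysqld stop
[root@node4 ~]# umount /mydata/data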
Stop DRBD and disable it at boot
On node3:
[root@node3 ~]# service drbd stop
Stopping all DRBD resources: .
[root@node3 ~]# chkconfig drbd off
[root@node3 ~]# chkconfig drbd --list
drbd            0:off   1:off   2:off   3:off   4:off   5:off   6:off
On node4:
[root@node4 ~]# service drbd stop
Stopping all DRBD resources: .
[root@node4 ~]# chkconfig drbd off
[root@node4 ~]# chkconfig drbd --list
drbd            0:off   1:off   2:off   3:off   4:off   5:off   6:off
Disable STONITH and ignore loss of quorum (we only have two nodes)
[root@node3 ~]# crm
crm(live)# configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# verify
crm(live)configure# commit
Add the DRBD resource
[root@node3 ~]# crm
crm(live)# configure
crm(live)configure# primitive mysqldrbd ocf:linbit:drbd params drbd_resource=web op start timeout=240 op stop timeout=100 op monitor role=Master interval=20 timeout=30 op monitor role=Slave interval=30 timeout=30
crm(live)configure# ms ms_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
Add the filesystem resource
crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/mydata/data fstype=ext4 op start timeout=60 op stop timeout=60
crm(live)configure# verify
crm(live)configure# colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master      # mystore must run on the node where the DRBD master runs
crm(live)configure# order mystore_after_ms_mysqldrbd mandatory: ms_mysqldrbd:promote mystore:start     # mystore starts only after the DRBD resource has been promoted
crm(live)configure# verify
Add the MySQL resource
crm(live)configure# primitive mysqld lsb:mysqld
crm(live)configure# colocation mysqld_with_mystore inf: mysqld mystore         # mysqld must run together with mystore
crm(live)configure# verify
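The final configuration shown further down also contains an order constraint so that mysqld only starts after the filesystem is mounted; it would be added with something along these lines:
crm(live)configure# order mysqld_after_mystore inf: mystore mysqld      # mysqld starts after mystore
crm(live)configure# verify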
Add the VIP resource
crm(live)configure# primitive myip ocf:heartbeat:IPaddr params ip=172.16.16.8 op monitor interval=30s timeout=20s
crm(live)configure# colocation myip_with_ms_mysqldrbd_master inf: myip ms_mysqldrbd:Master     # the VIP must run on the node where the DRBD master runs
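None of the definitions take effect until they are committed, so finish with:
crm(live)configure# verify
crm(live)configure# commit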
Review the configuration
crm(live)configure# show
node node3 \
        attributes standby="on"
node node4
primitive myip ocf:heartbeat:IPaddr \
        params ip="172.16.16.8" \
        op monitor interval="30s" timeout="20s"
primitive mysqld lsb:mysqld
primitive mysqldrbd ocf:linbit:drbd \
        params drbd_resource="web" \
        op start timeout="240" interval="0" \
        op stop timeout="100" interval="0" \
        op monitor role="Master" interval="20" timeout="30" \
        op monitor role="Slave" interval="30" timeout="30"
primitive mystore ocf:heartbeat:Filesystem \
        params device="/dev/drbd0" directory="/mydata/data" fstype="ext4" \
        op start timeout="60" interval="0" \
        op stop timeout="60" interval="0"
ms ms_mysqldrbd mysqldrbd \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" target-role="Started"
colocation myip_with_ms_mysqldrbd inf: ms_mysqldrbd:Master myip
colocation mysqld_with_mystore inf: mysqld mystore
colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master
order mysqld_after_mystore inf: mystore mysqld
order mystore_after_ms_mysqldrbd inf: ms_mysqldrbd:promote mystore:start
property $id="cib-bootstrap-options" \
        stonith-enabled="false" \
        no-quorum-policy="ignore" \
        dc-version="1.1.10-14.el6-368c726" \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes="2" \
        last-lrm-refresh="1413090998"
rsc_defaults $id="rsc-options" \
        resource-stickiness="100"
10) Test whether the highly available MySQL works
First check the cluster status
crm(live)# status
Last updated: Sun Oct 12 14:27:51 2014
Last change: Sun Oct 12 14:27:47 2014 via crm_attribute on node3
Stack: classic openais (with plugin)
Current DC: node3 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
5 Resources configured

Online: [ node3 node4 ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ node3 ]
     Slaves: [ node4 ]
 mystore        (ocf::heartbeat:Filesystem):    Started node3
 myip   (ocf::heartbeat:IPaddr):        Started node3
Both node3 and node4 are online; node3 is currently the master and all resources are running on it. Next, check whether MySQL has been started on node3.
MySQL is indeed running on node3.
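One way to verify this, for example:
[root@node3 ~]# service mysqld status
[root@node3 ~]# ss -tnlp | grep 3306            # mysqld should be listening on port 3306 on node3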
Now simulate node3 going offline
[root@node3 ~]# crm
crm(live)# node
crm(live)node# standby
crm(live)node#
Check the status of the MySQL HA cluster again
[root@node3 ~]# crm status
Last updated: Sun Oct 12 14:39:38 2014
Last change: Sun Oct 12 14:39:22 2014 via crm_attribute on node3
Stack: classic openais (with plugin)
Current DC: node3 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
5 Resources configured

Node node3: standby
Online: [ node4 ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ node4 ]
     Stopped: [ node3 ]
 mystore        (ocf::heartbeat:Filesystem):    Started node4
 myip   (ocf::heartbeat:IPaddr):        Started node4
node4 has become the master, node3 is in standby, and all resources have moved over to node4.
MySQL is now up and running on node4.
11) Test whether data can be written
Grant access to the client IP range
[root@node4 ~]# /usr/local/mysql/bin/mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 5.5.36-MariaDB-log MariaDB Server

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> grant all on *.* to [email protected]"172.16.16.%" identified by "123456";
Query OK, 0 rows affected (0.12 sec)
Connect to the database from another host to test
From the host stu16 (IP 172.16.16.1) the connection succeeds, and data can be queried and created.
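A minimal test from stu16 might look like the following, connecting through the VIP 172.16.16.8 with the account granted above (a mysql client is assumed to be installed on stu16, and the user name here is only a placeholder):
[root@stu16 ~]# mysql -u <user> -p -h 172.16.16.8
MariaDB [(none)]> create database testdb;
MariaDB [(none)]> show databases;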
If status reports errors for a resource, first go into resource mode and run cleanup on the affected service. If you want to edit it in configure mode, stop the service first (also in resource mode), then run cleanup, and after that you can edit and save.
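For example (resource names as defined earlier):
[root@node3 ~]# crm
crm(live)# resource
crm(live)resource# stop mysqld          # only needed if you intend to edit the definition
crm(live)resource# cleanup mysqld       # clear the failed-action records
crm(live)resource# up
crm(live)# configure
crm(live)configure# edit mysqld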
Note:
If you kill the service process, it will not be restarted automatically. Because the node itself has not failed, the resource does not migrate either; by default Pacemaker does not monitor any resource. So even if a resource goes down, as long as the node is healthy the resource stays where it is. To get failover in that case, you have to define a monitor operation.
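For example, if mysqld were defined with a monitor operation from the start, it would look roughly like this (interval and timeout values are only illustrative):
crm(live)configure# primitive mysqld lsb:mysqld op monitor interval=20s timeout=20s
crm(live)configure# verify
crm(live)configure# commit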
OK, that wraps up our highly available MySQL cluster.