Requirements:
1. The two nodes must be able to communicate directly on the same network segment.
2. Each node's name must match the output of `uname -n`, and the name must resolve to that node's IP address; configure this in the local /etc/hosts.
3. Mutual (passwordless) SSH trust between the nodes.
4. The clocks on the nodes must be kept in sync.
Environment preparation:
test1, 192.168.10.55:
1. Configure the IP address
[root@test1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
2. Configure the hostname
[root@test1 ~]# uname -n
[root@test1 ~]# hostname master1.local        # takes effect immediately, lost on reboot
[root@test1 ~]# vim /etc/sysconfig/network    # permanent
3. Configure hostname resolution
[root@test1 ~]# vim /etc/hosts
Add:
192.168.10.55 master1.local
192.168.10.56 master2.local
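Each /etc/hosts line is an IP address followed by whitespace-separated names; leaving out the whitespace is an easy mistake to make when pasting entries. A minimal sanity-check sketch (the helper name `check_hosts_entry` is hypothetical, not part of any tool used here):

```shell
# check_hosts_entry: succeed only if a line looks like "IP name [alias ...]"
check_hosts_entry() {
    echo "$1" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+[[:space:]]+[A-Za-z0-9._-]+'
}

check_hosts_entry '192.168.10.55 master1.local' && echo ok
check_hosts_entry '192.168.10.55master1.local' || echo 'missing whitespace'
```

Running it prints `ok` for the well-formed entry and `missing whitespace` for the broken one.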
3.2. Test hostname resolution
[root@test1 ~]# ping master1.local
[root@test1 ~]# ping master2.local
4. Configure mutual SSH trust
[root@test1 ~]# ssh-keygen -t rsa -P ''
[root@test1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@192.168.10.56
5. Sync time with ntp
Add a crontab entry that runs ntpdate every 5 minutes so the server clocks stay in sync:
[root@test1 ~]# crontab -e
*/5 * * * * /sbin/ntpdate 192.168.10.1 &> /dev/null
test2, 192.168.10.56:
1. Configure the IP address
[root@test2 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
2. Configure the hostname
[root@test2 ~]# uname -n
[root@test2 ~]# hostname test2.local          # takes effect immediately, lost on reboot
[root@test2 ~]# vim /etc/sysconfig/network    # permanent
3. Configure hostname resolution
[root@test2 ~]# vim /etc/hosts
Add:
192.168.10.55 test1.local test1
192.168.10.56 test2.local test2
3.2. Test hostname resolution
[root@test2 ~]# ping test1.local
[root@test2 ~]# ping test1
4. Configure mutual SSH trust
[root@test2 ~]# ssh-keygen -t rsa -P ''
[root@test2 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@192.168.10.55
5. Sync time with ntp
Add a crontab entry that runs ntpdate every 5 minutes so the server clocks stay in sync:
[root@test2 ~]# crontab -e
*/5 * * * * /sbin/ntpdate 192.168.10.1 &> /dev/null
Installing and configuring heartbeat
A plain `yum install heartbeat` on CentOS fails: no package is available in the stock repositories.
Fix: install the EPEL repository first:
[root@test1 src]# wget http://mirrors.sohu.com/fedora-epel/6/i386/epel-release-6-8.noarch.rpm
[root@test1 src]# rpm -ivh epel-release-6-8.noarch.rpm
6.1. Install heartbeat:
[root@test1 src]# yum install heartbeat
6.2. Copy the sample configuration files:
[root@test1 src]# cp /usr/share/doc/heartbeat-3.0.4/{ha.cf,authkeys,haresources} /etc/ha.d/
6.3. Configure the authentication file:
[root@test1 src]# dd if=/dev/random count=1 bs=512 | md5sum    # generate a random key
[root@test1 src]# vim /etc/ha.d/authkeys
auth 1
1 md5 d0f70c79eeca5293902aiamheartbeat
[root@test1 src]# chmod 600 authkeys
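The two manual steps above (generate a random key, paste it into authkeys) can be combined into one small script. A sketch, with two assumptions not in the original: it reads /dev/urandom instead of /dev/random so it never blocks, and it prints the file body rather than writing /etc/ha.d/authkeys so it is safe to run anywhere:

```shell
# Build an authkeys body with a freshly generated md5 key.
key=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | awk '{print $1}')
printf 'auth 1\n1 md5 %s\n' "$key"
```

When writing the real file, remember the `chmod 600`: heartbeat refuses to start with a group- or world-readable authkeys.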
The heartbeat installation on test2 is identical to test1 and is omitted here.
6.4. Main heartbeat configuration parameters:
[root@test1 ~]# vim /etc/ha.d/ha.cf
#debugfile /var/log/ha-debug   # debug log
logfile            # log file location
keepalive 2        # how often to send a heartbeat; defaults to 2 seconds, ms can also be used
deadtime 30        # how long without a heartbeat before the peer is declared dead
warntime 10        # how long without a heartbeat before a late-heartbeat warning is logged
initdead 120       # how long the first node to come up waits for the other nodes
baud 19200         # transmission rate of the serial link
auto_failback on   # whether resources move back after the failed node recovers
ping 10.10.10.254  # ping node: which host to ping to verify a node's own connectivity
ping_group group1 10.10.10.254 10.10.10.253   # ping node group: counts as reachable if any member answers
respawn hacluster /usr/lib/heartbeat/ipfail   # restart the listed program if it stops
deadping 30        # how long the ping nodes may be unreachable before it counts as a real failure
# serial serialportname ...   # which serial device to use
serial /dev/ttyS0  # Linux
serial /dev/cuaa0  # FreeBSD
serial /dev/cuad0  # FreeBSD 6.x
serial /dev/cua/a  # Solaris
# What interfaces to broadcast heartbeats over?
# On Ethernet, define whether heartbeats are sent by broadcast, multicast or unicast
bcast eth0                     # broadcast
mcast eth0 225.0.0.1 694 1 0   # multicast
ucast eth0 192.168.1.2         # unicast; only used with exactly two nodes
# define stonith hosts
stonith_host * baytech 10.0.0.3 mylogin mysecretpassword
stonith_host ken3 rps10 /dev/ttyS1 kathy 0
stonith_host kathy rps10 /dev/ttyS1 ken3 0
# Tell what machines are in the cluster
# one "node <hostname>" line per cluster node; the hostname must match `uname -n`
node ken3
node kathy
In most cases you only need to define how heartbeats are sent and which nodes are in the cluster:
bcast eth0
node test1.local
node test2.local
6.5. Define resources in haresources:
[root@test1 ~]# vim /etc/ha.d/haresources
#node1 10.0.0.170 Filesystem::/dev/sda1::/data1::ext2
#   Format: hostname of the default primary node (must match `uname -n`), the VIP,
#   then the resources: which device to mount, where, and with which filesystem type.
#   Arguments to a resource agent are separated by double colons.
#just.linux-ha.org 135.9.216.110 http
#   Same format; "http" here is an init script. Heartbeat looks for resource scripts
#   in /etc/ha.d/resource.d/ first and falls back to /etc/rc.d/init.d/.
master1.local IPaddr::192.168.10.2/24/eth0 mysqld
master1.local IPaddr::192.168.10.2/24/eth0 drbddisk::data Filesystem::/dev/drbd1::/data::ext3 mysqld
# The IPaddr script configures the VIP
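The double-colon convention makes a resource spec easy to pick apart mechanically; splitting one of the Filesystem entries above shows its anatomy (a sketch, not a heartbeat tool):

```shell
# Split a haresources resource spec on "::" into agent + arguments
spec='Filesystem::/dev/drbd1::/data::ext3'
echo "$spec" | awk -F'::' '{printf "agent=%s device=%s mountpoint=%s fstype=%s\n", $1, $2, $3, $4}'
```

This prints `agent=Filesystem device=/dev/drbd1 mountpoint=/data fstype=ext3`: agent name first, then the agent's positional arguments in order.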
6.6. Copy master1.local's configuration files to master2.local
[root@test1 ~]# scp -p ha.cf haresources authkeys master2.local:/etc/ha.d/
7. Start heartbeat
[root@test1 ~]# service heartbeat start
[root@test1 ~]# ssh master2.local 'service heartbeat start'   # always start the peer node's heartbeat over SSH from test1
7.1. Check the heartbeat startup log
[root@test1 ~]# tail -f /var/log/messages
Feb 16 15:12:45 test-1 heartbeat: [16056]: info: Configuration validated. Starting heartbeat 3.0.4
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: heartbeat: version 3.0.4
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Heartbeat generation: 1455603909
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: glib: UDP Broadcast heartbeat started on port 694 (694) interface eth0
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: glib: UDP Broadcast heartbeat closed on port 694 interface eth0 - Status: 1
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: glib: ping heartbeat started.
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: G_main_add_TriggerHandler: Added signal manual handler
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: G_main_add_TriggerHandler: Added signal manual handler
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Local status now set to: 'up'
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Link 192.168.10.1:192.168.10.1 up.
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Status update for node 192.168.10.1: status ping
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Link test1.local:eth0 up.
Feb 16 15:12:51 test-1 heartbeat: [16057]: info: Link test2.local:eth0 up.
Feb 16 15:12:51 test-1 heartbeat: [16057]: info: Status update for node test2.local: status up
Feb 16 15:12:51 test-1 harc(default)[16068]: info: Running /etc/ha.d//rc.d/status status
Feb 16 15:12:52 test-1 heartbeat: [16057]: WARN: 1 lost packet(s) for [test2.local] [3:5]
Feb 16 15:12:52 test-1 heartbeat: [16057]: info: No pkts missing from test2.local!
Feb 16 15:12:52 test-1 heartbeat: [16057]: info: Comm_now_up(): updating status to active
Feb 16 15:12:52 test-1 heartbeat: [16057]: info: Local status now set to: 'active'
Feb 16 15:12:52 test-1 heartbeat: [16057]: info: Status update for node test2.local: status active
Feb 16 15:12:52 test-1 harc(default)[16086]: info: Running /etc/ha.d//rc.d/status status
Feb 16 15:13:02 test-1 heartbeat: [16057]: info: local resource transition completed.
Feb 16 15:13:02 test-1 heartbeat: [16057]: info: Initial resource acquisition complete (T_RESOURCES(us))
Feb 16 15:13:02 test-1 heartbeat: [16057]: info: remote resource transition completed.
Feb 16 15:13:02 test-1 /usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_192.168.10.2)[16138]: INFO: Resource is stopped
Feb 16 15:13:02 test-1 heartbeat: [16102]: info: Local Resource acquisition completed.
Feb 16 15:13:02 test-1 harc(default)[16219]: info: Running /etc/ha.d//rc.d/ip-request-resp ip-request-resp
Feb 16 15:13:02 test-1 ip-request-resp(default)[16219]: received ip-request-resp IPaddr::192.168.10.2/24/eth0 OK yes
Feb 16 15:13:02 test-1 ResourceManager(default)[16238]: info: Acquiring resource group: test1.local IPaddr::192.168.10.2/24/eth0 mysqld
Feb 16 15:13:02 test-1 /usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_192.168.10.2)[16264]: INFO: Resource is stopped
Feb 16 15:13:03 test-1 ResourceManager(default)[16238]: info: Running /etc/ha.d/resource.d/IPaddr 192.168.10.2/24/eth0 start
Feb 16 15:13:03 test-1 IPaddr(IPaddr_192.168.10.2)[16386]: INFO: Adding inet address 192.168.10.2/24 with broadcast address 192.168.10.255 to device eth0
Feb 16 15:13:03 test-1 IPaddr(IPaddr_192.168.10.2)[16386]: INFO: Bringing device eth0 up
Feb 16 15:13:03 test-1 IPaddr(IPaddr_192.168.10.2)[16386]: INFO: /usr/libexec/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-192.168.10.2 eth0 192.168.10.2 auto not_used not_used
Feb 16 15:13:03 test-1 /usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_192.168.10.2)[16360]: INFO: Success
Feb 16 15:13:03 test-1 ResourceManager(default)[16238]: info: Running /etc/init.d/mysqld start
Feb 16 15:13:04 test-1 ntpd[1605]: Listen normally on 15 eth0 192.168.10.2 UDP 123
Notes:
1. "Link test1.local:eth0 up" / "Link test2.local:eth0 up": both node links connected and came up.
2. "Link 192.168.10.1:192.168.10.1 up": the ping node is also up.
3. "info: Running /etc/init.d/mysqld start": mysql was started successfully.
4. "Listen normally on 15 eth0 192.168.10.2 UDP 123": the VIP is up.
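These eyeball checks can be scripted by grepping the log. A self-contained sketch that counts the "Link ... up" events, using a few of the sample lines quoted above (saved to a temp file so the example runs anywhere; against a live node you would grep /var/log/messages instead):

```shell
# Save some of the log lines shown above and count the links that came up.
cat > /tmp/hb_sample.log <<'EOF'
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Link 192.168.10.1:192.168.10.1 up.
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Link test1.local:eth0 up.
Feb 16 15:12:51 test-1 heartbeat: [16057]: info: Link test2.local:eth0 up.
EOF
links_up=$(grep -c 'info: Link .* up\.' /tmp/hb_sample.log)
echo "links up: $links_up"
```

This prints `links up: 3`: both node links plus the ping node, matching the notes above.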
7.2. Check heartbeat's VIP
[root@test1 ha.d]# ip add | grep "10.2"
inet 192.168.10.55/24 brd 192.168.10.255 scope global eth0
inet 192.168.10.2/24 brd 192.168.10.255 scope global secondary eth0
[root@test2 ha.d]# ip add | grep "10.2"
inet 192.168.10.56/24 brd 192.168.10.255 scope global eth0
Note: the VIP currently lives on master1.local; master2.local does not hold it.
8. Testing
8.1. Connect to mysql under normal conditions
[root@test1 ha.d]# mysql -uroot -h'192.168.10.2' -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.5.44 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show variables like 'server_id';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 1     |
+---------------+-------+
1 row in set (0.00 sec)
mysql>
8.2. Stop heartbeat on master1.local
[root@test1 ha.d]# service heartbeat stop
Stopping High-Availability services: Done.
[root@test1 ha.d]# ip add | grep "192.168.10.2"
inet 192.168.10.55/24 brd 192.168.10.255 scope global eth0
[root@test2 ha.d]# ip add | grep "192.168.10.2"
inet 192.168.10.56/24 brd 192.168.10.255 scope global eth0
inet 192.168.10.2/24 brd 192.168.10.255 scope global secondary eth0
Note: the VIP has now floated to master2.local. Check server_id over the VIP again:
[root@test1 ha.d]# mysql -uroot -h'192.168.10.2' -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.5.44 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show variables like 'server_id';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 2     |
+---------------+-------+
1 row in set (0.00 sec)
mysql>
Note: server_id changed from 1 to 2, proving we are now talking to the mysql service on master2.local.
Testing complete. Next, configure drbd so the two mysql servers share one filesystem, making mysql writes highly available as well.
9. Configure DRBD
DRBD (Distributed Replicated Block Device) is a Linux kernel module that mirrors block devices between hosts. As a disk mirror it is strictly primary/secondary: the two nodes may never write at the same time; only one node can read and write, and the secondary can neither write nor even mount the device. DRBD does, however, have a dual-primary mode, and the primary and secondary roles can be switched. DRBD takes a disk or partition on each of two hosts and turns the pair into a mirrored device: when a client program writes to the primary node, the data is replicated bit-for-bit over TCP/IP to the peer, so whatever is stored on the primary is guaranteed to exist, bit-for-bit identical, on the secondary. Because this happens across two hosts, DRBD has to work inside the kernel; it differs from RAID1, whose mirror lives within a single host.
How the dual-primary model is made to work: when a node accesses data it loads data and metadata into memory, and the locks its kernel takes on a file are invisible to the other node's kernel; dual-primary only works if each node can propagate its own locks to the peer's kernel. That requires a messaging layer (heartbeat or corosync both work) plus pacemaker (with DRBD defined as a resource), and the mirrored devices on both hosts must be formatted with a cluster filesystem (GFS2/OCFS2).
In other words, dual-primary is built on a distributed lock manager (DLM) combined with a cluster filesystem. A DRBD cluster always has exactly two nodes, either dual-primary or primary/secondary.
9.1. DRBD's three replication protocols
A. Success is returned to the application as soon as the write completes on the local DRBD: the asynchronous model. Fast and efficient, but the data is not safe.
B. Success is returned only after the local write completes and all the data has been sent to the peer DRBD over TCP/IP: the semi-synchronous model. Rarely used.
C. Success is returned only after the local write completes, the data has been sent to the peer over TCP/IP, and the peer DRBD has written it to disk as well: the synchronous model. The slowest and least performant, but the data is safest and most reliable; this is the most widely used protocol.
9.2. What makes up a DRBD resource
1. A resource name: any ASCII string that contains no spaces.
2. A DRBD device: the device file for this resource on both nodes, normally /dev/drbdN; the major number is always 147, and the minor number distinguishes the devices.
3. The disk configuration: the backing storage each node provides; it can be a partition, any other type of block device, or an LVM volume.
4. The network configuration: the network properties used when the two sides synchronize data.
9.3. Install DRBD
DRBD was only merged into the mainline kernel in 2.6.33.
9.3.1. Download drbd
[root@test1 ~]# wget -P /usr/local/src http://oss.linbit.com/drbd/8.4/drbd-8.4.3.tar.gz
9.3.2. Build and install the drbd userland
[root@test1 ~]# cd /usr/local/src
[root@test1 src]# tar -zxvf drbd-8.4.3.tar.gz
[root@test1 src]# cd /usr/local/src/drbd-8.4.3
[root@test1 drbd-8.4.3]# ./configure --prefix=/usr/local/drbd --with-km
[root@test1 drbd-8.4.3]# make KDIR=/usr/src/kernels/2.6.32-573.18.1.el6.x86_64
[root@test1 drbd-8.4.3]# make install
[root@test1 drbd-8.4.3]# mkdir -p /usr/local/drbd/var/run/drbd
[root@test1 drbd-8.4.3]# cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/rc.d/init.d/
9.3.3. Build and load the drbd kernel module
[root@test1 drbd-8.4.3]# cd drbd/
[root@test1 drbd]# make clean
[root@test1 drbd]# make KDIR=/usr/src/kernels/2.6.32-573.18.1.el6.x86_64
[root@test1 drbd]# cp drbd.ko /lib/modules/`uname -r`/kernel/lib/
[root@test1 drbd]# modprobe drbd
[root@test1 drbd]# lsmod | grep drbd
9.3.4. Create a new partition for drbd
[root@test1 drbd]# fdisk /dev/sdb
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u').
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1305, default 1305): +9G
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy. The kernel still uses the old table. The new table will be used at the next reboot.
Syncing disks.
[root@test1 drbd]# partprobe /dev/sdb
The drbd installation and partitioning on test2 are the same as on test1 and are omitted. Make sure test2's drbd configuration files match test1's exactly; just copy them over to test2 with scp.
10. Configure drbd
10.1. Configure drbd's common configuration file
[root@test1 drbd.d]# cd /usr/local/drbd/etc/drbd.d
[root@test1 drbd.d]# vim global_common.conf
global {               # global settings
    usage-count no;    # whether to report to LINBIT's counter of how many installations use drbd
    # minor-count dialog-refresh disable-ip-verification
}
common {               # shared settings: default properties applied to all resources
    protocol C;        # use protocol C, the synchronous model, by default
    handlers {         # handlers: drbd's failure/failover actions
        # These are EXAMPLE handlers only.
        # They may have severe implications,
        # like hard resetting the node under certain circumstances.
        # Be careful when chosing your poison.
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";   # action after a split brain
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";               # action after a local I/O error
        # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
        # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
        # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
    }
    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        # wait/timeout settings for the two nodes' initial synchronization at device startup
    }
    options {
        # cpu-mask on-no-data-accessible
    }
    disk {
        on-io-error detach;   # on an I/O error, detach the disk and stop replicating to it
        # size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes
        # disk-drain md-flushes resync-rate resync-after al-extents
        # c-plan-ahead c-delay-target c-fill-target c-max-rate
        # c-min-rate disk-timeout
    }
    net {   # network buffer/cache sizes, initialization times, and the like
        # protocol timeout max-epoch-size max-buffers unplug-watermark
        # connect-int ping-int sndbuf-size rcvbuf-size ko-count
        # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
        # after-sb-1pri after-sb-2pri always-asbp rr-conflict
        # ping-timeout data-integrity-alg tcp-cork on-congestion
        # congestion-fill congestion-extents csums-alg verify-alg
        # use-rle
        cram-hmac-alg "sha1";            # algorithm used for peer authentication
        shared-secret "mydrbd1fa2jg8";   # shared secret
    }
    syncer {
        rate 200M;   # resynchronization rate
    }
}
10.2. Configure the resource file; its file name must match the resource name inside it
[root@test1 drbd.d]# vim mydrbd.res
resource mydrbd {                 # resource name: any ASCII string without spaces
    on test1.local {              # node 1; each node must be able to resolve the peer by name
        device /dev/drbd0;        # name of the drbd device file
        disk /dev/sdb1;           # the backing partition
        address 192.168.10.55:7789;   # node IP and listening port
        meta-disk internal;       # where drbd keeps its metadata; internal = on the device itself
    }
    on test2.local {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.10.56:7789;
        meta-disk internal;
    }
}
10.3. Both nodes use identical configuration files, so copy them to the other node
[root@test1 drbd.d]# scp -r /usr/local/drbd/etc/drbd.* test2.local:/usr/local/drbd/etc/
10.4. Initialize the defined resource on each node
[root@test1 drbd.d]# drbdadm create-md mydrbd
--== Thank you for participating in the global usage survey ==--
The server's response is:
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
[root@test2 drbd.d]# drbdadm create-md mydrbd
--== Thank you for participating in the global usage survey ==--
The server's response is:
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
10.5. Start the drbd service on both nodes
[root@test1 drbd.d]# service drbd start
[root@test2 drbd.d]# service drbd start
11. Test drbd synchronization
11.1. Check drbd's startup state
[root@test1 drbd.d]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@test1.local, 2016-02-23 10:23:03
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
Note: both nodes are currently secondary (one will be promoted to primary later), and "Inconsistent" means the devices have not yet been synchronized.
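The fields worth watching in that line are cs (connection state), ro (roles, local/peer) and ds (disk states, local/peer). They can be pulled out programmatically; a minimal sketch over the status line shown above:

```shell
# Extract cs/ro/ds from a /proc/drbd status line
line='0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----'
for field in cs ro ds; do
    value=$(echo "$line" | grep -o "$field:[^ ]*" | cut -d: -f2-)
    echo "$field=$value"
done
```

This prints `cs=Connected`, `ro=Secondary/Secondary`, `ds=Inconsistent/Inconsistent`, which is exactly the state described in the note above.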
11.2. Promote one node to primary, overwriting the peer's drbd partition data; run this on the node to be promoted
[root@test1 drbd.d]# drbdadm -- --overwrite-data-of-peer primary mydrbd
11.3. Watch the primary's synchronization progress
[root@test1 drbd.d]# watch -n 1 cat /proc/drbd
Every 1.0s: cat /proc/drbd    Tue Feb 23 17:10:55 2016
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@test1.local, 2016-02-23 10:23:03
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-
    ns:619656 nr:0 dw:0 dr:627840 al:0 bm:37 lo:1 pe:8 ua:64 ap:0 ep:1 wo:b oos:369144
    [=============>.......] sync'ed: 10.3% (369144/987896)K
    finish: 0:00:12 speed: 25,632 (25,464) K/sec
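The `finish:` estimate in that display is essentially the remaining out-of-sync data divided by the current sync speed. Using the numbers from the sample output above (oos in KiB, speed in KiB/s):

```shell
# Rough resync ETA: remaining out-of-sync KiB / current sync speed
oos=369144    # from "oos:369144" in the status line above
speed=25632   # from "speed: 25,632 K/sec" above
echo "about $(( oos / speed )) seconds left"
```

This prints `about 14 seconds left`, in the same ballpark as the `finish: 0:00:12` shown (the kernel uses a smoothed rate, so the figures rarely match exactly).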
11.4. Check the secondary's state
[root@test2 drbd]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@test2.local, 2016-02-22 16:05:34
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:4 nr:9728024 dw:9728028 dr:1025 al:1 bm:577 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
11.5. On the primary, create a filesystem on the partition, mount it, and write some test data
[root@test1 drbd]# mke2fs -j /dev/drbd0
[root@test1 drbd]# mkdir /mydrbd
[root@test1 drbd]# mount /dev/drbd0 /mydrbd
[root@test1 drbd]# cd /mydrbd
[root@test1 mydrbd]# touch drbd_test_file
[root@test1 mydrbd]# ls /mydrbd/
drbd_test_file  lost+found
11.6. Demote the primary to secondary, promote the secondary to primary, and check that the data was replicated
11.6.1. On the (current) primary
11.6.1.1. Unmount the partition; leave the mounted directory first, otherwise the device shows as busy and cannot be unmounted
[root@test1 mydrbd]# cd ~
[root@test1 ~]# umount /mydrbd
[root@test1 ~]# drbdadm secondary mydrbd
11.6.1.2. Check the current drbd state
[root@test1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@test1.local, 2016-02-22 16:05:34
 0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
    ns:4 nr:9728024 dw:9728028 dr:1025 al:1 bm:577 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
Note: both drbd nodes are now secondary; next, promote the former secondary to primary.
11.6.2. On the (former) secondary
11.6.2.1. Promote it to primary
[root@test2 ~]# drbdadm primary mydrbd
11.6.2.2. Mount the drbd partition
[root@test2 ~]# mkdir /mydrbd
[root@test2 ~]# mount /dev/drbd0 /mydrbd/
11.6.2.3. Check that the data is there
[root@test2 ~]# ls /mydrbd/
drbd_test_file  lost+found
Note: after the secondary was switched to primary, the data was already replicated. The drbd setup is complete. Next, combine it with corosync + mysql for a highly available dual-master configuration.
12. Combine corosync + drbd + mysql for a highly available dual-master database
Configure drbd as a resource in a two-node corosync HA cluster, so that the primary/secondary roles switch automatically. Note: to manage any service as a cluster resource, that service must never be started automatically at boot.
12.1. Disable drbd's start-at-boot on both nodes
12.1.1. On the primary
[root@test1 drbd.d]# chkconfig drbd off
[root@test1 drbd.d]# chkconfig --list | grep drbd
drbd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
12.1.2. On the secondary
[root@test2 drbd.d]# chkconfig drbd off
[root@test2 drbd.d]# chkconfig --list | grep drbd
drbd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
12.2. Unmount the drbd filesystem and demote the current primary to secondary
12.2.1. On test2; note that this node, the former secondary, was promoted earlier and is now being demoted again
[root@test2 drbd]# umount /mydrbd/
[root@test2 drbd]# drbdadm secondary mydrbd
[root@test2 drbd]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@test2.local, 2016-02-22 16:05:34
 0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
    ns:8 nr:9728024 dw:9728032 dr:1073 al:1 bm:577 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
Note: make sure both nodes are secondary.
12.3. Stop the drbd service on both nodes
12.3.1. On test2
[root@test2 drbd]# service drbd stop
Stopping all DRBD resources: .
12.3.2. On test1
[root@test1 drbd.d]# service drbd stop
Stopping all DRBD resources: .
12.4. Install corosync and create its log directory
12.4.1. On the primary
[root@test1 drbd.d]# wget -P /etc/yum.repos.d http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
[root@test1 drbd.d]# yum install corosync pacemaker crmsh
[root@test1 drbd.d]# mkdir /var/log/cluster
12.4.2. On the secondary
[root@test2 drbd.d]# wget -P /etc/yum.repos.d http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
[root@test2 drbd.d]# mkdir /var/log/cluster
[root@test2 drbd.d]# yum install corosync pacemaker crmsh
12.5. The corosync configuration file
12.5.1. On the primary
[root@test1 drbd.d]# cd /etc/corosync/
[root@test1 corosync]# cp corosync.conf.example corosync.conf
12.6. Edit the primary's configuration file, generate the corosync auth key, and copy both to the secondary
12.6.1. On the primary
[root@test1 corosync]# vim corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {
    version: 2
    # secauth: Enable mutual node authentication. If you choose to
    # enable this ("on"), then do remember to create a shared
    # secret with "corosync-keygen".
    secauth: on
    threads: 2
    # interface: define at least one interface to communicate
    # over. If you define more than one interface stanza, you must
    # also set rrp_mode.
    interface {
        # Rings must be consecutively numbered, starting at 0.
        ringnumber: 0
        # This is normally the *network* address of the
        # interface to bind to. This ensures that you can use
        # identical instances of this configuration file
        # across all your cluster nodes, without having to
        # modify this option.
        # However, if you have multiple physical network
        # interfaces configured for the same subnet, then the
        # network address alone is not sufficient to identify
        # the interface Corosync should bind to. In that case,
        # configure the *host* address of the interface instead.
        bindnetaddr: 192.168.10.0
        # When selecting a multicast address, consider RFC
        # 2365 (which, among other things, specifies that
        # 239.255.x.x addresses are left to the discretion of
        # the network administrator). Do not reuse multicast
        # addresses across multiple Corosync clusters sharing
        # the same network.
        mcastaddr: 239.212.16.19
        # Corosync uses the port you specify here for UDP
        # messaging, and also the immediately preceding
        # port. Thus if you set this to 5405, Corosync sends
        # messages over UDP ports 5405 and 5404.
        mcastport: 5405
        # Time-to-live for cluster communication packets. The
        # number of hops (routers) that this ring will allow
        # itself to pass. Note that multicast routing must be
        # specifically enabled on most network routers.
        ttl: 1   # cluster packets must not cross a router
    }
}
logging {
    # Log the source file and line where messages are being
    # generated. When in doubt, leave off.
    fileline: off
    # Log to standard error. Useful when running in the foreground
    # (when invoking "corosync -f").
    to_stderr: no
    # Log to a log file. When set to "no", the "logfile" option
    # must not be set.
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    # Log to the system log daemon. When in doubt, set to yes.
    to_syslog: no
    # Log debug messages (very verbose). When in doubt, leave off.
    debug: off
    # Log messages with time stamps.
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}
service {
    ver: 0
    name: pacemaker
}
aisexec {
    user: root
    group: root
}
[root@test1 corosync]# corosync-keygen
[root@test1 corosync]# scp -p authkey corosync.conf test2.local:/etc/corosync/
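As the stock comments explain, `bindnetaddr` takes the *network* address of the subnet, not a host address. Given the node IP 192.168.10.55 with a /24 prefix, the value 192.168.10.0 used above can be derived by masking off the host bits; a small sketch:

```shell
# Compute the network address for bindnetaddr from host IP + prefix length
ip=192.168.10.55
prefix=24
IFS=. read -r o1 o2 o3 o4 <<EOF
$ip
EOF
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
net=$(( ((o1 << 24) | (o2 << 16) | (o3 << 8) | o4) & mask ))
echo "$(( (net >> 24) & 255 )).$(( (net >> 16) & 255 )).$(( (net >> 8) & 255 )).$(( net & 255 ))"
```

This prints `192.168.10.0`, which is why the identical corosync.conf works unchanged on both 192.168.10.55 and 192.168.10.56.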
12.7. Start corosync
12.7.1. On the primary (note: corosync on both nodes is started from the primary)
[root@test1 corosync]# service corosync start
Starting Corosync Cluster Engine (corosync): [ OK ]
[root@test1 corosync]# ssh test2.local 'service corosync start'
root@test2.local's password:
Starting Corosync Cluster Engine (corosync): [ OK ]
12.8. Check the cluster status
12.8.1. Problem: after installation the system has no crm command, so crm's interactive mode cannot be used
[root@test1 corosync]# crm status
-bash: crm: command not found
Fix: install the ha-clustering yum repository, then install the crmsh package
[root@test1 corosync]# wget -P /etc/yum.repos.d/ http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
Reference: the "install crmsh" section of http://www.dwhd.org/20150530_014731.html
Once that is done, the crm command line works.
12.8.2. Check the cluster node status
[root@test1 corosync]# crm status
Last updated: Wed Feb 24 13:47:17 2016
Last change: Wed Feb 24 11:26:06 2016
Stack: classic openais (with plugin)
Current DC: test2.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
0 Resources configured
Online: [ test1.local test2.local ]
Full list of resources:
Note: both configured nodes are shown as online.
12.8.3. Check that pacemaker started
[root@test1 corosync]# grep pcmk_startup /var/log/cluster/corosync.log
Feb 24 11:05:15 corosync [pcmk ] info: pcmk_startup: CRM: Initialized
Feb 24 11:05:15 corosync [pcmk ] Logging: Initialized pcmk_startup
12.8.4. Check that the cluster engine started
[root@test1 corosync]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Feb 24 11:04:16 corosync [MAIN ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
Feb 24 11:04:16 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Note: the steps above confirm that corosync is running without problems.
12.9. Configure corosync's cluster properties
12.9.1. Disable STONITH devices so that verify does not complain
[root@test1 corosync]# crm configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# verify
crm(live)configure# commit
12.9.2. Keep services running when the cluster loses quorum
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# verify
crm(live)configure# commit
12.9.3. Set the default resource stickiness
crm(live)configure# rsc_defaults resource-stickiness=100
crm(live)configure# verify
crm(live)configure# commit
12.9.4. View the current configuration
crm(live)configure# show
node test1.local
node test2.local
property cib-bootstrap-options: dc-version=1.1.11-97629de cluster-infrastructure="classic openais (with plugin)" expected-quorum-votes=2 stonith-enabled=false no-quorum-policy=ignore
rsc_defaults rsc-options: resource-stickiness=100
12.10. Configure the cluster resources
12.10.1. Define the drbd resource
crm(live)configure# primitive mysqldrbd ocf:linbit:drbd params drbd_resource=mydrbd op start timeout=240 op stop timeout=100 op monitor role=Master interval=10 timeout=20 op monitor role=Slave interval=20 timeout=20
Note: ocf is the resource agent class, and linbit is the provider of the drbd agent. drbd_resource=mydrbd names the drbd resource. start timeout / stop timeout are the start and stop timeouts. "monitor role=Master" defines monitoring of the master (interval: how often to check, timeout: how long to wait), and "monitor role=Slave" does the same for the slave.
crm(live)configure# show    # view the configuration
node test1.local
node test2.local
primitive mydrbd ocf:linbit:drbd params drbd_resource=mydrbd op start timeout=240 interval=0 op stop timeout=100 interval=0 op monitor role=Master interval=10s timeout=20s op monitor role=Slave interval=20s timeout=20s
property cib-bootstrap-options: dc-version=1.1.11-97629de cluster-infrastructure="classic openais (with plugin)" expected-quorum-votes=2 stonith-enabled=false no-quorum-policy=ignore
rsc_defaults rsc-options: resource-stickiness=100
crm(live)configure# verify    # validate the configuration
12.10.2. Define the drbd master/slave resource
crm(live)configure# ms ms_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
Note: "ms ms_mydrbd mydrbd" turns mydrbd into a master/slave resource named ms_mydrbd. meta defines metadata attributes: master-max=1, at most one master instance; master-node-max=1, at most one master per node; clone-max=2, at most two clone instances in total; clone-node-max=1, how many clone instances may run on a single node.
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show
node test1.local
node test2.local
primitive mydrbd ocf:linbit:drbd params drbd_resource=mydrbd op start timeout=240 interval=0 op stop timeout=100 interval=0 op monitor role=Master interval=10s timeout=20s op monitor role=Slave interval=20s timeout=20s
ms ms_mydrbd mydrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1
property cib-bootstrap-options: dc-version=1.1.11-97629de cluster-infrastructure="classic openais (with plugin)" expected-quorum-votes=2 stonith-enabled=false no-quorum-policy=ignore
rsc_defaults rsc-options: resource-stickiness=100
Note: the configuration now contains one primitive resource (mydrbd) and one master/slave (ms) resource.
12.10.3. Check the cluster node status
crm(live)configure# cd
crm(live)# status
Last updated: Thu Feb 25 14:45:52 2016
Last change: Thu Feb 25 14:44:44 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
2 Resources configured
Online: [ test1.local test2.local ]
Full list of resources:
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test1.local ]
     Slaves: [ test2.local ]
Note: this is the healthy state; the master/slave roles are visible and test1 is the master.
A failure state looks like this:
Master/Slave Set: ms_mydrbd [mydrbd]
    mydrbd (ocf::linbit:drbd): FAILED test2.local (unmanaged)
    mydrbd (ocf::linbit:drbd): FAILED test1.local (unmanaged)
Failed actions:
    mydrbd_stop_0 on test2.local 'not configured' (6): call=87, status=complete, last-rc-change='Thu Feb 25 14:17:34 2016', queued=0ms, exec=34ms
    mydrbd_stop_0 on test1.local 'not configured' (6): call=72, status=complete, last-rc-change='Thu Feb 25 14:17:34 2016', queued=0ms, exec=34ms
Fix: when defining the resources, be careful that drbd_resource=mydrbd must name the actual drbd resource, that the master/slave resource's name must differ from the drbd resource's name, and that the various timeout values must not carry an "s" suffix. It took a long time of testing to track this down; if you see it differently, corrections are welcome.
12.10.4. Verify master/slave switching
[root@test1 ~]# crm node standby test1.local   # take the master offline
[root@test1 ~]# crm status
Note: the status shows that the master is no longer online and test2 has become the master
Last updated: Thu Feb 25 14:51:58 2016
Last change: Thu Feb 25 14:51:44 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
2 Resources configured
Node test1.local: standby
Online: [ test2.local ]
Full list of resources:
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test2.local ]
     Stopped: [ test1.local ]
[root@test1 ~]# crm node online test1.local   # bring test1 back online
[root@test1 ~]# crm status   # test2 remains the master and test1 has become the slave
Last updated: Thu Feb 25 14:52:55 2016
Last change: Thu Feb 25 14:52:39 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
2 Resources configured
Online: [ test1.local test2.local ]
Full list of resources:
 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test2.local ]
     Slaves: [ test1.local ]
12.10.5. Define the filesystem resource
[root@test1 ~]# crm
crm(live)# configure
crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/mydrbd fstype=ext3 op start timeout=60 op stop timeout=60
Note: this defines a resource named mystore using heartbeat's Filesystem agent; the params give the drbd device, the mount directory, the filesystem type, and the start/stop timeouts.
crm(live)configure# verify
12.10.6. Define a colocation constraint so that the Filesystem always runs together with the master node.
crm(live)configure# colocation mystore_withms_mysqldrbd inf: mystore ms_mysqldrbd:Master
crm(live)configure# verify
12.10.7、定义Order约束,以确保主从资源必须要先成为主节点以后才能挂在文件系统
crm(live)configure# order mystore_after_ms_mysqldrbd mandatory: ms_mysqldrbd:promote mystore:start
Note: this statement, mystore_after_ms_mysqldrbd mandatory:, says mystore starts after ms_mysqldrbd; mandatory makes the constraint required: ms_mysqldrbd must first be promoted (the role switch must succeed), and only then is mystore started.
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status    # the status shows the filesystem has been mounted automatically on the master node test1.local
Last updated: Thu Feb 25 15:29:39 2016
Last change: Thu Feb 25 15:29:36 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
3 Resources configured

Online: [ test1.local test2.local ]

Full list of resources:

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test1.local ]
     Slaves: [ test2.local ]
 mystore (ocf::heartbeat:Filesystem): Started test1.local

[root@test1 ~]# ls /mydrbd/    # the files created earlier are already there
inittab  lost+found
12.10.8. Switch the master and slave nodes and verify that the filesystem is mounted automatically
[root@test1 ~]# crm node standby test1.local
[root@test1 ~]# crm node online test1.local
[root@test1 ~]# crm status
Last updated: Thu Feb 25 15:32:39 2016
Last change: Thu Feb 25 15:32:36 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
3 Resources configured

Online: [ test1.local test2.local ]

Full list of resources:

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test2.local ]
     Slaves: [ test1.local ]
 mystore (ocf::heartbeat:Filesystem): Started test2.local

[root@test2 ~]# ls /mydrbd/
inittab  lost+found
12.9.12. Configure MySQL
12.9.12.1. Disable mysqld autostart at boot
[root@test1 ~]# chkconfig mysqld off
[root@test2 ~]# chkconfig mysqld off
# Remember: a service managed as a cluster resource must never be set to start at boot.
12.9.12.2. Configure the MySQL service on node 1 (MySQL installation is omitted here; we go straight to configuration)
[root@test1 mysql]# mkdir /mydrbd/data
[root@test1 mysql]# chown -R mysql.mysql /mydrbd/data/
[root@test1 mysql]# ./scripts/mysql_install_db --user=mysql --datadir=/mydrbd/data/ --basedir=/usr/local/mysql/
Installing MySQL system tables...
160225 16:07:12 [Note] /usr/local/mysql//bin/mysqld (mysqld 5.5.44) starting as process 18694 ...
OK
Filling help tables...
160225 16:07:18 [Note] /usr/local/mysql//bin/mysqld (mysqld 5.5.44) starting as process 18701 ...
OK
To start mysqld at boot time you have to copy support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/usr/local/mysql//bin/mysqladmin -u root password 'new-password'
/usr/local/mysql//bin/mysqladmin -u root -h test1.local password 'new-password'
Alternatively you can run:
/usr/local/mysql//bin/mysql_secure_installation
which will also give you the option of removing the test databases and anonymous user created by default. This is strongly recommended for production servers.
See the manual for more instructions.
You can start the MySQL daemon with:
cd /usr/local/mysql/ ; /usr/local/mysql//bin/mysqld_safe &
You can test the MySQL daemon with mysql-test-run.pl
cd /usr/local/mysql//mysql-test ; perl mysql-test-run.pl
Please report any problems at
[root@test1 mysql]# service mysqld start
Starting MySQL.....                                        [  OK  ]
[root@test1 mysql]# mysql -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.5.44 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.06 sec)

mysql> create database drbd_mysql;
Query OK, 1 row affected (0.00 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drbd_mysql         |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.00 sec)

mysql>
12.9.12.3. Configure MySQL on the slave node
Note: the MySQL data directory was already initialized on the shared storage from test1.local, so there is no need to initialize it again on test2.local.
[root@test1 mysql]# crm status
Last updated: Thu Feb 25 16:14:14 2016
Last change: Thu Feb 25 15:35:16 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
3 Resources configured

Online: [ test1.local test2.local ]

Full list of resources:

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test1.local ]
     Slaves: [ test2.local ]
 mystore (ocf::heartbeat:Filesystem): Started test1.local
1. First make test2.local the master so we can continue
[root@test1 mysql]# crm node standby test1.local
[root@test1 mysql]# crm node online test1.local
[root@test1 mysql]# crm status
Last updated: Thu Feb 25 16:14:46 2016
Last change: Thu Feb 25 16:14:30 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
3 Resources configured

Node test1.local: standby
Online: [ test2.local ]

Full list of resources:

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test2.local ]
     Stopped: [ test1.local ]
 mystore (ocf::heartbeat:Filesystem): Started test2.local
# Make sure test2.local has become the master node before continuing.
2. Configure MySQL on test2.local
[root@test2 ~]# vim /etc/my.cnf
Add:
datadir = /mydrbd/data
[root@test2 ~]# service mysqld start
Starting MySQL.                                            [  OK  ]
[root@test2 ~]# mysql -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.44 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drbd_mysql         |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.10 sec)

mysql>
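For reference, the only change /etc/my.cnf needs on each node is pointing the data directory at the DRBD mount; a minimal fragment (other settings are assumed to stay at whatever this installation already uses):

```
[mysqld]
datadir = /mydrbd/data
```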
12.9.13. Define the mysql resource
1. Stop MySQL
[root@test2 ~]# service mysqld stop
Shutting down MySQL.                                       [  OK  ]
2. Define the mysql resource
[root@test1 mysql]# crm configure
crm(live)configure# primitive mysqld lsb:mysqld
crm(live)configure# verify
3. Define the colocation constraint between mysql and the master node
crm(live)configure# colocation mysqld_with_mystore inf: mysqld mystore
# Note: since mystore always runs with the master node, constraining mysqld to mystore is enough.
crm(live)configure# verify
4. Define the start-order constraint between mysqld and mystore
crm(live)configure# order mysqld_after_mystore mandatory: mystore mysqld
# Get the startup order straight: mysqld starts only after mystore.
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
Last updated: Thu Feb 25 16:44:29 2016
Last change: Thu Feb 25 16:42:16 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
4 Resources configured

Online: [ test1.local test2.local ]

Full list of resources:

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test2.local ]
     Slaves: [ test1.local ]
 mystore (ocf::heartbeat:Filesystem): Started test2.local
 mysqld (lsb:mysqld): Started test2.local

Note: the master node is currently test2.local.
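The separate colocation and order constraints used for mystore and mysqld can alternatively be expressed as a resource group, since group members are implicitly colocated and started in listed order. A sketch only; the group and constraint names are made up, and this walkthrough keeps the explicit constraints instead:

```
crm(live)configure# group mysql_service mystore mysqld
crm(live)configure# colocation mysql_service_with_ms_mysqldrbd inf: mysql_service ms_mysqldrbd:Master
crm(live)configure# order mysql_service_after_ms_mysqldrbd mandatory: ms_mysqldrbd:promote mysql_service:start
```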
5. Verify that MySQL logins work on test2.local and keep working after a role switch
[root@test2 ~]# mysql -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.5.44 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;    # mysql is up on test2.local and the drbd resource was mounted automatically
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drbd_mysql         |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.07 sec)

mysql> create database mydb;    # create a database, then switch to test1.local and check that everything is still intact
Query OK, 1 row affected (0.00 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drbd_mysql         |
| mydb               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
6 rows in set (0.00 sec)

mysql> exit
[root@test1 mysql]# crm node standby test2.local    # put test2.local in standby so test1.local automatically becomes master
[root@test1 mysql]# crm node online test2.local
[root@test1 mysql]# crm status
Last updated: Thu Feb 25 16:53:24 2016
Last change: Thu Feb 25 16:53:19 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
4 Resources configured

Online: [ test1.local test2.local ]

Full list of resources:

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test1.local ]
     Slaves: [ test2.local ]
 mystore (ocf::heartbeat:Filesystem): Started test1.local    # test1.local is now the master
 mysqld (lsb:mysqld): Started test1.local
[root@test1 mysql]# mysql -uroot    # log in to mysql on test1.local
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.44 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drbd_mysql         |
| mydb               |    # the mydb database created on test2.local is here; so far everything is working
| mysql              |
| performance_schema |
| test               |
+--------------------+
6 rows in set (0.15 sec)

mysql>
12.9.14. Define the VIP resource and its constraints
crm(live)configure# primitive myip ocf:heartbeat:IPaddr params ip=192.168.10.3 nic=eth0 cidr_netmask=24
# Note: a mistake here wasted half a day: the netmask was first written as 255.255.255.0, but cidr_netmask must be the prefix length, 24. When you forget the syntax, lean on Tab completion and help.
crm(live)configure# verify
crm(live)configure# colocation myip_with_ms_mysqldrbd inf: ms_mysqldrbd:Master myip    # colocate the VIP with the master of ms_mysqldrbd
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status    # check the status
Last updated: Fri Feb 26 10:05:16 2016
Last change: Fri Feb 26 10:05:12 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
5 Resources configured

Online: [ test1.local test2.local ]

Full list of resources:

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test1.local ]
     Slaves: [ test2.local ]
 mystore (ocf::heartbeat:Filesystem): Started test1.local
 mysqld (lsb:mysqld): Started test1.local
 myip (ocf::heartbeat:IPaddr): Started test1.local    # the VIP is now running on test1.local
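The cidr_netmask mix-up above is easy to hit: the parameter takes a prefix length, not a dotted netmask. If you only remember the dotted form, converting it is mechanical; a small sketch (assumes a valid contiguous mask):

```shell
# Convert a dotted-quad netmask to its CIDR prefix length by summing
# the number of set bits contributed by each octet.
mask2prefix() {
  prefix=0
  IFS=.
  for octet in $1; do
    case $octet in
      255) prefix=$((prefix + 8)) ;;
      254) prefix=$((prefix + 7)) ;;
      252) prefix=$((prefix + 6)) ;;
      248) prefix=$((prefix + 5)) ;;
      240) prefix=$((prefix + 4)) ;;
      224) prefix=$((prefix + 3)) ;;
      192) prefix=$((prefix + 2)) ;;
      128) prefix=$((prefix + 1)) ;;
      0)   ;;
    esac
  done
  unset IFS
  echo "$prefix"
}

mask2prefix 255.255.255.0    # prints 24, the value cidr_netmask wants
```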
12.9.15. Test connections over the VIP
[root@test1 ~]# ip addr    # check whether the VIP is bound on test1.local
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:34:7d:9f brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.55/24 brd 192.168.10.255 scope global eth0
    inet 192.168.10.3/24 brd 192.168.10.255 scope global secondary eth0    # the VIP is bound to the eth0 interface on test1.local
    inet6 fe80::20c:29ff:fe34:7d9f/64 scope link
       valid_lft forever preferred_lft forever
[root@test2 ~]# mysql -uroot -h192.168.10.3 -p    # connect to mysql through the VIP; the connecting client must be granted access first, or the login fails
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 6
Server version: 5.5.44 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;    # both databases created earlier are listed below
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drbd_mysql         |
| mydb               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
6 rows in set (0.08 sec)
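Eyeballing `ip addr` works, but a scripted check is handy when repeating the failover test. A sketch that greps saved `ip -4 addr show` output for the VIP; on a node you would pipe the live command in instead:

```shell
# Report whether the given address appears as an inet entry on stdin.
vip_bound() {
  if grep -q "inet $1/"; then
    echo "VIP $1 present"
  else
    echo "VIP $1 absent"
  fi
}

# Sample of the relevant eth0 lines captured on test1.local above.
ip_output='inet 192.168.10.55/24 brd 192.168.10.255 scope global eth0
inet 192.168.10.3/24 brd 192.168.10.255 scope global secondary eth0'

printf '%s\n' "$ip_output" | vip_bound 192.168.10.3
```

On a live node the last line would be `ip -4 addr show eth0 | vip_bound 192.168.10.3`.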
Simulate a failure of test1.local:
[root@test1 mysql]# crm node standby test1.local    # put test1.local in standby and see whether the VIP moves automatically
[root@test1 mysql]# crm node online test1.local
[root@test1 mysql]# crm status
Last updated: Fri Feb 26 10:20:38 2016
Last change: Fri Feb 26 10:20:35 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
5 Resources configured

Online: [ test1.local test2.local ]

Full list of resources:

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test2.local ]
     Slaves: [ test1.local ]
 mystore (ocf::heartbeat:Filesystem): Started test2.local
 mysqld (lsb:mysqld): Started test2.local
 myip (ocf::heartbeat:IPaddr): Started test2.local    # the VIP has moved to test2.local
Check the IP information on test2:
[root@test2 ~]# ip addr    # check the VIP binding on test2.local
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:fd:7f:e5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.56/24 brd 192.168.10.255 scope global eth0
    inet 192.168.10.3/24 brd 192.168.10.255 scope global secondary eth0    # the VIP is now on test2.local's eth0 interface
    inet6 fe80::20c:29ff:fefd:7fe5/64 scope link
       valid_lft forever preferred_lft forever
Test the MySQL connection:
[root@test1 ~]# mysql -uroot -h192.168.10.3 -p    # connect to mysql through the VIP
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.44 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;    # everything works
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drbd_mysql         |
| mydb               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
6 rows in set (0.06 sec)
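While resources move between nodes, clients briefly lose the VIP, so a connection test run right after a standby may fail once or twice before succeeding. A small retry wrapper helps; this is a sketch, and the `mysqladmin ping` usage in the comment is one way to apply it, not a command from the original text:

```shell
# Retry a command up to N times, one second apart, until it succeeds,
# e.g.: wait_for 10 mysqladmin -h192.168.10.3 -uroot -p... ping
wait_for() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@"; then
      return 0    # command succeeded
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1        # gave up after N attempts
}

wait_for 3 true && echo "service is back"
```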
With this, the corosync+drbd+mysql setup is complete. The write-up is not exhaustive: it does not explain how corosync, heartbeat, drbd, and mysql fit together, nor how to handle or prevent split-brain. I will add those when I find the time.
This document drew on http://litaotao.blog.51cto.com/6224470/1303307.
The DRBD installation followed http://88fly.blog.163.com/blog/static/12268039020131113452222/