[Repost] (Multi-instance) mysql-mmm cluster

I. Requirements

I have been studying mysql-mmm recently and would like to use this architecture for our company's services. Our services are deployed as multiple MySQL instances per machine, so I wanted to deploy mysql-mmm the same way. The effort paid off and it works; here is the setup, shared for reference:

II. Environment

Cluster      Role     Hostname  IP             MySQL Port  Server ID  Writer VIP     Reader VIP
navy2        Agent    db1       172.28.26.101  3307        11         172.28.26.107  ?
navy2        Agent    db2       172.28.26.102  3307        22         ?              172.28.26.108
navy3        Agent    db1       172.28.26.101  3308        2          172.28.26.109  ?
navy3        Agent    db2       172.28.26.102  3308        1          ?              172.28.26.110
navy2/navy3  Monitor  Monitor   172.28.26.103  ?           ?          ?              ?

PS: db1 and db2 each run two MySQL instances, navy2 and navy3, configured as master-master pairs with each other. 172.28.26.107 is navy2's writer VIP, 172.28.26.108 is navy2's reader VIP, 172.28.26.109 is navy3's writer VIP, and 172.28.26.110 is navy3's reader VIP.

III. Deployment

1. For installing MySQL and mysql-mmm and setting up MySQL replication, see the earlier post: http://navyaijm.blog.51cto.com/4647068/1230674. Only the multi-instance MMM configuration is covered here.

2. On db1:

vi /etc/mysql-mmm/mmm_common_navy2.conf (navy2's common configuration file)

active_master_role      writer

<host default>
    cluster_interface       eth1
    agent_port              9912
    mysql_port              3307
    pid_path                /var/run/mysql-mmm/mmm_agentd_navy2.pid
    bin_path                /usr/libexec/mysql-mmm/
    replication_user        slave
    replication_password    123456
    agent_user              mmm_agent
    agent_password          123456
</host>

<host db1>
    ip          172.28.26.101
    mysql_port  3307
    mode        master
    peer        db2
</host>

<host db2>
    ip          172.28.26.102
    mysql_port  3307
    mode        master
    peer        db1
</host>

<role writer>
    hosts   db1, db2
    ips     172.28.26.107
    mode    exclusive
</role>

<role reader>
    hosts   db1, db2
    ips     172.28.26.108
    mode    balanced
</role>
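What makes the multi-instance setup work is that nothing in the two cluster definitions overlaps: navy2 and navy3 use different agent ports (9912/9913), MySQL ports (3307/3308), pid files, and VIPs. Below is a self-contained sketch of a collision check over the two common files; the temp-file copies stand in for the real files, so point the glob at /etc/mysql-mmm/mmm_common_navy*.conf to check an actual deployment.

```shell
# Self-contained sketch: the temp files mirror the per-cluster settings above.
dir=$(mktemp -d)
printf 'agent_port              9912\nmysql_port              3307\n' > "$dir/mmm_common_navy2.conf"
printf 'agent_port              9913\nmysql_port              3308\n' > "$dir/mmm_common_navy3.conf"
for key in agent_port mysql_port; do
    # A value printed by uniq -d appears in more than one cluster file.
    dups=$(awk -v k="$key" '$1 == k {print $2}' "$dir"/mmm_common_navy*.conf | sort | uniq -d)
    if [ -z "$dups" ]; then
        echo "$key: no collision"
    else
        echo "$key: COLLISION on $dups"
    fi
done
rm -rf "$dir"
```

With the values from this post, both keys report "no collision".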

vi /etc/mysql-mmm/mmm_common_navy3.conf (navy3's common configuration file)

active_master_role      writer

<host default>
    cluster_interface       eth1
    agent_port              9913
    mysql_port              3308
    pid_path                /var/run/mysql-mmm/mmm_agentd_navy3.pid
    bin_path                /usr/libexec/mysql-mmm/
    replication_user        slave
    replication_password    123456
    agent_user              mmm_agent
    agent_password          123456
</host>

<host db1>
    ip          172.28.26.101
    mysql_port  3308
    mode        master
    peer        db2
</host>

<host db2>
    ip          172.28.26.102
    mysql_port  3308
    mode        master
    peer        db1
</host>

<role writer>
    hosts   db1, db2
    ips     172.28.26.109
    mode    exclusive
</role>

<role reader>
    hosts   db1, db2
    ips     172.28.26.110
    mode    balanced
</role>

vi /etc/mysql-mmm/mmm_agent_navy2.conf (navy2's agent configuration file)

include mmm_common_navy2.conf
this db1

vi /etc/mysql-mmm/mmm_agent_navy3.conf (navy3's agent configuration file)

include mmm_common_navy3.conf
this db1
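The @cluster suffix is what ties the pieces together: as the init scripts below show, each agent is started as mmm_agentd @navy2 or @navy3, which (by mmm's naming convention) makes it read the matching mmm_agent_<cluster>.conf, and the "this" directive then tells it which <host> section describes the local node. The mapping, spelled out:

```shell
# Assumed mmm convention: "mmm_agentd @CLUSTER" loads mmm_agent_CLUSTER.conf.
for CLUSTER in navy2 navy3; do
    echo "mmm_agentd @$CLUSTER -> /etc/mysql-mmm/mmm_agent_${CLUSTER}.conf"
done
```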

vi /etc/init.d/mysql-mmm-agent-navy2 (navy2's agent init script)

#!/bin/sh
# chkconfig: - 64 36
# description:  MMM Agent.
# processname: mmm_agentd
# config: /etc/mysql-mmm/mmm_agent_navy2.conf
# pidfile: /var/run/mysql-mmm/mmm_agentd_navy2.pid

# Source function library and defaults file.
. /etc/rc.d/init.d/functions
. /etc/default/mysql-mmm-agent

# Cluster name (it can be empty for default cases)
CLUSTER='navy2'
LOCKFILE='/var/lock/subsys/mysql-mmm-agent_navy2'
prog='MMM Agent Daemon'

#-----------------------------------------------------------------------
# Paths
if [ "$CLUSTER" != "" ]; then
    MMMD_AGENT_BIN="/usr/sbin/mmm_agentd @$CLUSTER"
    MMMD_AGENT_PIDFILE="/var/run/mysql-mmm/mmm_agentd_$CLUSTER.pid"
else
    MMMD_AGENT_BIN="/usr/sbin/mmm_agentd"
    MMMD_AGENT_PIDFILE="/var/run/mysql-mmm/mmm_agentd.pid"
fi

start() {
    if [ "${ENABLED}" != "1" ]; then
        echo "$prog is disabled!"
        exit 1
    fi
    echo -n "Starting $prog: "
    if [ -s $MMMD_AGENT_PIDFILE ] && kill -0 `cat $MMMD_AGENT_PIDFILE` 2> /dev/null; then
        echo " already running."
        exit 0
    fi
    daemon $MMMD_AGENT_BIN
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && touch $LOCKFILE
    return $RETVAL
}

stop() {
    # Stop daemon.
    echo -n "Stopping $prog: "
    killproc -p $MMMD_AGENT_PIDFILE $MMMD_AGENT_BIN
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && rm -f $LOCKFILE
    return $RETVAL
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status -p $MMMD_AGENT_PIDFILE $MMMD_AGENT_BIN
        RETVAL=$?
        ;;
    restart|reload)
        stop
        start
        ;;
    condrestart)
        if [ -f $LOCKFILE ]; then
            stop
            start
        fi
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|condrestart|status}"
        ;;
esac
exit $RETVAL

Make the script executable:

chmod +x /etc/init.d/mysql-mmm-agent-navy2

vi /etc/init.d/mysql-mmm-agent-navy3 (navy3's agent init script)

#!/bin/sh
# chkconfig: - 64 36
# description:  MMM Agent.
# processname: mmm_agentd
# config: /etc/mysql-mmm/mmm_agent_navy3.conf
# pidfile: /var/run/mysql-mmm/mmm_agentd_navy3.pid

# Source function library and defaults file.
. /etc/rc.d/init.d/functions
. /etc/default/mysql-mmm-agent

# Cluster name (it can be empty for default cases)
CLUSTER='navy3'
LOCKFILE='/var/lock/subsys/mysql-mmm-agent_navy3'
prog='MMM Agent Daemon'

#-----------------------------------------------------------------------
# Paths
if [ "$CLUSTER" != "" ]; then
    MMMD_AGENT_BIN="/usr/sbin/mmm_agentd @$CLUSTER"
    MMMD_AGENT_PIDFILE="/var/run/mysql-mmm/mmm_agentd_$CLUSTER.pid"
else
    MMMD_AGENT_BIN="/usr/sbin/mmm_agentd"
    MMMD_AGENT_PIDFILE="/var/run/mysql-mmm/mmm_agentd.pid"
fi

start() {
    if [ "${ENABLED}" != "1" ]; then
        echo "$prog is disabled!"
        exit 1
    fi
    echo -n "Starting $prog: "
    if [ -s $MMMD_AGENT_PIDFILE ] && kill -0 `cat $MMMD_AGENT_PIDFILE` 2> /dev/null; then
        echo " already running."
        exit 0
    fi
    daemon $MMMD_AGENT_BIN
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && touch $LOCKFILE
    return $RETVAL
}

stop() {
    # Stop daemon.
    echo -n "Stopping $prog: "
    killproc -p $MMMD_AGENT_PIDFILE $MMMD_AGENT_BIN
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && rm -f $LOCKFILE
    return $RETVAL
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status -p $MMMD_AGENT_PIDFILE $MMMD_AGENT_BIN
        RETVAL=$?
        ;;
    restart|reload)
        stop
        start
        ;;
    condrestart)
        if [ -f $LOCKFILE ]; then
            stop
            start
        fi
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|condrestart|status}"
        ;;
esac
exit $RETVAL

Make the script executable:

chmod +x /etc/init.d/mysql-mmm-agent-navy3

Start the services:

/etc/init.d/mysql-mmm-agent-navy2 start
/etc/init.d/mysql-mmm-agent-navy3 start
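Since each cluster's agent writes its own pid file (the pid_path values set earlier), a quick way to confirm both agents came up is to test those pid files the same way the init scripts do. A sketch, with paths following this post's pid_path settings:

```shell
# Reports "running" only if the pid file is non-empty and the process
# answers kill -0 (i.e. still exists).
for c in navy2 navy3; do
    pidfile="/var/run/mysql-mmm/mmm_agentd_${c}.pid"
    if [ -s "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
        echo "$c agent: running (pid $(cat "$pidfile"))"
    else
        echo "$c agent: not running"
    fi
done
```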

3. On db2:

Copy the files from db1 into the corresponding directories (run these on db1):

scp /etc/mysql-mmm/mmm_common_navy2.conf 172.28.26.102:/etc/mysql-mmm/
scp /etc/mysql-mmm/mmm_common_navy3.conf 172.28.26.102:/etc/mysql-mmm/
scp /etc/mysql-mmm/mmm_agent_navy2.conf 172.28.26.102:/etc/mysql-mmm/
scp /etc/mysql-mmm/mmm_agent_navy3.conf 172.28.26.102:/etc/mysql-mmm/
scp /etc/init.d/mysql-mmm-agent-navy2 172.28.26.102:/etc/init.d/
scp /etc/init.d/mysql-mmm-agent-navy3 172.28.26.102:/etc/init.d/

Modify the agent configuration files:

sed -i 's/this db1/this db2/' /etc/mysql-mmm/mmm_agent_navy2.conf
sed -i 's/this db1/this db2/' /etc/mysql-mmm/mmm_agent_navy3.conf
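The only difference between db1's and db2's agent files is the "this" directive, so the two sed edits above are all that is needed. To rehearse the expression on a throwaway copy first (illustrative temp path):

```shell
tmp=$(mktemp -d)
printf 'include mmm_common_navy2.conf\nthis db1\n' > "$tmp/mmm_agent_navy2.conf"
sed -i 's/this db1/this db2/' "$tmp/mmm_agent_navy2.conf"
grep '^this' "$tmp/mmm_agent_navy2.conf"    # prints: this db2
rm -rf "$tmp"
```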

Make the scripts executable:

chmod +x /etc/init.d/mysql-mmm-agent-navy2
chmod +x /etc/init.d/mysql-mmm-agent-navy3

Start the services:

/etc/init.d/mysql-mmm-agent-navy2 start
/etc/init.d/mysql-mmm-agent-navy3 start

4. On the monitor:

Copy the configuration files from db1 (run these on db1):

scp /etc/mysql-mmm/mmm_common_navy2.conf 172.28.26.103:/etc/mysql-mmm/
scp /etc/mysql-mmm/mmm_common_navy3.conf 172.28.26.103:/etc/mysql-mmm/

vi /etc/mysql-mmm/mmm_mon_navy2.conf

include mmm_common_navy2.conf

<monitor>
    ip                  127.0.0.1
    port                9992
    pid_path            /var/run/mysql-mmm/mmm_mond_navy2.pid
    bin_path            /usr/libexec/mysql-mmm
    status_path         /var/lib/mysql-mmm/mmm_mond_navy2.status
    ping_ips            172.28.26.101,172.28.26.102
    auto_set_online     10
    # wait_for_other_master 2
    # The kill_host_bin does not exist by default, though the monitor will
    # throw a warning about it missing.  See the section 5.10 "Kill Host
    # Functionality" in the PDF documentation.
    #
    # kill_host_bin     /usr/libexec/mysql-mmm/monitor/kill_host
    #
</monitor>

<host default>
    monitor_user        mmm_monitor
    monitor_password    123456
</host>

debug 0

vi /etc/mysql-mmm/mmm_mon_navy3.conf

include mmm_common_navy3.conf

<monitor>
    ip                  127.0.0.1
    port                9993
    pid_path            /var/run/mysql-mmm/mmm_mond_navy3.pid
    bin_path            /usr/libexec/mysql-mmm
    status_path         /var/lib/mysql-mmm/mmm_mond_navy3.status
    ping_ips            172.28.26.101,172.28.26.102
    auto_set_online     10
    # wait_for_other_master 2
    # The kill_host_bin does not exist by default, though the monitor will
    # throw a warning about it missing.  See the section 5.10 "Kill Host
    # Functionality" in the PDF documentation.
    #
    # kill_host_bin     /usr/libexec/mysql-mmm/monitor/kill_host
    #
</monitor>

<host default>
    monitor_user        mmm_monitor
    monitor_password    123456
</host>

debug 0

vi /etc/mysql-mmm/mmm_mon_log_navy2.conf

#log4perl.logger = FATAL, MMMLog, MailFatal
log4perl.logger = FATAL, MMMLog
log4perl.appender.MMMLog = Log::Log4perl::Appender::File
log4perl.appender.MMMLog.Threshold = INFO
log4perl.appender.MMMLog.filename = /var/log/mysql-mmm/mmm_mond_navy2.log
log4perl.appender.MMMLog.recreate = 1
log4perl.appender.MMMLog.layout = PatternLayout
log4perl.appender.MMMLog.layout.ConversionPattern = %d %5p %m%n
#log4perl.appender.MailFatal = Log::Dispatch::Email::MailSender
#log4perl.appender.MailFatal.Threshold = FATAL
#log4perl.appender.MailFatal.from = [email protected]
#log4perl.appender.MailFatal.to = root
#log4perl.appender.MailFatal.buffered = 0
#log4perl.appender.MailFatal.subject = FATAL error in mysql-mmm-monitor
#log4perl.appender.MailFatal.layout = PatternLayout
#log4perl.appender.MailFatal.layout.ConversionPattern = %d %m%n

vi /etc/mysql-mmm/mmm_mon_log_navy3.conf

#log4perl.logger = FATAL, MMMLog, MailFatal
log4perl.logger = FATAL, MMMLog
log4perl.appender.MMMLog = Log::Log4perl::Appender::File
log4perl.appender.MMMLog.Threshold = INFO
log4perl.appender.MMMLog.filename = /var/log/mysql-mmm/mmm_mond_navy3.log
log4perl.appender.MMMLog.recreate = 1
log4perl.appender.MMMLog.layout = PatternLayout
log4perl.appender.MMMLog.layout.ConversionPattern = %d %5p %m%n
#log4perl.appender.MailFatal = Log::Dispatch::Email::MailSender
#log4perl.appender.MailFatal.Threshold = FATAL
#log4perl.appender.MailFatal.from = [email protected]
#log4perl.appender.MailFatal.to = root
#log4perl.appender.MailFatal.buffered = 0
#log4perl.appender.MailFatal.subject = FATAL error in mysql-mmm-monitor
#log4perl.appender.MailFatal.layout = PatternLayout
#log4perl.appender.MailFatal.layout.ConversionPattern = %d %m%n

vi /etc/init.d/mysql-mmm-monitor-navy2

#!/bin/sh
#
# mysql-mmm-monitor  This shell script takes care of starting and stopping
#                    the mmm monitoring daemon.
#
# chkconfig: - 64 36
# description:  MMM Monitor.
# processname: mmm_mond
# config: /etc/mysql-mmm/mmm_mon_navy2.conf
# pidfile: /var/run/mysql-mmm/mmm_mond_navy2.pid

# Source function library and defaults file.
. /etc/rc.d/init.d/functions
. /etc/default/mysql-mmm-monitor

# Cluster name (it can be empty for default cases)
CLUSTER='navy2'
LOCKFILE='/var/lock/subsys/mysql-mmm-monitor_navy2'
prog='MMM Monitor Daemon'

if [ "$CLUSTER" != "" ]; then
    MMMD_MON_BIN="/usr/sbin/mmm_mond @$CLUSTER"
    # Underscore, not hyphen, so this matches pid_path in mmm_mon_navy2.conf
    MMMD_MON_PIDFILE="/var/run/mysql-mmm/mmm_mond_$CLUSTER.pid"
else
    MMMD_MON_BIN="/usr/sbin/mmm_mond"
    MMMD_MON_PIDFILE="/var/run/mysql-mmm/mmm_mond.pid"
fi

start() {
    if [ "${ENABLED}" != "1" ]; then
        echo "$prog is disabled!"
        exit 1
    fi
    echo -n "Starting $prog: "
    if [ -s $MMMD_MON_PIDFILE ] && kill -0 `cat $MMMD_MON_PIDFILE` 2> /dev/null; then
        echo " already running."
        exit 0
    fi
    daemon $MMMD_MON_BIN
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && touch $LOCKFILE
    return $RETVAL
}

stop() {
    # Stop daemon.
    echo -n "Stopping $prog: "
    killproc -p $MMMD_MON_PIDFILE $MMMD_MON_BIN
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && rm -f $LOCKFILE
    return $RETVAL
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status -p $MMMD_MON_PIDFILE $MMMD_MON_BIN
        RETVAL=$?
        ;;
    restart|reload)
        stop
        start
        ;;
    condrestart)
        if [ -f $LOCKFILE ]; then
            stop
            start
        fi
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|condrestart|status}"
        ;;
esac
exit $RETVAL

vi /etc/init.d/mysql-mmm-monitor-navy3

#!/bin/sh
#
# mysql-mmm-monitor  This shell script takes care of starting and stopping
#                    the mmm monitoring daemon.
#
# chkconfig: - 64 36
# description:  MMM Monitor.
# processname: mmm_mond
# config: /etc/mysql-mmm/mmm_mon_navy3.conf
# pidfile: /var/run/mysql-mmm/mmm_mond_navy3.pid

# Source function library and defaults file.
. /etc/rc.d/init.d/functions
. /etc/default/mysql-mmm-monitor

# Cluster name (it can be empty for default cases)
CLUSTER='navy3'
LOCKFILE='/var/lock/subsys/mysql-mmm-monitor_navy3'
prog='MMM Monitor Daemon'

if [ "$CLUSTER" != "" ]; then
    MMMD_MON_BIN="/usr/sbin/mmm_mond @$CLUSTER"
    # Underscore, not hyphen, so this matches pid_path in mmm_mon_navy3.conf
    MMMD_MON_PIDFILE="/var/run/mysql-mmm/mmm_mond_$CLUSTER.pid"
else
    MMMD_MON_BIN="/usr/sbin/mmm_mond"
    MMMD_MON_PIDFILE="/var/run/mysql-mmm/mmm_mond.pid"
fi

start() {
    if [ "${ENABLED}" != "1" ]; then
        echo "$prog is disabled!"
        exit 1
    fi
    echo -n "Starting $prog: "
    if [ -s $MMMD_MON_PIDFILE ] && kill -0 `cat $MMMD_MON_PIDFILE` 2> /dev/null; then
        echo " already running."
        exit 0
    fi
    daemon $MMMD_MON_BIN
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && touch $LOCKFILE
    return $RETVAL
}

stop() {
    # Stop daemon.
    echo -n "Stopping $prog: "
    killproc -p $MMMD_MON_PIDFILE $MMMD_MON_BIN
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && rm -f $LOCKFILE
    return $RETVAL
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status -p $MMMD_MON_PIDFILE $MMMD_MON_BIN
        RETVAL=$?
        ;;
    restart|reload)
        stop
        start
        ;;
    condrestart)
        if [ -f $LOCKFILE ]; then
            stop
            start
        fi
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|condrestart|status}"
        ;;
esac
exit $RETVAL

Make the scripts executable:

chmod +x /etc/init.d/mysql-mmm-monitor-navy2
chmod +x /etc/init.d/mysql-mmm-monitor-navy3

Start the monitor services:

/etc/init.d/mysql-mmm-monitor-navy2 start
/etc/init.d/mysql-mmm-monitor-navy3 start

Results

The unqualified "mmm_control show" below queries the default cluster's monitor (the single-instance setup from the earlier post); the @navy2 and @navy3 forms query the two multi-instance clusters:

[root@Monitor ~]# mmm_control show
db1(172.28.26.101) master/ONLINE. Roles: writer(172.28.26.104)
db2(172.28.26.102) master/ONLINE. Roles:
db3(172.28.26.188) slave/ONLINE. Roles: reader(172.28.26.105)
db4(172.28.26.189) slave/ONLINE. Roles: reader(172.28.26.106)
[root@Monitor ~]# mmm_control @navy2 show
db1(172.28.26.101) master/ONLINE. Roles: writer(172.28.26.107)
db2(172.28.26.102) master/ONLINE. Roles: reader(172.28.26.108)
[root@Monitor ~]# mmm_control @navy3 show
db1(172.28.26.101) master/ONLINE. Roles: writer(172.28.26.109)
db2(172.28.26.102) master/ONLINE. Roles: reader(172.28.26.110)
[root@Monitor ~]#
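With both clusters online, a failover can be exercised per cluster using mmm_control's move_role subcommand, addressed to one cluster at a time via the same @cluster syntax. A guarded sketch (run on the monitor host; it is a no-op where mmm is not installed):

```shell
# Move navy2's writer role (and its VIP 172.28.26.107) to db2, then confirm;
# navy3 is untouched. Guarded so the sketch exits cleanly without mmm_control.
if command -v mmm_control >/dev/null 2>&1; then
    mmm_control @navy2 move_role writer db2
    mmm_control @navy2 show
else
    echo "mmm_control not installed; skipping"
fi
```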
