Percona MySQL 5.5.38 Dual-Master Replication & MMM Configuration

I. Overview

This setup removes the single point of failure of a lone MySQL master node and makes switching between database servers easier.

II. How it works

Dual-master replication extends MySQL's master-slave mode: the two MySQL nodes are each other's master and slave, and both can be read from and written to.

III. Test environment

192.168.0.54   (db54)    CentOS 6.5 x64    Percona MySQL 5.5.38
192.168.0.108  (db108)   CentOS 6.5 x64    Percona MySQL 5.5.38

IV. Configuration procedure

1. Install MySQL (the one-click installation script written earlier can be used).

2. Check that binary logging is enabled on both DB servers:

mysql> show variables like 'log_bin';

+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_bin       | ON    |
+---------------+-------+
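If log_bin shows OFF, enable it in the [mysqld] section of my.cnf on that server and restart before continuing; a minimal sketch (the base name mysql-bin matches what the later steps assume):

[mysqld]
log-bin = mysql-bin    # enable binary logging; required on both masters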

3. Give each server a unique server-id, then restart MySQL.

On db54: server-id = 54

On db108: server-id = 108

Restart:

# /etc/init.d/mysql restart

4. Open TCP port 3306 between the two servers (iptables rules):

db54:

-A INPUT -s 192.168.0.108/32 -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT

db108:

-A INPUT -s 192.168.0.54/32 -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT
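For reference, on CentOS 6 these rules are typically added to /etc/sysconfig/iptables above the final REJECT rule and then reloaded (a sketch; adjust to however your firewall is managed):

# vim /etc/sysconfig/iptables       # add the ACCEPT rule shown above
# service iptables restart          # reload the rules
# iptables -L -n | grep 3306        # verify the port is open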

5. First direction: db54 as master, db108 as slave.

5.1 On the master db54, create the replication user for the slave:

mysql> grant replication client, replication slave on *.* to repl@'192.168.0.108' identified by '123456';

mysql> flush privileges;

5.2 Reset the binary logs:

mysql> reset master;
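After the reset the binary log starts over (mysql-bin.000001, position 107 on MySQL 5.5), which is why the CHANGE MASTER in step 5.3 can omit explicit log coordinates. This can be confirmed on db54 with:

mysql> show master status;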

5.3 On the slave db108, point replication at the master:

mysql> change master to master_host='192.168.0.54', master_user='repl', master_password='123456';

5.4 Start the slave threads and check their status:

mysql> start slave;

mysql> show slave status\G

If the following lines appear, replication is running correctly:

Slave_IO_Running: Yes

Slave_SQL_Running: Yes
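If either thread shows No instead, the Last_IO_Error and Last_SQL_Error fields in the same output usually point to the cause. A common one is that the slave cannot reach the master with the replication account, which can be checked directly from db108 (password on the command line for testing only):

# mysql -h 192.168.0.54 -u repl -p123456 -e 'select 1;'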

5.5 Test replication from db54 to db108.

5.5.1 On the master db54, create the jibuqi database:

mysql> create database jibuqi;

mysql> show databases;

5.5.2 On the slave db108, check the result:

mysql> show databases;

The jibuqi database shows up on db108 as expected.

6. Second direction: db108 as master, db54 as slave.

6.1 Add the following to the [mysqld] section of db54's configuration file:

auto-increment-increment = 2    # total number of servers in the topology
auto-increment-offset = 1       # starting value for auto-increment columns; must differ between the two servers, otherwise primary keys collide
replicate-do-db = jibuqi        # replicate only this database; others are not synchronized

Restart MySQL:

# /etc/init.d/mysql restart

mysql> show master status;

+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000003 |      107 |              |                  |
+------------------+----------+--------------+------------------+

6.2 On db108, create the replication user (db54 is the slave in this direction, so grant access to its address):

mysql> grant replication client, replication slave on *.* to repl@'192.168.0.54' identified by '123456';

mysql> flush privileges;

Add the following to the [mysqld] section of db108's configuration file:

log-bin = mysql-bin
auto-increment-increment = 2
auto-increment-offset = 2       # must be different from db54
replicate-do-db = jibuqi

Restart MySQL:

# /etc/init.d/mysql restart

mysql> show master status;

+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000005 |      107 |              |                  |
+------------------+----------+--------------+------------------+
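Before cross-wiring the two slaves, it is worth a quick sanity check on both nodes that the auto-increment settings took effect after the restarts:

mysql> show variables like 'auto_increment%';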

6.3 Point db54 and db108 at each other as masters.

On db108:

mysql> stop slave;

mysql> change master to master_host='192.168.0.54', master_user='repl', master_password='123456', master_log_file='mysql-bin.000003', master_log_pos=107;

mysql> start slave;

On db54:

mysql> change master to master_host='192.168.0.108', master_user='repl', master_password='123456', master_log_file='mysql-bin.000005', master_log_pos=107;

mysql> start slave;

6.4 Test:

Import the table api_pedometeraccount into the jibuqi database on db54 and check that the corresponding table appears on db108 (it does).

Import the table api_pedometerdevice into the jibuqi database on db108 and check that the corresponding table appears on db54 (it does).

At this point the dual-master replication setup is complete.
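As a quick illustration of why the auto-increment settings matter, inserting into a hypothetical test table (not part of the original setup) on both masters shows db54 handing out odd ids and db108 even ones, so concurrent writes never collide on the primary key:

mysql> use jibuqi;
mysql> create table t_autoinc_test (id int auto_increment primary key, src varchar(10));
-- on db54 (offset 1, increment 2):
mysql> insert into t_autoinc_test (src) values ('db54'), ('db54');     -- gets ids 1 and 3
-- on db108 (offset 2, increment 2):
mysql> insert into t_autoinc_test (src) values ('db108'), ('db108');   -- gets ids 2 and 4
mysql> select * from t_autoinc_test order by id;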

V. mysql-MMM (Master-Master Replication Manager for MySQL)

5.1 Introduction

MMM (Master-Master Replication Manager for MySQL) is a scalable suite of scripts for monitoring, failover, and management of MySQL master-master replication configurations (only one node accepts writes at any given time). The suite can also load-balance reads across any number of slaves in a standard master-slave setup, so it can be used to bring up virtual IPs on a group of replicated servers; in addition it ships scripts for data backup and for resynchronizing nodes. MySQL itself provides no replication failover solution; MMM fills that gap and makes MySQL highly available. Beyond the floating IPs, its most valuable feature is that when the current master dies, the backend slaves are automatically redirected to replicate from the new master, with no manual changes to the replication configuration. It is one of the more mature solutions available. See the official site for details: http://mysql-mmm.org

5.2 Topology

a. Server list


Server    Hostname   IP             Server ID   MySQL version   OS
master1   db1        192.168.1.19   54          mysql 5.5       CentOS 6.5
master2   db2        192.168.1.20   108         mysql 5.5       CentOS 6.5

b. Virtual IP list


VIP             Role    Description
192.168.1.190   write   Write VIP configured in the application
192.168.1.201   read    Read VIP configured in the application
192.168.1.203   read    Read VIP configured in the application

5.3 Installing MMM (installed on db1 here; a dedicated monitoring server could also be used)

a. Install the required Perl modules:

# yum install -y cpan

# cpan -i Algorithm::Diff Class::Singleton DBI DBD::mysql Log::Dispatch Log::Log4perl Mail::Send Net::Ping Proc::Daemon Time::HiRes Params::Validate Net::ARP

The File::Basename and File::stat modules do not seem to install (they require Perl 5.12.2), but MMM works without them. Net::ARP must be installed, otherwise the VIPs will not come up.

b. Download and install mysql-mmm:

# wget http://mysql-mmm.org/_media/:mmm2:mysql-mmm-2.2.1.tar.gz -O mysql-mmm-2.2.1.tar.gz

# tar -zxvf mysql-mmm-2.2.1.tar.gz

# cd mysql-mmm-2.2.1

# make; make install

c. mysql-mmm configuration

Edit the configuration files directly.

· Configuration files on db1:

mmm_agent.conf:

# vim /etc/mysql-mmm/mmm_agent.conf

include mmm_common.conf

this db1

mmm_common.conf:

# vim /etc/mysql-mmm/mmm_common.conf

active_master_role      writer

<host default>

cluster_interface       eth0

pid_path               /var/run/mysql-mmm/mmm_agentd.pid

bin_path               /usr/libexec/mysql-mmm/

replication_user        repl

replication_password    123456

agent_user              mmm_agent

agent_password          mmm_agent

</host>

<host db1>

ip      192.168.1.19

mode    master

peer    db2

</host>

<host db2>

ip      192.168.1.20

mode    master

peer    db1

</host>

<role writer>

hosts   db1, db2

ips     192.168.1.190

mode    exclusive   # exclusive mode: only one host can hold this role at any time

</role>

<role reader>

hosts   db1, db2

ips     192.168.1.201, 192.168.1.203

mode    balanced    # balanced mode: several hosts can hold this role at the same time

</role>

mmm_mon.conf:

# vim /etc/mysql-mmm/mmm_mon.conf

include mmm_common.conf

<monitor>

ip                                             127.0.0.1

pid_path                               /var/run/mmm_mond.pid

bin_path                                /usr/lib/mysql-mmm/

status_path                            /var/lib/misc/mmm_mond.status

auto_set_online                        5 # time in seconds before a host is automatically set online

ping_ips                               192.168.1.19, 192.168.1.20

</monitor>

<host default>

monitor_user                   mmm_monitor

monitor_password                mmm_monitor

</host>

debug 0

· Configuration files on db2:

mmm_agent.conf:

# vim /etc/mysql-mmm/mmm_agent.conf

include mmm_common.conf

this db2

The contents of mmm_common.conf are identical to db1's.
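Since mmm_common.conf must be identical on every host, the simplest way is to copy db1's file over (assuming root SSH access between the nodes):

# scp /etc/mysql-mmm/mmm_common.conf root@192.168.1.20:/etc/mysql-mmm/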

d. Start the MMM daemons.

Start the agent and the monitor on db1:

# /etc/init.d/mysql-mmm-agent start

Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Ok

# /etc/init.d/mysql-mmm-monitor start

Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Base class package "Class::Singleton" is empty.
(Perhaps you need to 'use' the module which defines that package first,
or make that module available in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .).
at /usr/share/perl5/vendor_perl/MMM/Monitor/Agents.pm line 2
BEGIN failed--compilation aborted at /usr/share/perl5/vendor_perl/MMM/Monitor/Agents.pm line 2.
Compilation failed in require at /usr/share/perl5/vendor_perl/MMM/Monitor/Monitor.pm line 15.
BEGIN failed--compilation aborted at /usr/share/perl5/vendor_perl/MMM/Monitor/Monitor.pm line 15.
Compilation failed in require at /usr/sbin/mmm_mond line 28.
BEGIN failed--compilation aborted at /usr/sbin/mmm_mond line 28.
Failed

Starting mysql-mmm-monitor failed. The fix is to install the missing Class::Singleton module from the CPAN shell:

# perl -MCPAN -e shell

Terminal does not support AddHistory.
cpan shell -- CPAN exploration and modules installation (v1.9402)
Enter 'h' for help.
cpan[1]> Class::Singleton
Catching error: "Can't locate object method \"Singleton\" via package \"Class\" (perhaps you forgot to load \"Class\"?) at /usr/share/perl5/CPAN.pm line 375, <FIN> line 1.\cJ" at /usr/share/perl5/CPAN.pm line 391
CPAN::shell() called at -e line 1
cpan[2]> Class
Unknown shell command 'Class'. Type ? for help.
cpan[3]> install Class::Singleton
CPAN: Storable loaded ok (v2.20)
Going to read '/root/.cpan/Metadata'
Database was generated on Thu, 27 Nov 2014 08:53:16 GMT
CPAN: LWP::UserAgent loaded ok (v5.833)
CPAN: Time::HiRes loaded ok (v1.9726)
Warning: no success downloading '/root/.cpan/sources/authors/01mailrc.txt.gz.tmp47425'. Giving up on it. at /usr/share/perl5/CPAN/Index.pm line 225
Fetching with LWP:
http://www.perl.org/CPAN/authors/01mailrc.txt.gz
Going to read '/root/.cpan/sources/authors/01mailrc.txt.gz'
............................................................................DONE
Fetching with LWP:
http://www.perl.org/CPAN/modules/02packages.details.txt.gz
Going to read '/root/.cpan/sources/modules/02packages.details.txt.gz'
Database was generated on Fri, 28 Nov 2014 08:29:02 GMT
..............
New CPAN.pm version (v2.05) available.
[Currently running version is v1.9402]
You might want to try
install CPAN
reload cpan
to both upgrade CPAN.pm and run the new version without leaving
the current session.
..............................................................DONE
Fetching with LWP:
http://www.perl.org/CPAN/modules/03modlist.data.gz
Going to read '/root/.cpan/sources/modules/03modlist.data.gz'
DONE
Going to write /root/.cpan/Metadata
Running install for module 'Class::Singleton'
CPAN: Data::Dumper loaded ok (v2.124)
'YAML' not installed, falling back to Data::Dumper and Storable to read prefs '/root/.cpan/prefs'
Running make for S/SH/SHAY/Class-Singleton-1.5.tar.gz
CPAN: Digest::SHA loaded ok (v5.47)
Checksum for /root/.cpan/sources/authors/id/S/SH/SHAY/Class-Singleton-1.5.tar.gz ok
Scanning cache /root/.cpan/build for sizes
............................................................................DONE
Class-Singleton-1.5/
Class-Singleton-1.5/Changes
Class-Singleton-1.5/lib/
Class-Singleton-1.5/lib/Class/
Class-Singleton-1.5/lib/Class/Singleton.pm
Class-Singleton-1.5/Makefile.PL
Class-Singleton-1.5/MANIFEST
Class-Singleton-1.5/META.yml
Class-Singleton-1.5/README
Class-Singleton-1.5/t/
Class-Singleton-1.5/t/singleton.t
CPAN: File::Temp loaded ok (v0.22)
CPAN.pm: Going to build S/SH/SHAY/Class-Singleton-1.5.tar.gz
Checking if your kit is complete...
Looks good
Generating a Unix-style Makefile
Writing Makefile for Class::Singleton
Writing MYMETA.yml and MYMETA.json
Could not read '/root/.cpan/build/Class-Singleton-1.5-42kiLS/MYMETA.yml'. Falling back to other methods to determine prerequisites
cp lib/Class/Singleton.pm blib/lib/Class/Singleton.pm
Manifying 1 pod document
SHAY/Class-Singleton-1.5.tar.gz
/usr/bin/make -- OK
Warning (usually harmless): 'YAML' not installed, will not store persistent state
Running make test
PERL_DL_NONLAZY=1 "/usr/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/singleton.t.. ok
All tests successful.
Files=1, Tests=29,  0 wallclock secs ( 0.01 usr  0.01 sys +  0.01 cusr  0.00 csys =  0.03 CPU)
Result: PASS
SHAY/Class-Singleton-1.5.tar.gz
/usr/bin/make test -- OK
Warning (usually harmless): 'YAML' not installed, will not store persistent state
Running make install
Prepending /root/.cpan/build/Class-Singleton-1.5-42kiLS/blib/arch /root/.cpan/build/Class-Singleton-1.5-42kiLS/blib/lib to PERL5LIB for 'install'
Manifying 1 pod document
Installing /usr/local/share/perl5/Class/Singleton.pm
Installing /usr/local/share/man/man3/Class::Singleton.3pm
Appending installation info to /usr/lib64/perl5/perllocal.pod
SHAY/Class-Singleton-1.5.tar.gz
/usr/bin/make install  -- OK
Warning (usually harmless): 'YAML' not installed, will not store persistent state
cpan[4]> exit
Terminal does not support GetHistory.
Lockfile removed.

# /etc/init.d/mysql-mmm-monitor start
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Ok

Start the agent on db2:

# /etc/init.d/mysql-mmm-agent start
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Ok

e. Adjust the firewalls and open the MMM ports as needed (details omitted).
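As a sketch of what is involved (assuming mysql-mmm's default agent port 9989): each agent must be reachable from the monitor host, here db1 (192.168.1.19), so a rule like the 3306 rules above is added on db1 and db2 and the firewall reloaded:

-A INPUT -s 192.168.1.19/32 -m state --state NEW -m tcp -p tcp --dport 9989 -j ACCEPT

# service iptables restart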

f. On both db1 and db2, create the agent and monitor users used to check MySQL status:

mysql> grant super, replication client, process on *.* to 'mmm_agent'@'192.168.1.20' identified by 'mmm_agent';
mysql> grant super, replication client, process on *.* to 'mmm_agent'@'192.168.1.19' identified by 'mmm_agent';
mysql> grant super, replication client, process on *.* to 'mmm_agent'@'localhost' identified by 'mmm_agent';
mysql> grant super, replication client, process on *.* to 'mmm_agent'@'127.0.0.1' identified by 'mmm_agent';
mysql> grant super, replication client, process on *.* to 'mmm_monitor'@'192.168.1.20' identified by 'mmm_monitor';
mysql> grant super, replication client, process on *.* to 'mmm_monitor'@'192.168.1.19' identified by 'mmm_monitor';
mysql> grant super, replication client, process on *.* to 'mmm_monitor'@'localhost' identified by 'mmm_monitor';
mysql> grant super, replication client, process on *.* to 'mmm_monitor'@'127.0.0.1' identified by 'mmm_monitor';
mysql> flush privileges;
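To confirm that the monitor account can actually reach both servers from the monitor host (inline password for testing only):

# mysql -h 192.168.1.19 -u mmm_monitor -pmmm_monitor -e 'select 1;'
# mysql -h 192.168.1.20 -u mmm_monitor -pmmm_monitor -e 'select 1;'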

g. Check the status (on the machine running the monitor):

# mmm_control ping

OK: Pinged successfully!

# mmm_control show

db1(192.168.1.19) master/ONLINE. Roles: reader(192.168.1.203), writer(192.168.1.190)

db2(192.168.1.20) master/ONLINE. Roles: reader(192.168.1.201)

# mmm_control checks

db2  ping         [last change: 2014/12/01 13:49:47]  OK
db2  mysql        [last change: 2014/12/01 13:49:47]  OK
db2  rep_threads  [last change: 2014/12/01 13:49:47]  OK
db2  rep_backlog  [last change: 2014/12/01 13:49:47]  OK: Backlog is null
db1  ping         [last change: 2014/12/01 13:49:47]  OK
db1  mysql        [last change: 2014/12/01 13:49:47]  OK
db1  rep_threads  [last change: 2014/12/01 13:52:19]  OK
db1  rep_backlog  [last change: 2014/12/01 13:49:47]  OK: Backlog is null

# mmm_control help

Valid commands are:

help                             - show this message

ping                             - ping monitor

show                             - show status

checks [<host>|all [<check>|all]] - show checks status

set_online <host>                - set host <host> online

set_offline <host>               - set host <host> offline

mode                             - print current mode.

set_active                       - switch into active mode.

set_manual                       - switch into manual mode.

set_passive                      - switch into passive mode.

move_role [--force] <role> <host> - move exclusive role <role> to host <host>
    (Only use --force if you know what you are doing!)
set_ip <ip> <host>                - set role with ip <ip> to host <host>

h. Test failover

Current state on DB1:

# mmm_control show
db1(192.168.1.19) master/ONLINE. Roles: reader(192.168.1.203), writer(192.168.1.190)
db2(192.168.1.20) master/ONLINE. Roles: reader(192.168.1.201)

# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:93:d2:50 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.19/23 brd 192.168.1.255 scope global eth0
    inet 192.168.1.203/32 scope global eth0
    inet 192.168.1.190/32 scope global eth0
    inet6 fe80::20c:29ff:fe93:d250/64 scope link
       valid_lft forever preferred_lft forever

State on DB2:

# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:9f:7c:c6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.20/23 brd 192.168.1.255 scope global eth0
    inet 192.168.1.201/32 scope global eth0
    inet6 fe80::20c:29ff:fe9f:7cc6/64 scope link
       valid_lft forever preferred_lft forever

Stop MySQL on DB1 and check whether MMM moves all the VIPs over to DB2.

On DB1:

# /etc/init.d/mysql stop
Shutting down MySQL (Percona Server)..... SUCCESS!

# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:93:d2:50 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.19/23 brd 192.168.1.255 scope global eth0
    inet6 fe80::20c:29ff:fe93:d250/64 scope link
       valid_lft forever preferred_lft forever

On DB2:

# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:9f:7c:c6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.20/23 brd 192.168.1.255 scope global eth0
    inet 192.168.1.201/32 scope global eth0
    inet 192.168.1.203/32 scope global eth0
    inet 192.168.1.190/32 scope global eth0
    inet6 fe80::20c:29ff:fe9f:7cc6/64 scope link
       valid_lft forever preferred_lft forever

All the VIPs moved over to DB2: MMM failover works as expected.
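To bring db1 back into the cluster after the test (a sketch; with auto_set_online configured the monitor will also set it online by itself after the configured delay):

# /etc/init.d/mysql start
# mmm_control set_online db1     # only needed if db1 stays offline
# mmm_control show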

If MySQL read/write splitting is also needed, it can be implemented with mysql_proxy.
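A minimal sketch of how mysql_proxy could sit in front of the MMM VIPs (the listen port and the path to the bundled rw-splitting script are assumptions and vary by installation):

# mysql-proxy \
    --proxy-address=0.0.0.0:4040 \
    --proxy-backend-addresses=192.168.1.190:3306 \
    --proxy-read-only-backend-addresses=192.168.1.201:3306 \
    --proxy-read-only-backend-addresses=192.168.1.203:3306 \
    --proxy-lua-script=/usr/share/mysql-proxy/rw-splitting.lua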
