MySQL: high availability for writes, load balancing for reads

DRBD + MySQL + Heartbeat + Pacemaker + LVS + Keepalived
Notes:
 1. This is a MySQL high-availability cluster.
 2. MySQL master-slave replication provides read/write splitting.
 3. Cluster resources are managed by Pacemaker, whose configuration file is cib.xml rather than the legacy haresources file (haresources is far simpler than cib.xml).
 4. Heartbeat provides high availability for the MySQL master; Keepalived provides high availability for the slaves.
###########Architecture overview############
##MySQL master + DRBD primary node
 IP: 192.168.1.104——>drbd1
##MySQL master (standby) + DRBD secondary node
 IP: 192.168.1.105——>drbd2
##MySQL slaves (real servers)
 IP:192.168.1.106——>RS1
  192.168.1.107——>RS2
  192.168.1.108——>RS3
##LVS director + Keepalived master node
 IP:192.168.1.109——>lvs1
##LVS director + Keepalived backup node
 IP:192.168.1.110——>lvs2
##VIP managed by Heartbeat:
 IP:192.168.1.111——>VIP used for database writes
##VIP managed by LVS + Keepalived:
 IP:192.168.1.112——>VIP used for database reads
 
###########Required software#############
1. drbd-8.4.3.tar.gz
2. mysql-5.5.28-linux2.6-x86_64.tar.gz (binary distribution)
3. Reusable-Cluster-Components-glue--glue-1.0.9.tar.bz2
4. ClusterLabs-resource-agents-v3.9.2-0-ge261943.tar.gz
5. pacemaker_1.1.7.orig.tar.gz
6. keepalived-1.2.7-3.el6.x86_64.rpm
Note: the yum repositories are configured as follows:
[local]
baseurl=file:///mnt
gpgcheck=0
[ha]
baseurl=file:///mnt/HighAvailability
gpgcheck=0

[LB]
baseurl=file:///mnt/LoadBalancer
gpgcheck=0

[server]
baseurl=file:///mnt/Server
gpgcheck=0
############Installing and configuring DRBD#############
##Installing DRBD
1. tar xf drbd-8.4.3.tar.gz -C /usr/local/src
2. cd /usr/local/src/drbd-8.4.3
3. ./configure \
 >--prefix=/usr/local/drbd \
 >--with-km \
 >--with-distro=redhat
 Error 1:
 configure: error: Cannot build utils without flex, either install flex or pass the --without-utils option.
 Solution:
 yum -y install flex
4. make && make install 
5. cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/init.d/drbd
6. chkconfig --add drbd
7. ln -sv /usr/local/drbd/etc/drbd.conf /etc/
8. ln -sv /usr/local/drbd/etc/drbd.d /etc/
9. modprobe drbd
Note: perform all of the above on both the primary and secondary DRBD servers.
##Configuring DRBD
1. Use fdisk on /dev/sdb to create a 10 GB partition, /dev/sdb1.
2. Configure /etc/drbd.d/global_common.conf
global {
        usage-count yes;
}

common {
        handlers {
                 pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                 pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                 local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                 fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                 split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                 out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
        }

        startup {
                wfc-timeout 120;
                degr-wfc-timeout 120;
        }

        disk {
                resync-rate     40M;
                on-io-error     detach;
                fencing         resource-only;
        }

        net {
                protocol        C;
                cram-hmac-alg   sha1;
                shared-secret   "mysql-ha";
                csums-alg       sha1;
                verify-alg      crc32c;
        }
}
3. Configure /etc/drbd.d/r0.res

resource r0 {
        device  /dev/drbd0;
        disk    /dev/sdb1;
        meta-disk       internal;

        on drbd1 {
                address 192.168.1.104:7789;
        }
        on drbd2 {
                address 192.168.1.105:7789;
        }
}
4. drbdadm create-md r0
5. service drbd start
Note: all of the above must be done on both nodes.
6. drbdadm primary r0
Error 1:
 0: State change failed: (-2) Need access to UpToDate data
Command 'drbdsetup primary 0' terminated with exit code 17
 Solution:
 drbdadm -- --overwrite-data-of-peer primary all
7. mkfs -t ext4 /dev/drbd0
8. mkdir /data
9. mount /dev/drbd0 /data
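Before creating the filesystem in step 7 it is worth confirming that the node really is Primary and in sync. A minimal sketch that extracts the role and disk state from a /proc/drbd status line — the sample line below is illustrative only; on a real node replace it with the matching line of `cat /proc/drbd`:

```shell
#!/bin/sh
# Check that this node is Primary and both disks are UpToDate.
# Sample line for illustration; take the real one from /proc/drbd.
status_line=' 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----'

# ro:<local>/<peer> and ds:<local>/<peer> carry role and disk state.
roles=$(printf '%s\n' "$status_line" | sed -n 's/.*ro:\([^ ]*\).*/\1/p')
disks=$(printf '%s\n' "$status_line" | sed -n 's/.*ds:\([^ ]*\).*/\1/p')

case "$roles" in Primary/*) : ;; *) echo "not primary: $roles"; exit 1 ;; esac
case "$disks" in
  UpToDate/UpToDate) echo "OK: $roles $disks" ;;
  *) echo "not in sync: $disks"; exit 1 ;;
esac
```

The same fields can be watched during the initial sync triggered by --overwrite-data-of-peer; mkfs should wait until ds: reads UpToDate/UpToDate.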
#################Installing and configuring MySQL#################
##Installing MySQL
1. tar xf mysql-5.5.28-linux2.6-x86_64.tar.gz -C /usr/local
2. ln -sv mysql-5.5.28-linux2.6-x86_64 mysql
3. cd mysql
4. groupadd mysql
5. useradd mysql -g mysql -s /sbin/nologin -M -r
6. chown -R mysql.mysql . 
7. chown -R mysql.mysql /data
8. scripts/mysql_install_db --user=mysql --datadir=/data 
9. chown -R root .
10. cp support-files/mysql.server /etc/init.d/mysqld
11. cp support-files/my-large.cnf /data/my.cnf
12. ln -sv /data/my.cnf /etc/
13. ./bin/mysqld_safe --user=mysql &
14. Edit /etc/my.cnf:
 [mysqld]
 datadir=/data
15. echo "PATH=$PATH:/usr/local/mysql/bin" > /etc/profile.d/mysql.sh
16. . /etc/profile
17. Edit /etc/init.d/mysqld:
 datadir=/data  
Note: repeat the same steps on the three MySQL slaves. On the standby master, perform every step except 8, 11, and 13.
##Configuring MySQL
Master:
 1. Edit my.cnf:
  [mysqld]
  server-id=11
  log-bin=mysql-bin # already enabled in the my-large.cnf template
  sync-binlog=1
  innodb-file-per-table=1
 2. Create the replication account for the slaves (192.168.1.106, 107, and 108):
  grant replication client, replication slave on *.* to 'repl'@'192.168.1.%' identified by '123';
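A single GRANT cannot cover three hosts written as "192.168.1.106/107/108"; it takes either a wildcard host such as '192.168.1.%' or one statement per slave. A sketch that prints a per-host GRANT for each of the three slaves (pipe the output into the mysql client on the master):

```shell
#!/bin/sh
# Emit one GRANT per slave IP (addresses from the architecture section).
# Usage idea: sh make_grants.sh | mysql -uroot -p
grants=$(
  for ip in 192.168.1.106 192.168.1.107 192.168.1.108; do
    printf "GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl'@'%s' IDENTIFIED BY '123';\n" "$ip"
  done
)
printf '%s\n' "$grants"
```

Per-host grants are slightly tighter than the wildcard, at the cost of re-running the loop whenever a slave is added.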
Slaves:
 1. Edit my.cnf:
  [mysqld]
  server-id=12 # 13 and 14 on the other two slaves
  read-only=1
  relay-log=relay-bin
  innodb-file-per-table=1
 2. Point the slave at the master (add master_log_file and master_log_pos from the master's SHOW MASTER STATUS if the master already has data):
  mysql> change master to 
  -> master_host='192.168.1.111',
  -> master_user='repl',
  -> master_password='123',
  -> master_port=3306;
 3. start slave;
 4. show slave status\G # verify the configuration
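Step 4's output is what decides replica health: both replication threads must be running and the lag must be small. A sketch of that check against a sample of the relevant fields — the sample text is illustrative (on a real slave feed in `mysql -e 'show slave status\G'`), and the 120-second lag threshold is an assumption:

```shell
#!/bin/sh
# Judge replica health from SHOW SLAVE STATUS\G fields.
# Sample output for illustration; replace with real client output.
sample='Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 3'

io=$(printf '%s\n' "$sample"  | awk -F': ' '/Slave_IO_Running/      {print $2}')
sql=$(printf '%s\n' "$sample" | awk -F': ' '/Slave_SQL_Running/     {print $2}')
lag=$(printf '%s\n' "$sample" | awk -F': ' '/Seconds_Behind_Master/ {print $2}')

# Healthy only if both threads run and lag stays under 120 s (assumed limit).
if [ "$io" = "Yes" ] && [ "$sql" = "Yes" ] && [ "$lag" -lt 120 ]; then
  health=ok
else
  health=bad
fi
echo "health=$health"
```

This is the same criterion the Keepalived health-check script later in this article applies before keeping a slave in the read pool.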

##################Installing Heartbeat-HA################
>>>>>>>>>>>>Define environment variables<<<<<<<<<<<<<<<<
export PREFIX=/usr/local/heartbeat
export LCRSODIR=$PREFIX/libexec/lcrso
export CLUSTER_USER=hacluster
export CLUSTER_GROUP=haclient
export CFLAGS="$CFLAGS -I$PREFIX/include -L$PREFIX/lib64 -L$PREFIX/lib"
Run the exports above directly, or add them to /root/.bash_profile.

>>>>>>>>>>Installing Reusable-Cluster-Components-glue--glue-1.0.9.tar.bz2<<<<<<
1. groupadd -r haclient
2. useradd hacluster -g haclient -r -M -s /sbin/nologin
3. tar xf Reusable-Cluster-Components-glue--glue-1.0.9.tar.bz2
4. ./autogen.sh  # Note: this script complains if autoconf, automake, and libtool are missing, so install them first: yum -y install autoconf automake libtool
It also prints: libtoolize: `COPYING.LIB' not found in `/usr/share/libtool/libltdl', which calls for libtool-ltdl-devel:
 yum -y install libtool-ltdl-devel
 Without libtool-ltdl-devel the script still completes, but make later fails with:
 gmake[1]: Entering directory `/usr/local/src/Reusable-Cluster-Components-glue--glue-1.0.9/libltdl'
gmake[1]: *** No rule to make target `all'.  Stop.
gmake[1]: Leaving directory `/usr/local/src/Reusable-Cluster-Components-glue--glue-1.0.9/libltdl'
make: *** [all-recursive] Error 1
Summary:
 Before running autogen.sh, install these five packages:
 yum -y install autoconf automake libtool libtool-ltdl-devel gettext
5. ./configure --prefix=$PREFIX --enable-fatal-warnings=no --with-daemon-user=$CLUSTER_USER --with-daemon-group=$CLUSTER_GROUP  --with-ocf-root=$PREFIX
 # Note: configure stops here if glib2-devel and libxml2-devel are missing:
 yum -y install glib2-devel libxml2-devel
  It may also report: configure: error: BZ2 libraries not found, which calls for bzip2-devel:
 yum -y install bzip2-devel
6. make 
# Error 1 here:

./.libs/libplumb.so: undefined reference to `uuid_parse'
./.libs/libplumb.so: undefined reference to `uuid_generate'
./.libs/libplumb.so: undefined reference to `uuid_copy'
./.libs/libplumb.so: undefined reference to `uuid_is_null'
./.libs/libplumb.so: undefined reference to `uuid_unparse'
./.libs/libplumb.so: undefined reference to `uuid_clear'
./.libs/libplumb.so: undefined reference to `uuid_compare'
collect2: ld returned 1 exit status
gmake[2]: *** [ipctest] Error 1
gmake[2]: Leaving directory `/usr/local/src/Reusable-Cluster-Components-glue--glue-1.0.9/lib/clplumbing'
gmake[1]: *** [all-recursive] Error 1
gmake[1]: Leaving directory `/usr/local/src/Reusable-Cluster-Components-glue--glue-1.0.9/lib'
make: *** [all-recursive] Error 1
 Solution:
 Install libuuid-devel, then rerun step 5 (./configure) before making again:
 yum -y install libuuid-devel
Error 2:
gmake[2]: *** [hb_report.8] Error 4
gmake[2]: Leaving directory `/usr/local/src/Reusable-Cluster-Components-glue--glue-1.0.9/doc'
gmake[1]: *** [all-recursive] Error 1
gmake[1]: Leaving directory `/usr/local/src/Reusable-Cluster-Components-glue--glue-1.0.9/doc'
make: *** [all-recursive] Error 1
 Solution: yum -y install docbook-style-xsl
7. make install
8. echo /usr/local/heartbeat/lib >> /etc/ld.so.conf.d/heartbeat.conf
9. echo /usr/local/heartbeat/lib64 >> /etc/ld.so.conf.d/heartbeat.conf
10. ldconfig
Summary:
 Before building cluster-glue, install all of the following packages up front:
 yum -y install autoconf automake libtool libtool-ltdl-devel gettext glib2-devel libxml2-devel bzip2-devel libuuid-devel docbook-style-xsl
 
>>>>>>>>>>>>>>Installing heartbeat-3-0-7e3a82377fa8.tar.bz2<<<<<<<<<<<<<<<
1. tar xf Heartbeat-3-0-7e3a82377fa8.tar.bz2
2. ./bootstrap
3. ./configure --prefix=$PREFIX --enable-fatal-warnings=no  
# Error 1:
configure: error: Core development headers were not found
This happens because the header files cannot be found, so point the compiler at them explicitly:
CFLAGS=-I/usr/local/heartbeat/include
Error 2:
gmake[2]: *** [api_test] Error 1
gmake[2]: Leaving directory `/usr/local/src/Heartbeat-3-0-7e3a82377fa8/lib/hbclient'
gmake[1]: *** [all-recursive] Error 1
gmake[1]: Leaving directory `/usr/local/src/Heartbeat-3-0-7e3a82377fa8/lib'
make: *** [all-recursive] Error 1
This happens because the libraries cannot be found, so point the linker at them explicitly:
LDFLAGS=-L/usr/local/heartbeat/lib

Error 3:
 In file included from ../include/lha_internal.h:41,
                 from strlcpy.c:1:
/usr/local/heartbeat/include/heartbeat/glue_config.h:105:1: error: "HA_HBCONF_DIR" redefined
In file included from ../include/lha_internal.h:38,
                 from strlcpy.c:1:
../include/config.h:390:1: error: this is the location of the previous definition
gmake[1]: *** [strlcpy.lo] Error 1
gmake[1]: Leaving directory `/usr/local/src/Heartbeat-3-0-7e3a82377fa8/replace'
make: *** [all-recursive] Error 1
 Solution:
 Delete or comment out line 105 of /usr/local/heartbeat/include/heartbeat/glue_config.h.
 
4. make && make install 
Summary: when building from source into a custom prefix, it is best to point the compiler and linker at the headers and libraries explicitly, as in this example:
CFLAGS=-I/usr/local/heartbeat/include 
LDFLAGS=-L/usr/local/heartbeat/lib
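Both variables can be exported once, before configuring cluster-glue, Heartbeat, and the resource agents, rather than being set per build. A sketch assuming the $PREFIX=/usr/local/heartbeat layout used in this article:

```shell
#!/bin/sh
# Export compiler and linker search paths once for all three source builds.
# Paths assume the /usr/local/heartbeat prefix used throughout this article.
export CFLAGS="$CFLAGS -I/usr/local/heartbeat/include"
export LDFLAGS="$LDFLAGS -L/usr/local/heartbeat/lib -L/usr/local/heartbeat/lib64"
printf 'CFLAGS=%s\nLDFLAGS=%s\n' "$CFLAGS" "$LDFLAGS"
```

configure picks both variables up from the environment, so each subsequent ./configure run sees the custom header and library locations without extra flags.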

>>>>>>>>>>>>Installing ClusterLabs-resource-agents-v3.9.2-0-ge261943.tar.gz<<<<
1. tar xf ClusterLabs-resource-agents-v3.9.2-0-ge261943.tar.gz
2. Edit configure.ac, replacing:

OCF_RA_DIR_PREFIX="${prefix}/$OCF_RA_DIR" with OCF_RA_DIR_PREFIX="$OCF_RA_DIR"
and OCF_LIB_DIR_PREFIX="${prefix}/$OCF_LIB_DIR" with
OCF_LIB_DIR_PREFIX="$OCF_LIB_DIR"

3. ./autogen.sh
4. ./configure  \
--prefix=$PREFIX \
--enable-fatal-warnings=no
5. make && make install
Error 1:
/heartbeat/IPv6addr: error while loading shared libraries: libplumb.so.2: cannot open shared object file: No such file or directory
gmake[2]: *** [metadata-IPv6addr.xml] Error 127
 Solution:
 1. echo /usr/local/heartbeat/lib >> /etc/ld.so.conf.d/heartbeat.conf
 2. ldconfig
 3. Rebuild.

>>>>>>>>>>>>>>>Installing Pacemaker<<<<<<<<<<<<<<<<<<
1. tar xf pacemaker_1.1.7.orig.tar.gz
2. ./autogen.sh
3. ./configure --prefix=$PREFIX --enable-fatal-warnings=no 
Note: perform all of the above on both DRBD nodes.

Error 1:
configure: error: The libxslt developement headers were not found
 Solution:
 yum -y install libxslt-devel
Error 2:
checking for cpg... configure: error: Package requirements (libcpg) were not met:  No package 'libcpg' found
 Solution:
 yum -y install corosynclib-devel
4. make && make install
5. echo "PATH=$PATH:/usr/local/heartbeat/sbin:/usr/local/heartbeat/bin" >>/etc/profile.d/heartbeat.sh
6. . /etc/profile.d/heartbeat.sh

Problem 1: after the steps above, every command except crm (crm_node, crm_report, and so on) works normally, but crm itself fails with:
abort: couldn't find crm libraries in [/usr/local/heartbeat/sbin /usr/local/heartbeat/lib64/python2.6 /root /usr/lib64/python26.zip /usr/lib64/python2.6 /usr/lib64/python2.6/plat-linux2 /usr/lib64/python2.6/lib-tk /usr/lib64/python2.6/lib-old /usr/lib64/python2.6/lib-dynload /usr/lib64/python2.6/site-packages /usr/lib64/python2.6/site-packages/PIL /usr/lib64/python2.6/site-packages/gst-0.10 /usr/lib64/python2.6/site-packages/gtk-2.0 /usr/lib64/python2.6/site-packages/webkit-1.0 /usr/lib/python2.6/site-packages /usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg-info]
(check your install and PYTHONPATH)
 Solution:
 1. echo "export PYTHONPATH=/usr/local/heartbeat/lib64/python2.6/site-packages" >>/etc/profile.d/heartbeat.sh
 2. . /etc/profile.d/heartbeat.sh
########################Configuring Heartbeat######################
1. cd /usr/local/heartbeat/share/doc/heartbeat
2. cp ha.cf haresources authkeys /usr/local/heartbeat/etc/ha.d
3. cd /usr/local/heartbeat/etc/ha.d
4. chmod 600 authkeys
5. vim /etc/hosts
 192.168.1.104 drbd1
 192.168.1.105 drbd2
6. vim ha.cf
autojoin none
bcast eth0
warntime 15
deadtime 60
initdead 120
keepalive 2
compression bz2
compression_threshold 2
debug 0
node drbd1
node drbd2
pacemaker respawn
7. vim authkeys
 auth 1
 1 crc
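The authkeys example uses crc, which only detects corruption and offers no authentication, so it is suitable only on a trusted link. A sha1 key is stronger; a minimal sketch for generating one in the same authkeys layout (writing it to the file and chmod 600 is left as the usage note shows):

```shell
#!/bin/sh
# Build an authkeys body using sha1 with a random 40-hex-digit key.
# crc (as in the example above) adds integrity checking only; sha1 also
# authenticates the peer via a shared secret.
key=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | awk '{print $1}')
authkeys="auth 1
1 sha1 $key"
printf '%s\n' "$authkeys"
```

Write the result to /usr/local/heartbeat/etc/ha.d/authkeys on both nodes (the same key on each) and keep the file at mode 600, as in step 4.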
8.  service heartbeat start
Error 1:
/usr/local/heartbeat/etc/ha.d/shellfuncs: line 96: /usr/lib/ocf/lib//heartbeat/ocf-shellfuncs: No such file or directory
Solution:
 Edit /usr/local/heartbeat/etc/ha.d/shellfuncs so that it sources:
 . /usr/local/heartbeat/usr/lib/ocf/lib//heartbeat/ocf-shellfuncs 
 
Error 2:
Starting High-Availability services:  Heartbeat failure [rc=6]. Failed.

heartbeat[5175]: 2013/06/14_00:24:39 ERROR: Illegal directive [bcast] in /usr/local/heartbeat/etc/ha.d//ha.cf
heartbeat[5175]: 2013/06/14_00:24:39 ERROR: Compression module(bz2) not found
heartbeat[5175]: 2013/06/14_00:24:39 info: Pacemaker support: respawn
heartbeat[5175]: 2013/06/14_00:24:39 WARN: File /usr/local/heartbeat/etc/ha.d//haresources exists.
heartbeat[5175]: 2013/06/14_00:24:39 WARN: This file is not used because pacemaker is enabled
heartbeat[5175]: 2013/06/14_00:24:39 ERROR: Client child command [/usr/local/heartbeat/lib/heartbeat/ccm] is not executable
heartbeat[5175]: 2013/06/14_00:24:39 ERROR: Directive respawn  hacluster /usr/local/heartbeat/lib/heartbeat/ccm failed
heartbeat[5175]: 2013/06/14_00:24:39 ERROR: Heartbeat not started: configuration error.
heartbeat[5175]: 2013/06/14_00:24:39 ERROR: Configuration error, heartbeat not started.

Solution:
 
 1. ln -svf /usr/local/heartbeat/lib64/heartbeat/ccm /usr/local/heartbeat/lib/heartbeat/
 2. ln -svf /usr/local/heartbeat/lib64/heartbeat/plugins/RAExec/* /usr/local/heartbeat/lib/heartbeat/plugins/RAExec/
 3. ln -svf /usr/local/heartbeat/lib64/heartbeat/plugins/* /usr/local/heartbeat/lib/heartbeat/plugins/

9. chkconfig heartbeat on ; chkconfig logd on
######################Configuring Pacemaker#############################
1. The configuration, as displayed by crm configure show:

node $id="97ae394b-5f7c-472c-85a7-8e22de0c656b" drbd2 \
  attributes standby="off"
 node $id="e0c675cd-57aa-4975-b36c-8564c13c714a" drbd1 \
  attributes standby="off"
 primitive drbd_r0 ocf:heartbeat:drbd \
  params drbd_resource="r0" \
  op monitor interval="30s" role="Master" \
  op start interval="0" timeout="240s" \
  op stop interval="0" timeout="100s"
 primitive fs ocf:heartbeat:Filesystem \
  params device="/dev/drbd0" directory="/data" fstype="ext4" \
  op start interval="0" timeout="60s" \
  op stop interval="0" timeout="60s" \
  meta target-role="Started"
 primitive myip ocf:heartbeat:IPaddr \
  params ip="192.168.1.111"
 primitive mysql lsb:mysqld
 group mysqlservice fs myip mysql
 ms ms_drbd_mysql drbd_r0 \
  meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
 colocation fs_with_drbd_r0 inf: mysqlservice ms_drbd_mysql:Master
 colocation mysql_on_drbd_master inf: mysql ms_drbd_mysql:Master
 order fs_after_drbd inf: ms_drbd_mysql:promote fs:start
 order mysql_after_fs inf: fs:start mysql:start
 property $id="cib-bootstrap-options" \
  dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \
  cluster-infrastructure="Heartbeat" \
  no-quorum-policy="ignore" \
  stonith-enabled="false" \
  last-lrm-refresh="1371372103" \
  expected-quorum-votes="2"

###################Installing and configuring Keepalived + LVS###########################
>>>>>>>>>>>>>>>>>>>>>Installation<<<<<<<<<<<<<<<<<<<<<<
1. yum -y localinstall keepalived-1.2.7-3.el6.x86_64.rpm && yum -y install ipvsadm

Note: install on both lvs1 and lvs2.
>>>>>>>>>>>>>>>>>>>>>Configuration<<<<<<<<<<<<<<<<<<<<<<<<<<<
1. cd /etc/keepalived
2. vim keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_master # any value works, but master and backup should differ
}

vrrp_instance VI_1 {
    state MASTER   # BACKUP on lvs2
    interface eth0
    virtual_router_id 51
    priority 110 # use a value lower than 110 on lvs2
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }   
    virtual_ipaddress {
        192.168.1.112
    }   
}

virtual_server 192.168.1.112 3306 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

real_server 192.168.1.106 3306 {
        MISC_CHECK {
                misc_path "/etc/keepalived/check_slave.sh 192.168.1.106"
                misc_dynamic
        }
    }

real_server 192.168.1.107 3306 {
        MISC_CHECK {
                misc_path "/etc/keepalived/check_slave.sh 192.168.1.107"
                misc_dynamic
        }
    }

real_server 192.168.1.108 3306 {
        MISC_CHECK {
                misc_path "/etc/keepalived/check_slave.sh 192.168.1.108"
                misc_dynamic
        }
    }
}
3. Write the check_slave.sh health-check script (Perl, despite the .sh name):
#!/usr/bin/perl -w
# connect to mysql with perl

use DBI;
use DBD::mysql;

$host=$ARGV[0];
$user="root";
$pw="123";
$port=3306;
$db="test";
$SBM=120;

$dbh = DBI->connect("DBI:mysql:$db:$host:$port", $user, $pw, {RaiseError => 0, PrintError => 0});

if (!defined($dbh)) {
        exit 1;
}

$slaveStatus = $dbh->prepare("show slave status");
$slaveStatus->execute;

$io = "";
$sql = "";
$sbm = "";

while (my $ref = $slaveStatus->fetchrow_hashref()){
        $io = $ref->{'Slave_IO_Running'};
        $sql = $ref->{'Slave_SQL_Running'};
        $sbm = $ref->{'Seconds_Behind_Master'};
}

$slaveStatus->finish;
$dbh->disconnect();

if ( $io eq "No" || $sql eq "No") {
        exit 1;
}
else{
        if ( $sbm > $SBM ) {
                exit 1;
        }
        else {
                exit 0;
        }
}
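Keepalived decides what to do with each real server from the exit status of the MISC_CHECK script. With misc_dynamic set, as in the configuration above, the documented mapping is: 0 keeps the server with its configured weight, 1 marks it failed and removes it from the pool, and 2-255 keep it in service with the weight set to the status minus 2. A small sketch of that mapping (the function name is ours, for illustration):

```shell
#!/bin/sh
# How keepalived interprets a MISC_CHECK exit status under misc_dynamic:
#   0       -> server healthy, configured weight kept
#   1       -> server failed, removed from the pool
#   2..255  -> server healthy, weight set to (status - 2)
weight_for_status() {
  s=$1
  if [ "$s" -eq 0 ]; then echo "healthy"
  elif [ "$s" -eq 1 ]; then echo "failed"
  else echo "weight=$((s - 2))"
  fi
}
r0=$(weight_for_status 0)
r1=$(weight_for_status 1)
r12=$(weight_for_status 12)
printf '%s %s %s\n' "$r0" "$r1" "$r12"
```

The Perl script above only ever exits 0 or 1, so here misc_dynamic simply keeps or drops a slave; a more elaborate check could exit with a graded status to lower a lagging slave's weight instead of removing it outright.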

4. Create the monitoring account on RS1, RS2, and RS3. The check script runs on the LVS directors, so the connections come from their addresses; DIRECTOR_IP below is 192.168.1.109 and 192.168.1.110:
 grant replication client on *.* to 'root'@'DIRECTOR_IP' identified by '123';
5. Write the LVS real-server control script on RS1, RS2, and RS3:

vim /etc/init.d/lvsrs
 #!/bin/bash
 #
 #chkconfig: 35 70 50
 #
 vip=192.168.1.112
 lo=lo:0
 retval=0
 start() {
  ifconfig $lo $vip netmask 255.255.255.255 up
  route add -host $vip dev $lo
  echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
  echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
  echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
  echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
  }
  
 stop() {

ifconfig $lo down
  route del -host $vip
  echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
  echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
  echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
  echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
 }

case $1 in
 start)
   start
   retval=$?
   [ $retval = 0 ] && echo "Starting lvs OK"
   ;;
 stop)
   stop
   retval=$?
   [ $retval = 0 ] && echo "Stopping lvs OK"
   ;;

*)
   echo "Usage: $0 {start|stop}"
   exit 1
   ;;
 esac
 exit 0
6. chmod +x /etc/keepalived/check_slave.sh
7. chmod +x /etc/init.d/lvsrs
8. chkconfig --add lvsrs
9. /etc/init.d/lvsrs start
