Keepalived+LVS: High-Availability Load Balancing in Dual-Master Mode

LVS (Linux Virtual Server) is a clustering technology built on IP load balancing and content-based request dispatching. The director achieves high throughput by distributing requests evenly across the real servers, and it automatically masks server failures, so that a group of servers appears to clients as a single high-performance, highly available virtual server. The cluster's structure is transparent to clients, and neither the client nor the server programs need to be modified. LVS works at layer 4, inside the kernel via the ipvs module, so it adds very little overhead to the traffic it forwards.
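Since LVS lives in the kernel, it is worth confirming up front that the ip_vs module is available on the directors; a quick sketch using the standard module tools:

[root@centos-1 ~]# modprobe ip_vs            # load the module if it is not loaded yet
[root@centos-1 ~]# lsmod | grep ip_vs        # should now list ip_vs (and its scheduler modules)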

There are currently two common dual-machine HA designs:

1) Active/standby (master/slave) mode: two servers sit in front, a master and a hot standby. Normally the master holds a public virtual IP and provides the load-balancing service while the standby sits idle; when the master fails, the standby takes over the public virtual IP and continues the service. The drawback is that the standby is wasted capacity as long as the master stays healthy, so for sites without many servers this design is not economical.

2) Active/active (dual-master) mode: two load balancers in front, each the backup for the other, and both active at the same time (so no server is wasted). Each binds its own public virtual IP and serves traffic; when one fails, the other takes over the failed machine's public virtual IP (and temporarily carries all requests alone). This design is cost-effective and fits the current architecture well.
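Nothing in keepalived itself spreads clients across the two VIPs; that part is usually done in DNS by publishing one service name with two A records. A hypothetical BIND-style zone snippet (the name is an assumption, not part of this setup):

; both directors' VIPs behind one name; resolvers rotate between them
www    IN    A    192.168.5.200
www    IN    A    192.168.5.210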

I. Environment:

Operating system:

[root@centos-4 ~]# cat /etc/redhat-release
CentOS release 6.9 (Final)

Server roles:

KA1: 192.168.5.129  centos-1
KA2: 192.168.5.128  centos-4
VIP1: 192.168.5.200  (master on .129, backup on .128)
VIP2: 192.168.5.210  (master on .128, backup on .129)
Web1: 192.168.5.131  centos-2
Web2: 192.168.5.132  centos-3
Client: 192.168.5.140  centos-5

II. Installation:

Install dependencies:

(run the following steps on both KA1 and KA2)
[root@centos-1 ~]# yum -y install gcc pcre-devel zlib-devel openssl-devel

[root@centos-1 ~]# cd /usr/local/src/
[root@centos-1 src]# wget http://nginx.org/download/nginx-1.9.7.tar.gz

Install nginx:
[root@centos-1 src]# tar -zvxf nginx-1.9.7.tar.gz
[root@centos-1 src]# cd nginx-1.9.7
[root@centos-1 nginx-1.9.7]# ./configure --prefix=/usr/local/nginx --user=nginx --group=nginx --with-http_ssl_module --with-http_flv_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre
[root@centos-1 nginx-1.9.7]# make && make install
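The --user/--group flags above only record which account nginx will run as; the build does not create the account itself. If it does not exist yet, a sketch for creating a no-login system user first:

[root@centos-1 ~]# groupadd -r nginx
[root@centos-1 ~]# useradd -r -g nginx -s /sbin/nologin -M nginx    # system user, no shell, no home directory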

[root@centos-1 ~]# yum install -y keepalived
[root@centos-1 ~]# yum install -y ipvsadm

(install nginx on web1 and web2)

[root@centos-2 ~]# yum -y install gcc pcre-devel zlib-devel openssl-devel

[root@centos-2 ~]# cd /usr/local/src/
[root@centos-2 src]# wget http://nginx.org/download/nginx-1.9.7.tar.gz

Install nginx:
[root@centos-2 src]# tar -zvxf nginx-1.9.7.tar.gz
[root@centos-2 src]# cd nginx-1.9.7
[root@centos-2 nginx-1.9.7]# ./configure --prefix=/usr/local/nginx --user=nginx --group=nginx --with-http_ssl_module --with-http_flv_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre
[root@centos-2 nginx-1.9.7]# make && make install

III. Configuration:

(on all servers)

[root@centos-1 ~]# cat /etc/sysconfig/selinux
SELINUX=disabled

[root@centos-1 ~]# getenforce
Disabled

[root@centos-1 ~]# service iptables stop
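Note that service iptables stop only lasts until the next reboot. To keep the firewall from coming back on boot (in a real deployment you would instead allow VRRP, IP protocol 112, and TCP 80 rather than disabling it outright):

[root@centos-1 ~]# chkconfig iptables off           # do not start iptables at boot
[root@centos-1 ~]# chkconfig --list iptables        # all runlevels should read "off"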

1. Configure keepalived:

(on KA1)

[root@centos-1 ~]# cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
        #failover@firewall.loc
        #sysadmin@firewall.loc
    }
    router_id LVS_DEVEL
}

vrrp_script chk_http_port {
    script "/opt/check_nginx.sh"
    interval 2
    weight -5
    fall 2
    rise 1
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.5.200
    }
    track_script {
        chk_http_port
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 50
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.5.210
    }
    track_script {
        chk_http_port
    }
}

virtual_server 192.168.5.200 80 {       # virtual service for VIP1 on port 80
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 192.168.5.131 80 {      # RS 1 in this pool
        weight 1                        # weight 1
        HTTP_GET {                      # health check for RS 1
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }

    real_server 192.168.5.132 80 {      # RS 2 in this pool
        weight 1                        # weight 1
        HTTP_GET {                      # health check for RS 2
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}

virtual_server 192.168.5.210 80 {       # virtual service for VIP2 on port 80
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 192.168.5.131 80 {      # RS 1 in this pool
        weight 1                        # weight 1
        HTTP_GET {                      # health check for RS 1
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }

    real_server 192.168.5.132 80 {      # RS 2 in this pool
        weight 1                        # weight 1
        HTTP_GET {                      # health check for RS 2
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}

(on KA2)

[root@centos-4 ~]# cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
        #failover@firewall.loc
        #sysadmin@firewall.loc
    }
    router_id LVS_DEVEL
}

vrrp_script chk_http_port {
    script "/opt/check_nginx.sh"
    interval 2
    weight -5
    fall 2
    rise 1
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.5.200
    }
    track_script {
        chk_http_port
    }
}

vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.5.210
    }
    track_script {
        chk_http_port
    }
}

virtual_server 192.168.5.200 80 {       # virtual service for VIP1 on port 80
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 192.168.5.131 80 {      # RS 1 in this pool
        weight 1                        # weight 1
        HTTP_GET {                      # health check for RS 1
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }

    real_server 192.168.5.132 80 {      # RS 2 in this pool
        weight 1                        # weight 1
        HTTP_GET {                      # health check for RS 2
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}

virtual_server 192.168.5.210 80 {       # virtual service for VIP2 on port 80
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 192.168.5.131 80 {      # RS 1 in this pool
        weight 1                        # weight 1
        HTTP_GET {                      # health check for RS 1
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }

    real_server 192.168.5.132 80 {      # RS 2 in this pool
        weight 1                        # weight 1
        HTTP_GET {                      # health check for RS 2
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
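The two files differ only in the state/priority pairs of the two VRRP instances; everything else (virtual_router_id, auth_pass, the virtual_server blocks) must be identical on both directors. A quick way to confirm that, assuming root SSH access from KA1 to KA2 and that the name centos-4 resolves:

[root@centos-1 ~]# diff /etc/keepalived/keepalived.conf <(ssh centos-4 cat /etc/keepalived/keepalived.conf)

Only the state MASTER/BACKUP and priority 100/90 lines should show up in the diff output.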

Write a script that monitors nginx:

The logic: check whether nginx is healthy on the local machine; if it is down, restart it, wait a moment and check again; if it is still down, stop trying, shut down keepalived and send an alert mail, and the other director then takes over the VIPs.

[root@centos-4 ~]# cat /opt/check_nginx.sh
#!/bin/bash
# count running nginx processes
check=$(ps -C nginx --no-heading | wc -l)
# grab this machine's eth0 address, used in the alert mail
IP=$(ip addr | grep eth0 | awk 'NR==2{print $2}' | awk -F '/' '{print $1}')

if [ "${check}" = "0" ]; then
    # nginx is down: try to restart it once
    /usr/local/nginx/sbin/nginx
    sleep 2
    # re-check after the restart attempt
    counter=$(ps -C nginx --no-heading | wc -l)
    if [ "${counter}" = "0" ]; then
        # still down: stop keepalived so the peer takes over, and send an alert
        /etc/init.d/keepalived stop
        echo "check $IP nginx is down" | mail -s "check keepalived nginx" *********@qq.com
    fi
fi

(KA1 uses the same monitoring script.)
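keepalived invokes this file through vrrp_script, so it must be executable on both directors, otherwise the check silently fails:

[root@centos-1 ~]# chmod +x /opt/check_nginx.sh
[root@centos-4 ~]# chmod +x /opt/check_nginx.sh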

2. On the two back-end web servers, bind the VIPs to loopback, add routes, and tune ARP; also configure nginx on both (the nginx configuration itself is not shown here):

(For convenience this is wrapped in a script; run it on both web1 and web2.)

[root@centos-2 ~]# cat lvs.sh
#!/bin/bash
# realserver config: bind vip, add route, tune arp
# legehappy
Vip1=192.168.5.200
Vip2=192.168.5.210
source /etc/rc.d/init.d/functions
case $1 in
start)
    echo "config vip route arp" > /tmp/lvs1.txt
    # bind both VIPs to loopback aliases with a /32 mask
    /sbin/ifconfig lo:0 $Vip1 broadcast $Vip1 netmask 255.255.255.255 up
    /sbin/ifconfig lo:1 $Vip2 broadcast $Vip2 netmask 255.255.255.255 up
    # suppress ARP for the VIPs so that only the directors answer for them
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    route add -host $Vip1 dev lo:0
    route add -host $Vip2 dev lo:1
    ;;
stop)
    echo "delete vip route arp" > /tmp/lvs2.txt
    /sbin/ifconfig lo:0 down
    /sbin/ifconfig lo:1 down
    # restore default ARP behaviour
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    route del -host $Vip1 dev lo:0
    route del -host $Vip2 dev lo:1
    ;;
*)
    echo "Usage: $0 (start | stop)"
    exit 1
esac
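Run the script on each real server and verify the loopback aliases (assuming the script sits in root's home directory); to survive a reboot, the start call can also be appended to /etc/rc.local:

[root@centos-2 ~]# chmod +x lvs.sh && ./lvs.sh start
[root@centos-2 ~]# ip addr show lo        # should show 192.168.5.200/32 and 192.168.5.210/32 on lo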

(Page content served by nginx on the two back ends:)

[root@centos-5 ~]# curl 192.168.5.131
10.2
[root@centos-5 ~]# curl 192.168.5.132
10.3
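For reference, a minimal way to produce those two distinguishable pages, assuming the default document root of this nginx build (the contents 10.2/10.3 are just markers for the two back ends):

[root@centos-2 ~]# echo "10.2" > /usr/local/nginx/html/index.html    # on web1
[root@centos-3 ~]# echo "10.3" > /usr/local/nginx/html/index.html    # on web2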

3. Start keepalived on both front-end directors. For VIP 192.168.5.200 centos-1 is the master; for VIP 192.168.5.210 centos-1 is the backup.

[root@centos-1 ~]# service keepalived start
[root@centos-4 ~]# service keepalived start

Check the log files:

[root@centos-1 ~]# cat /var/log/messages
Oct 19 22:00:22 centos-1 Keepalived_vrrp[46184]: VRRP_Instance(VI_2) Sending gratuitous ARPs on eth0 for 192.168.5.210
Oct 19 22:00:22 centos-1 Keepalived_healthcheckers[46183]: Netlink reflector reports IP 192.168.5.210 added
Oct 19 22:00:24 centos-1 Keepalived_vrrp[46184]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.5.200
Oct 19 22:00:27 centos-1 Keepalived_vrrp[46184]: VRRP_Instance(VI_2) Sending gratuitous ARPs on eth0 for 192.168.5.210

(Because keepalived started on KA1 first, both VIPs land on KA1 initially; once keepalived comes up on the second director, KA2 preempts VIP2 back.)

[root@centos-4 ~]# cat /var/log/messages
Oct 19 22:01:38 centos-4 Keepalived_healthcheckers[15009]: Netlink reflector reports IP 192.168.5.210 added
Oct 19 22:01:38 centos-4 avahi-daemon[1513]: Registering new address record for 192.168.5.210 on eth0.IPv4.
Oct 19 22:01:38 centos-4 Keepalived_vrrp[15010]: VRRP_Instance(VI_2) Sending gratuitous ARPs on eth0 for 192.168.5.210
Oct 19 22:01:43 centos-4 Keepalived_vrrp[15010]: VRRP_Instance(VI_2) Sending gratuitous ARPs on eth0 for 192.168.5.210

Check ip addr:

[root@centos-1 keepalived]# ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:0d:f3:5d brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.129/24 brd 192.168.5.255 scope global eth0
    inet 192.168.5.200/32 scope global eth0

[root@centos-4 keepalived]# ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:3a:84:30 brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.128/24 brd 192.168.5.255 scope global eth0
    inet 192.168.5.210/32 scope global eth0
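The virtual_server definitions should also be visible in the kernel's ipvs table on each director; the output below is approximately what ipvsadm reports for this setup:

[root@centos-1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.5.200:80 rr
  -> 192.168.5.131:80             Route   1      0          0
  -> 192.168.5.132:80             Route   1      0          0
TCP  192.168.5.210:80 rr
  -> 192.168.5.131:80             Route   1      0          0
  -> 192.168.5.132:80             Route   1      0          0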

(Reload nginx and restart keepalived on both KA1 and KA2:)

[root@centos-1 ~]# /usr/local/nginx/sbin/nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful    # only reload nginx after the config test passes
[root@centos-1 ~]# /usr/local/nginx/sbin/nginx -s reload

[root@centos-4 ~]# /usr/local/nginx/sbin/nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@centos-4 ~]# /usr/local/nginx/sbin/nginx -s reload

[root@centos-1 ~]# service keepalived restart
Stopping keepalived:                                       [  OK  ]
Starting keepalived:                                       [  OK  ]
[root@centos-4 ~]# service keepalived restart
Stopping keepalived:                                       [  OK  ]
Starting keepalived:                                       [  OK  ]
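During restarts it helps to watch the VRRP state transitions live on either director:

[root@centos-1 ~]# tail -f /var/log/messages | grep -i keepalived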

IV. Testing:

Verification (first make sure the directors themselves can reach the back-end real servers):

(1) Test the finished setup by accessing VIP1 and VIP2:

VIP1:
[root@centos-5 ~]# curl 192.168.5.200
10.2
[root@centos-5 ~]# curl 192.168.5.200
10.3
[root@centos-5 ~]# curl 192.168.5.200
10.2
[root@centos-5 ~]# curl 192.168.5.200
10.3

VIP2:
[root@centos-5 ~]# curl 192.168.5.210
10.3
[root@centos-5 ~]# curl 192.168.5.210
10.2
[root@centos-5 ~]# curl 192.168.5.210
10.3
[root@centos-5 ~]# curl 192.168.5.210
10.2
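A small loop from the client makes the rr rotation easier to eyeball; the output should alternate between 10.2 and 10.3:

[root@centos-5 ~]# for i in $(seq 1 6); do curl -s 192.168.5.200; done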

(2) Stop keepalived on KA1 (simulating a keepalived failure on KA1):

[root@centos-1 ~]# service keepalived stop
Stopping keepalived:
[root@centos-1 ~]# ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:0d:f3:5d brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.129/24 brd 192.168.5.255 scope global eth0
    inet6 fe80::20c:29ff:fe0d:f35d/64 scope link
       valid_lft forever preferred_lft forever

(ip addr on KA1 no longer shows any VIP.)

Check the log file on KA2:

[root@centos-4 ~]# cat /var/log/messages
Oct 19 23:20:46 centos-4 Keepalived_vrrp[15412]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.5.200
Oct 19 23:20:46 centos-4 avahi-daemon[1513]: Registering new address record for 192.168.5.200 on eth0.IPv4.
Oct 19 23:20:46 centos-4 Keepalived_healthcheckers[15411]: Netlink reflector reports IP 192.168.5.200 added
Oct 19 23:20:51 centos-4 Keepalived_vrrp[15412]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.5.200

(The log shows that VIP 192.168.5.200 has been taken over.)

Check ip addr on KA2:

[root@centos-4 ~]# ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:3a:84:30 brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.128/24 brd 192.168.5.255 scope global eth0
    inet 192.168.5.210/32 scope global eth0
    inet 192.168.5.200/32 scope global eth0

(Both VIPs are now on KA2.)

Verify that KA2 has taken over the service without interruption:

[root@centos-5 ~]# curl 192.168.5.200
10.3
[root@centos-5 ~]# curl 192.168.5.200
10.2
[root@centos-5 ~]# curl 192.168.5.210
10.3
[root@centos-5 ~]# curl 192.168.5.210
10.2
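To finish the failover test, start keepalived again on KA1; with keepalived's default preemption, the MASTER instance should reclaim VIP1 within a few advert_int intervals:

[root@centos-1 ~]# service keepalived start
[root@centos-1 ~]# ip addr show eth0 | grep 192.168.5.200    # VIP1 should reappear on KA1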

Original article: https://www.cnblogs.com/uestc2007/p/10734896.html
