Implementing Load-Balancing High Availability with LVS/DR + keepalived

1. Introduction to keepalived

keepalived is software for building highly available systems in distributed deployments; combined with LVS (Linux Virtual Server), it solves the problem of a single machine going down.
keepalived is a VRRP-based solution for making IPVS highly available. In an LVS setup, if the front-end director fails, the back-end realservers can no longer receive or answer requests, so keeping the director highly available is critical; otherwise the back-end servers cannot serve at all. keepalived solves exactly this single point of failure (such as the LVS director failing). Its working principle: two servers run keepalived, one as MASTER and the other as BACKUP. Under normal conditions, all packet forwarding and ARP request handling are done by the MASTER; as soon as the MASTER fails, the BACKUP immediately takes over its work, and this switchover is very fast.

2. Test environment

The test uses four virtual machines running CentOS 6.6 x86_64; roles and IP addresses are as follows:


Server role          IP address
LVS VIP              192.168.214.89
Keepalived Master    192.168.214.85
Keepalived Backup    192.168.214.86
Realserver A         192.168.214.87
Realserver B         192.168.214.88

3. Software installation

1. Install ipvsadm, the package LVS needs

yum install -y ipvsadm

ln -s /usr/src/kernels/`uname -r` /usr/src/linux

lsmod |grep ip_vs

# Note: on CentOS 6.x, LVS uses ipvsadm 1.26, and the dependencies must be installed first: yum install -y libnl* popt*

Running ipvsadm (or modprobe ip_vs) loads the ip_vs module into the kernel:

[root@test85 ~]# ipvsadm -L -n

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port          Forward Weight ActiveConn InActConn

#IP Virtual Server version 1.2.1 ---- version of the ip_vs kernel module

2. Install keepalived

yum install -y keepalived

chkconfig keepalived on

Note: on CentOS 7 and later, enable start-on-boot with systemctl enable keepalived instead.

4. keepalived configuration

First, the configuration on the keepalived master, 214.85:

[root@test85 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   # Alert e-mail settings
   notification_email {
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server mail.test.com
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_sync_group VG1 {
   group {
      VI_1
   }
}

vrrp_instance VI_1 {
    state MASTER  # role of this node: MASTER = primary, BACKUP = standby
    interface eth0  # interface monitored for HA
    lvs_sync_daemon_interface eth0
    virtual_router_id 55
    # Virtual router ID: a number that uniquely identifies this VRRP instance;
    # it must be identical on the MASTER and BACKUP of the same vrrp_instance
    priority 100  # higher number wins the election; within one vrrp_instance the MASTER's priority must exceed the BACKUP's
    advert_int 1  # interval, in seconds, between MASTER/BACKUP synchronization checks
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {    # the virtual IP address (VIP)
        192.168.214.89
    }
}

virtual_server 192.168.214.89 80 {
    delay_loop 6            # poll realserver state every 6 seconds
    lb_algo rr              # LVS scheduling algorithm: round robin
    lb_kind DR              # use LVS DR (direct routing) mode
    #nat_mask 255.255.255.0
    persistence_timeout 10  # connections from the same client IP go to the same realserver for 10 s
    protocol TCP            # check realserver state over TCP

    real_server 192.168.214.87 80 {
        weight 100          # weight
        TCP_CHECK {
            connect_timeout 3   # mark down after 3 s without a response
            connect_port 80
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.214.88 80 {
        weight 100
        TCP_CHECK {
            connect_timeout 3
            connect_port 80
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
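TCP_CHECK only verifies that the port accepts connections; an application can accept connections yet still be unhealthy. keepalived also ships an HTTP_GET checker that fetches a URL and compares the status code. A sketch of what such a block could look like in place of TCP_CHECK (the /health path and expected status code are assumptions about your application, not part of this lab):

```
        HTTP_GET {
            url {
              path /health
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
```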

Next, the configuration on the keepalived backup, 214.86:

! Configuration File for keepalived

global_defs {
   notification_email {
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server mail.test.com
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_sync_group VG1 {
   group {
      VI_1
   }
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 55
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.214.89
    }
}

virtual_server 192.168.214.89 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    #nat_mask 255.255.255.0
    persistence_timeout 10
    protocol TCP

    real_server 192.168.214.87 80 {
        weight 100
       TCP_CHECK {
            connect_timeout 3
            connect_port 80
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.214.88 80 {
        weight 100
       TCP_CHECK {
            connect_timeout 3
            connect_port 80
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
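The backup's file is nearly identical to the master's: in this setup only the role and the priority differ. Shown diff-style for emphasis (a sketch, not a literal diff of the two files):

```
-    state MASTER      # on 214.85
+    state BACKUP      # on 214.86
-    priority 100
+    priority 90
```

Keeping everything else (virtual_router_id, auth_pass, the virtual_server block) identical is what lets the pair fail over cleanly.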

5. Realserver (back-end) setup

DR mode requires the following script to be run on each back-end realserver:

#!/bin/bash
# description: Configure the VIP on the realserver's loopback interface
# Written by: Charles

VIP1=192.168.214.89
. /etc/rc.d/init.d/functions

case "$1" in
start)
       ifconfig lo:0 $VIP1 netmask 255.255.255.255 broadcast $VIP1
       /sbin/route add -host $VIP1 dev lo:0
       echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
       echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
       echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
       echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
       sysctl -p >/dev/null 2>&1
       echo "RealServer started OK"
       ;;
stop)
       ifconfig lo:0 down
       route del $VIP1 >/dev/null 2>&1
       echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
       echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
       echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
       echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
       echo "RealServer stopped"
       ;;
*)
       echo "Usage: $0 {start|stop}"
       exit 1
esac

exit 0

# Run realserver.sh start to enable and realserver.sh stop to disable

# Make the script mode 755 and invoke it from rc.local so it runs at boot
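If you prefer the ARP suppression settings to survive a reboot without relying on the script, the same values can be made persistent in /etc/sysctl.conf (an equivalent form of the echo lines above):

```
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```

Apply them with sysctl -p. These settings stop the realserver from answering ARP requests for the VIP, so only the director's MAC is associated with it on the LAN.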

6. Start keepalived and inspect the result

Start the keepalived service on both 214.85 and 214.86.

On the master, 214.85:

ip addr shows that the VIP is now bound to eth0:

[root@test85 ~]# ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet 127.0.0.1/8 scope host lo

inet6 ::1/128 scope host

valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

link/ether 00:0c:29:85:7a:67 brd ff:ff:ff:ff:ff:ff

inet 192.168.214.85/24 brd 192.168.214.255 scope global eth0

inet 192.168.214.89/32 scope global eth0

inet6 fe80::20c:29ff:fe85:7a67/64 scope link

valid_lft forever preferred_lft forever

Check the log on 214.85; it has successfully entered MASTER state:

[root@test85 ~]# tail -f /var/log/messages

May 4 14:12:34 test85 Keepalived_vrrp[7977]: VRRP_Instance(VI_1) Entering MASTER STATE

May 4 14:12:34 test85 Keepalived_vrrp[7977]: VRRP_Instance(VI_1) setting protocol VIPs.

May 4 14:12:34 test85 Keepalived_healthcheckers[7975]: Netlink reflector reports IP 192.168.214.89 added

May 4 14:12:34 test85 Keepalived_vrrp[7977]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.214.89

May 4 14:12:34 test85 Keepalived_vrrp[7977]: VRRP_Group(VG1) Syncing instances to MASTER state

May 4 14:12:36 test85 ntpd[1148]: Listen normally on 7 eth0 192.168.214.89 UDP 123

May 4 14:12:36 test85 ntpd[1148]: peers refreshed

May 4 14:12:39 test85 Keepalived_vrrp[7977]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.214.89

May 4 14:12:40 test85 root[7924] 192.168.5.80 53823 192.168.214.85 22: #1525414360

May 4 14:12:40 test85 root[7924] 192.168.5.80 53823 192.168.214.85 22: ip addr

Check the log on 214.86; it has successfully entered BACKUP state:

May 4 14:12:37 web86 Keepalived_vrrp[31009]: Using LinkWatch kernel netlink reflector...

May 4 14:12:37 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) Entering BACKUP STATE

May 4 14:12:37 web86 Keepalived_vrrp[31009]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]

May 4 14:12:37 web86 Keepalived_healthcheckers[31007]: Opening file '/etc/keepalived/keepalived.conf'.

May 4 14:12:37 web86 Keepalived_healthcheckers[31007]: Configuration is using : 14713 Bytes

May 4 14:12:37 web86 Keepalived_healthcheckers[31007]: Using LinkWatch kernel netlink reflector...

May 4 14:12:37 web86 Keepalived_healthcheckers[31007]: Activating healthchecker for service [192.168.214.87]:80

May 4 14:12:37 web86 Keepalived_healthcheckers[31007]: Activating healthchecker for service [192.168.214.88]:80

After running the script on a realserver, the interface listing shows the VIP successfully bound to the loopback interface:

[[email protected] ~]# ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet 127.0.0.1/8 scope host lo

inet 192.168.214.89/32 brd 192.168.214.89 scope global lo:0

inet6 ::1/128 scope host

valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

link/ether 00:0c:29:38:31:ad brd ff:ff:ff:ff:ff:ff

inet 192.168.214.87/24 brd 192.168.214.255 scope global eth0

inet6 fe80::20c:29ff:fe38:31ad/64 scope link

valid_lft forever preferred_lft forever

Check the LVS connection table with ipvsadm -L -n:
[root@test85 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.214.89:80 rr persistent 10
  -> 192.168.214.87:80            Route   100    2          2   
  -> 192.168.214.88:80            Route   100    0          0
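As a side note, this table is easy to post-process. A small sketch that pulls each realserver address and weight out of saved ipvsadm -L -n output with awk; it uses a here-doc copy of the output above so it runs anywhere, without a live LVS:

```shell
# Realserver lines start with "->" followed by a numeric address;
# field 2 is address:port and field 4 is the weight.
awk '$1 == "->" && $2 ~ /^[0-9]/ { print $2, "weight=" $4 }' <<'EOF'
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.214.89:80 rr persistent 10
  -> 192.168.214.87:80            Route   100    2          2
  -> 192.168.214.88:80            Route   100    0          0
EOF
```

On a live director you would pipe `ipvsadm -L -n | awk ...` instead of the here-doc.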

7. Testing keepalived

Use the VIP, 192.168.214.89, to access the pages served by the back ends 192.168.214.87 and 192.168.214.88.
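With lb_algo rr (and setting persistence_timeout aside for a moment), successive new connections alternate between the realservers. A toy sketch of that schedule, runnable without the cluster:

```shell
# Round robin: connection N goes to server (N-1) mod <number of servers>.
servers=(192.168.214.87 192.168.214.88)
for conn in 1 2 3 4; do
  idx=$(( (conn - 1) % ${#servers[@]} ))
  echo "connection $conn -> ${servers[$idx]}"
done
```

In practice persistence_timeout 10 pins a client IP to one realserver for 10 seconds, so repeated requests from the same browser will not visibly alternate.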

Once normal access through the VIP works, simulate LVS cluster failures.

First, take the keepalived master, 214.85, down and check whether the backup takes over and the VIP floats over to it.

The log on 214.86 shows the backup successfully switched to MASTER state and acquired the VIP:

May 4 14:35:34 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) Transition to MASTER STATE

May 4 14:35:34 web86 Keepalived_vrrp[31009]: VRRP_Group(VG1) Syncing instances to MASTER state

May 4 14:35:35 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) Entering MASTER STATE

May 4 14:35:35 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) setting protocol VIPs.

May 4 14:35:35 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.214.89

May 4 14:35:35 web86 Keepalived_healthcheckers[31007]: Netlink reflector reports IP 192.168.214.89 added

May 4 14:35:36 web86 ntpd[1230]: Listen normally on 7 eth0 192.168.214.89 UDP 123

May 4 14:35:36 web86 ntpd[1230]: peers refreshed

May 4 14:35:40 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.214.89

Then bring 214.85 back up. Because it has the higher priority, it reclaims MASTER from 214.86, and 214.86 returns to its original BACKUP role.

The log on 214.85 shows it returning to MASTER state:

May 4 14:41:55 test85 Keepalived_vrrp[8066]: VRRP_Instance(VI_1) Transition to MASTER STATE

May 4 14:41:55 test85 Keepalived_vrrp[8066]: VRRP_Instance(VI_1) Received lower prio advert, forcing new election

May 4 14:41:55 test85 Keepalived_vrrp[8066]: VRRP_Group(VG1) Syncing instances to MASTER state

May 4 14:41:56 test85 Keepalived_vrrp[8066]: VRRP_Instance(VI_1) Entering MASTER STATE

May 4 14:41:56 test85 Keepalived_vrrp[8066]: VRRP_Instance(VI_1) setting protocol VIPs.

May 4 14:41:56 test85 Keepalived_vrrp[8066]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.214.89

May 4 14:41:56 test85 Keepalived_healthcheckers[8064]: Netlink reflector reports IP 192.168.214.89 added

May 4 14:41:58 test85 ntpd[1148]: Listen normally on 8 eth0 192.168.214.89 UDP 123

May 4 14:41:58 test85 ntpd[1148]: peers refreshed

May 4 14:42:01 test85 Keepalived_vrrp[8066]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.214.89

The log on 214.86 shows it received a higher-priority advertisement and fell back from MASTER to BACKUP:


May 4 14:41:55 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) Received higher prio advert

May 4 14:41:55 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) Entering BACKUP STATE

May 4 14:41:55 web86 Keepalived_vrrp[31009]: VRRP_Instance(VI_1) removing protocol VIPs.

May 4 14:41:55 web86 Keepalived_vrrp[31009]: VRRP_Group(VG1) Syncing instances to BACKUP state

May 4 14:41:55 web86 Keepalived_healthcheckers[31007]: Netlink reflector reports IP 192.168.214.89 removed

May 4 14:41:56 web86 ntpd[1230]: Deleting interface #7 eth0, 192.168.214.89#123, interface stats: received=0, sent=0, dropped=0, active_time=380 secs

Finally, simulate the service on realserver 214.87 going down and confirm that requests to the VIP reach only 214.88.

The log shows that keepalived detected port 80 on realserver 214.87 was unreachable and removed it from the virtual server's pool:

May 4 14:48:00 test85 Keepalived_healthcheckers[8064]: TCP connection to [192.168.214.87]:80 failed !!!

May 4 14:48:00 test85 Keepalived_healthcheckers[8064]: Removing service [192.168.214.87]:80 from VS [192.168.214.89]:80

When the health check detected that the service on 214.87 was back, keepalived re-added it to the pool:

May 4 14:52:55 test85 Keepalived_healthcheckers[8064]: TCP connection to [192.168.214.87]:80 success.

May 4 14:52:55 test85 Keepalived_healthcheckers[8064]: Adding service [192.168.214.87]:80 to VS [192.168.214.89]:80
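The removal and re-addition above are driven by the TCP_CHECK configured earlier, which is essentially a timed TCP connect. A manual sketch of the same probe using bash's /dev/tcp, handy for verifying a realserver before keepalived does (the address is this lab's; substitute your own):

```shell
# Probe a host:port the way TCP_CHECK does: attempt a connect with a 3 s timeout.
check_rs() {
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "[$1]:$2 up"
  else
    echo "[$1]:$2 down"
  fi
}
check_rs 192.168.214.87 80
```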


Original article: http://blog.51cto.com/5ydycm/2118107

