LVS + Keepalived

1. Common load-balancing software:

Nginx    application-layer load balancing

LVS      network-layer (layer 4) load balancing

HAProxy  application-layer load balancing

Common load-balancing hardware:

F5, Citrix NetScaler

2. The four LVS working modes

1) VS/NAT mode (Network Address Translation)

Load is balanced through a NAT translation table; both incoming packets and replies pass through the director and require a table lookup.

2) VS/TUN mode (IP tunneling)

The director encapsulates each request in an extra IP header and tunnels it to a real server, which replies to the client directly.

3) VS/DR mode (Direct Routing)

The director rewrites the destination MAC address of each frame to that of a real server; the real server replies to the client directly.

4) FULLNAT mode

Both source and destination addresses are translated (double NAT).
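
Before configuring any of these modes it is worth confirming that the kernel has IPVS support. A minimal sanity check (module names as shipped with the CentOS 6-era kernels used below):

modprobe ip_vs
lsmod | grep ip_vs     #the ip_vs module should now be listed
cat /proc/net/ip_vs    #the kernel's (currently empty) virtual server table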

3. LVS configuration (NAT mode)

Three hosts: one acts as the load-balancing director (dir), the other two serve the application (rs1 and rs2).

hostname dir      #on the director
logout

hostname rs1      #on the first real server
logout

hostname rs2      #on the second real server
logout
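
Note that a hostname set this way does not survive a reboot. On CentOS 6 (assumed throughout, given the init scripts used later) it can be persisted like so, shown here for dir:

sed -i 's/^HOSTNAME=.*/HOSTNAME=dir/' /etc/sysconfig/network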

=============== dir configuration

yum install ipvsadm -y
#install the ipvsadm package on dir
vim /usr/local/sbin/lvs_nat.sh
#! /bin/bash
# enable IP forwarding on the director:
echo 1 > /proc/sys/net/ipv4/ip_forward
# disable ICMP redirects:
echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/eth0/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/eth1/send_redirects
# set up the NAT firewall on the director
iptables -t nat -F
iptables -t nat -X
iptables -t nat -A POSTROUTING -s 192.168.2.0/24  -j MASQUERADE   #masquerade the internal subnet
# configure ipvsadm on the director
IPVSADM='/sbin/ipvsadm'
$IPVSADM -C                                               #clear any existing rules
$IPVSADM -A -t 192.168.1.200:80 -s rr                     #add the virtual service, round-robin scheduling
$IPVSADM -a -t 192.168.1.200:80 -r 192.168.2.1:80 -m      #add rs1; -m = masquerading (NAT forwarding)
$IPVSADM -a -t 192.168.1.200:80 -r 192.168.2.2:80 -m      #add rs2
/bin/bash /usr/local/sbin/lvs_nat.sh
#run the script
ipvsadm -ln   
#view the virtual server table
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.200:80 rr
  -> 192.168.2.1:80               Masq    1      0          0         
  -> 192.168.2.2:80               Masq    1      0          0

==================== rs configuration

yum install nginx -y
#install nginx on both rs hosts as a test backend
echo "111master" > /usr/share/nginx/html/index.html      #on rs1
echo "222slave" > /usr/share/nginx/html/index.html       #on rs2
[root@client ~]# curl 192.168.1.200:80
111master
[root@client ~]# curl 192.168.1.200:80
222slave
[root@client ~]# curl 192.168.1.200:80
111master
[root@client ~]# curl 192.168.1.200:80
222slave

The test succeeds: requests alternate between the two real servers in round-robin order.

4. LVS configuration (DR mode)

ipvsadm -C
ipvsadm -ln
iptables -t nat -F
#clear the old LVS and NAT rules from the previous setup
In DR mode the real servers' gateways do not point at dir, and all three hosts sit on the same subnet. This is relatively wasteful of public IPs: four are needed in total (three host IPs plus the VIP).
vim /usr/local/sbin/lvs_dr.sh
#! /bin/bash
echo 1 > /proc/sys/net/ipv4/ip_forward
ipv=/sbin/ipvsadm
vip=192.168.1.205
rs1=192.168.1.201
rs2=192.168.1.202
ifconfig eth0:0 $vip broadcast $vip netmask 255.255.255.255 up   #bind the VIP to an alias interface
route add -host $vip dev eth0:0
$ipv -C
$ipv -A -t $vip:80 -s rr
$ipv -a -t $vip:80 -r $rs1:80 -g -w 1    #-g = gatewaying (direct routing), weight 1
$ipv -a -t $vip:80 -r $rs2:80 -g -w 1
/bin/bash /usr/local/sbin/lvs_dr.sh            #run the script
ipvsadm -ln                                    #view the rules
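
A quick check that the VIP actually came up on the alias interface before testing:

ip add show eth0 | grep 192.168.1.205    #the VIP should appear as a secondary address on eth0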

==================== configuration on both rs hosts

vim /usr/local/sbin/lvs_dr_rs.sh
#! /bin/bash
vip=192.168.1.205
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up   #bind the VIP to the loopback so the rs accepts packets addressed to it
route add -host $vip lo:0
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
/bin/bash  /usr/local/sbin/lvs_dr_rs.sh        #run the script
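
The arp_ignore=1/arp_announce=2 settings stop each rs from answering or advertising ARP for the VIP, so clients only ever learn dir's MAC address. To verify that the sysctls and the loopback VIP took effect on an rs:

sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce
ip add show lo    #lo should now carry 192.168.1.205/32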

Testing: it is best to bring up one more Linux host as a client, since browsers cache responses.
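
For example, from that client (any host on the 192.168.1.0/24 network other than the three above):

for i in 1 2 3 4; do curl -s 192.168.1.205; done    #responses should alternate between the two rs pages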

5. LVS + Keepalived

Two hosts run keepalived as master and backup; here dir is the master and rs2 doubles as the backup.
[root@dir ~]# ipvsadm -C
#clear the old rules
yum install -y keepalived ipvsadm
#install on both dir and rs2
cp  /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak 
>  /etc/keepalived/keepalived.conf
vim  /etc/keepalived/keepalived.conf
#edit the configuration file on dir
vrrp_instance VI_1 {
    state MASTER   #use BACKUP on the backup server
    interface eth0
    virtual_router_id 51
    priority 100  #use 90 on the backup server
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.205
    }
}
virtual_server 192.168.1.205 80 {
    delay_loop 6                  #(check realserver health every 6 seconds)
    lb_algo wlc                   #(LVS scheduling algorithm: weighted least-connection)
    lb_kind DR                    #(Direct Routing)
    persistence_timeout 60        #(connections from the same client IP go to the same realserver for 60 seconds)
    protocol TCP                  #(use TCP to check realserver health)
    real_server 192.168.1.201 80 {
        weight 100                #(weight)
        TCP_CHECK {
        connect_timeout 10        #(time out after 10 seconds with no response)
        nb_get_retry 3
        delay_before_retry 3
        connect_port 80
        }
    }
real_server 192.168.1.202 80 {
        weight 100
        TCP_CHECK {
        connect_timeout 10
        nb_get_retry 3
        delay_before_retry 3
        connect_port 80
        }
     }
}
/etc/init.d/keepalived start                       #start keepalived
Starting keepalived:                               [  OK  ]
ip add                                             #check whether the virtual IP is up
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e2:dc:da brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.200/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.205/32 scope global eth0
    inet6 fe80::20c:29ff:fee2:dcda/64 scope link 
       valid_lft forever preferred_lft forever
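
The 192.168.1.205/32 entry on eth0 confirms the master holds the VIP; a terser check for the same thing:

ip add | grep 192.168.1.205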

=================== backup keepalived configuration

cp  /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak 
>  /etc/keepalived/keepalived.conf
vim  /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state BACKUP   #this host is the backup server
    interface eth0
    virtual_router_id 51
    priority 90  #lower than the master's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.205
    }
}
virtual_server 192.168.1.205 80 {
    delay_loop 6                  #(check realserver health every 6 seconds)
    lb_algo wlc                   #(LVS scheduling algorithm: weighted least-connection)
    lb_kind DR                    #(Direct Routing)
    persistence_timeout 60        #(connections from the same client IP go to the same realserver for 60 seconds)
    protocol TCP                  #(use TCP to check realserver health)
    real_server 192.168.1.201 80 {
        weight 100                #(weight)
        TCP_CHECK {
        connect_timeout 10        #(time out after 10 seconds with no response)
        nb_get_retry 3
        delay_before_retry 3
        connect_port 80
        }
    }
real_server 192.168.1.202 80 {
        weight 100
        TCP_CHECK {
        connect_timeout 10
        nb_get_retry 3
        delay_before_retry 3
        connect_port 80
        }
     }
}
/etc/init.d/keepalived start         #start keepalived on the backup
/etc/init.d/ipvsadm start
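While the master is alive, the backup should hold the LVS rules but must not bind the VIP on eth0 (rs2 may still carry it on lo from the DR real-server setup). To confirm on rs2:

ip add show eth0 | grep 192.168.1.205 || echo "VIP not on eth0 (expected while the master is up)"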
=====================
Start the Nginx service on both rs hosts. If the entries below are missing from the table, check whether iptables has been stopped.
[root@dir ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.205:80 wlc persistent 60
  -> 192.168.1.201:80             Route   100    0          0         
  -> 192.168.1.202:80             Route   100    0          0

Success.


Failover test:

Take down rs1's service NIC.
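
One way to do that (assuming the service NIC on rs1 is eth0, as in the director script):

ifdown eth0    #on rs1; ifconfig eth0 down also works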

[root@dir ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.205:80 wlc persistent 60
  -> 192.168.1.202:80             Route   100    0          0         
Bring the NIC back up:
[root@dir ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.205:80 wlc persistent 60
  -> 192.168.1.201:80             Route   100    0          0         
  -> 192.168.1.202:80             Route   100    0          0

Keepalived high-availability test

/etc/init.d/keepalived stop       #stop keepalived on the master (dir)
[root@rs2 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.205:80 wlc persistent 60
  -> 192.168.1.201:80             Route   100    0          0         
  -> 192.168.1.202:80             Local   100    0          0
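
Note the Local forward for 192.168.1.202: the table is now being served by rs2 itself. To confirm the failover on rs2:

ip add show eth0 | grep 192.168.1.205    #the VIP should now be bound on the backup
tail /var/log/messages                   #keepalived logs the VRRP transition to MASTER state here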

Success: the backup has taken over the virtual server.
