Building an LVS Load-Balancing Environment (keepalived + LVS + nginx)

Introduction to LVS:

An LVS cluster can run in one of three forwarding modes: DR, TUN, and NAT. It can load-balance WWW, FTP, mail, and other services. Below we build a load-balanced web service as a working example and walk through the configuration of a DR-mode LVS cluster.

Director Server (DS): the core server of LVS. It behaves much like a router: it holds the routing table that implements the LVS function and uses it to dispatch user requests to the application servers (Real Servers) in the server pool. It also monitors the Real Servers: when a Real Server becomes unavailable it is removed from the LVS table, and when it recovers it is added back.

Real Server (RS): one or more web, mail, FTP, DNS, or video servers; the Real Servers can be connected over a LAN or a WAN. In practice the director can also double as a Real Server.

The three LVS forwarding modes:

NAT: the director rewrites the destination address and port of each request to those of the chosen Real Server and forwards the packet to it. When the Real Server returns data to the user, the reply must pass back through the director, which rewrites the source address and port to the virtual IP (VIP) and port before sending the data on to the user, completing the scheduling cycle.

Drawback: the director carries a heavy load, since traffic in both directions passes through it.

TUN (IP tunneling): the director forwards requests to the Real Servers through IP tunnels, and the Real Servers respond to users directly, no longer passing through the director. The director and the Real Servers can be on different networks. In TUN mode the director handles only the inbound request packets, which raises throughput.

Drawback: IP tunneling adds encapsulation overhead.

DR (direct routing): the virtual server is implemented with direct routing. The director rewrites the destination MAC address of each request and sends it to a Real Server, which replies to the client directly, avoiding the tunneling overhead. Of the three modes, DR performs best.

Drawback: the director and the Real Servers must be on the same physical network segment.
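For orientation, the three modes map directly onto ipvsadm's forwarding flags. The sketch below is not part of this article's procedure (keepalived will create these entries for us later); it only shows how a virtual service and a real server would be added by hand, reusing this article's example addresses. Pick exactly one forwarding flag per real server:

ipvsadm -A -t 10.2.16.252:80 -s rr                    # create the virtual service (VIP:port)
ipvsadm -a -t 10.2.16.252:80 -r 10.2.16.253:80 -m     # -m = NAT (masquerading)
ipvsadm -a -t 10.2.16.252:80 -r 10.2.16.253:80 -i     # -i = TUN (IP tunneling)
ipvsadm -a -t 10.2.16.252:80 -r 10.2.16.253:80 -g     # -g = DR (direct routing)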

LVS scheduling algorithms:

LVS chooses which Real Server answers each request dynamically, according to the Real Servers' load. IPVS implements eight scheduling algorithms; four of them are described here (see the ipvsadm sketch after this list):

rr (round-robin):

Distributes requests evenly across the Real Servers, without considering load.

wrr (weighted round-robin):

Assigns high or low weights to the Real Servers and distributes requests in proportion to them.

lc (least-connection):

Dynamically assigns each new connection to the Real Server with the fewest established connections.

wlc (weighted least-connection):

Dynamically weights the Real Servers and assigns new connections so that each Real Server's number of established connections stays roughly proportional to its weight.
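When driving IPVS by hand, the scheduler is the -s argument given when the virtual service is created, and -E edits an existing service; a minimal sketch using the algorithm names above:

ipvsadm -A -t 10.2.16.252:80 -s wlc    # create the virtual service with weighted least-connection
ipvsadm -E -t 10.2.16.252:80 -s rr     # switch the existing service to round-robin

In this article keepalived sets the scheduler instead, through the lb_algo directive shown later.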

Environment:

This example uses three hosts: one Director Server (the scheduler) and two web Real Servers.

Real IP of the DS: 10.2.16.250

VIP: 10.2.16.252

Real IP of RealServer-1: 10.2.16.253

Real IP of RealServer-2: 10.2.16.254

Note: this example uses LVS in DR mode with rr (round-robin) scheduling.

Installing and configuring LVS with keepalived

1. Install keepalived

[root@proxy ~]# tar -zxvf keepalived-1.2.13.tar.gz -C ./
[root@proxy ~]# cd keepalived-1.2.13
[root@proxy keepalived-1.2.13]# ./configure --sysconf=/etc/ --with-kernel-dir=/usr/src/kernels/2.6.32-358.el6.x86_64/
[root@proxy keepalived-1.2.13]# make && make install
[root@proxy keepalived-1.2.13]# ln /usr/local/sbin/keepalived /sbin/
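If ./configure fails with missing headers or libraries, the usual suspects on CentOS 6 are the build toolchain and a few -devel packages. The exact list below is an assumption, so adjust it to whatever configure complains about:

yum -y install gcc make openssl-devel popt-devel libnl-devel kernel-devel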

2. Install LVS

yum -y install ipvsadm*

Enable IP forwarding:

[root@proxy ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
[root@proxy ~]# sysctl -p
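To confirm the change took effect, read the value back; it should print 1:

[root@proxy ~]# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1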

3. Configure keepalived and LVS on the director server

[root@proxy ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0                     # the director's physical NIC
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {                # the LVS VIP
        10.2.16.252
    }
}

virtual_server 10.2.16.252 80 {        # the VIP and port that LVS exposes to clients
    delay_loop 6                       # health-check interval, in seconds
    lb_algo rr                         # scheduling algorithm; rr = round-robin
    lb_kind DR                         # forwarding mode: NAT / TUN / DR
    nat_mask 255.255.255.0
#    persistence_timeout 50            # session-persistence time in seconds; useful for session sharing on dynamic pages
    protocol TCP                       # forwarding protocol

    real_server 10.2.16.253 80 {       # real IP and port of real server 1
        weight 1                       # weight; higher values receive a larger share of requests
        TCP_CHECK {                    # health check for this real server
            connect_timeout 3          # consider the check failed after 3 seconds without a response
            nb_get_retry 3             # number of retries
            delay_before_retry 3       # seconds between retries
        }
    }

    real_server 10.2.16.254 80 {       # real server 2
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

4. Configure the Real Servers

Because DR-mode scheduling is used, the Real Servers answer clients directly with the LVS VIP as the source address, so the VIP must be brought up on each Real Server's loopback interface (lo) for that communication to work.

1. Here is a small init script that takes care of the VIP setup:

[root@realserver1 ~]# cat /etc/init.d/lvsrs
#!/bin/bash
# description: bring the LVS VIP up/down on this Real Server
VIP=10.2.16.252
. /etc/rc.d/init.d/functions

case "$1" in
start)
        echo " Start LVS of Real Server "
        /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
        /sbin/route add -host $VIP dev lo:0
        # The next four lines stop this host from answering ARP requests for
        # the VIP, so that only the director responds to it and the network
        # is not confused by duplicate answers.
        echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
        echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
        echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
        ;;
stop)
        /sbin/ifconfig lo:0 down
        echo "close LVS Real Server"
        echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
        echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
        echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
        ;;
*)
        echo "Usage: $0 {start|stop}"
        exit 1
esac
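Before starting it, make the script executable. Optionally, if you want the four ARP settings to survive a reboot independently of the script, they can also be appended to /etc/sysctl.conf; a sketch of both steps, run on each Real Server:

chmod +x /etc/init.d/lvsrs

cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
EOF
sysctl -p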

2. Start the script:

[root@realserver1 ~]# service lvsrs start
Start LVS of Real Server

3. Check the VIP on the lo:0 virtual interface:

[root@realserver1 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:A2:C4:9F
          inet addr:10.2.16.253  Bcast:10.2.16.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fea2:c49f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:365834 errors:0 dropped:0 overruns:0 frame:0
          TX packets:43393 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:33998241 (32.4 MiB)  TX bytes:4007256 (3.8 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1482 (1.4 KiB)  TX bytes:1482 (1.4 KiB)

lo:0      Link encap:Local Loopback
          inet addr:10.2.16.252  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:16436  Metric:1

4. Make sure nginx is serving normally:

[root@realserver1 ~]# netstat -anptul
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
tcp        0      0 0.0.0.0:80                  0.0.0.0:*                   LISTEN      1024/nginx
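A quick functional check from the Real Server itself; an HTTP/1.1 200 response header means nginx is answering:

[root@realserver1 ~]# curl -I http://127.0.0.1/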

5. Repeat the same four steps on real_server2.

6. Start keepalived on the director:

[root@proxy ~]# service keepalived start
Starting keepalived:                                       [  OK  ]

Check the keepalived startup log to make sure everything is normal:

[root@proxy ~]# tail -f /var/log/messages
May 24 10:06:57 proxy Keepalived[2767]: Starting Keepalived v1.2.13 (05/24,2014)
May 24 10:06:57 proxy Keepalived[2768]: Starting Healthcheck child process, pid=2770
May 24 10:06:57 proxy Keepalived[2768]: Starting VRRP child process, pid=2771
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Netlink reflector reports IP 10.2.16.250 added
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Netlink reflector reports IP 10.2.16.250 added
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Netlink reflector reports IP fe80::20c:29ff:fee6:ce1a added
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Registering Kernel netlink reflector
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Registering Kernel netlink command channel
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Netlink reflector reports IP fe80::20c:29ff:fee6:ce1a added
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Registering Kernel netlink reflector
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Registering Kernel netlink command channel
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Registering gratuitous ARP shared channel
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Opening file '/etc/keepalived/keepalived.conf'.
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Configuration is using : 63303 Bytes
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Using LinkWatch kernel netlink reflector...
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Opening file '/etc/keepalived/keepalived.conf'.
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Configuration is using : 14558 Bytes
May 24 10:06:57 proxy Keepalived_vrrp[2771]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Using LinkWatch kernel netlink reflector...
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Activating healthchecker for service [10.2.16.253]:80
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Activating healthchecker for service [10.2.16.254]:80
May 24 10:06:58 proxy Keepalived_vrrp[2771]: VRRP_Instance(VI_1) Transition to MASTER STATE
May 24 10:06:59 proxy Keepalived_vrrp[2771]: VRRP_Instance(VI_1) Entering MASTER STATE
May 24 10:06:59 proxy Keepalived_vrrp[2771]: VRRP_Instance(VI_1) setting protocol VIPs.
May 24 10:06:59 proxy Keepalived_vrrp[2771]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 10.2.16.252
May 24 10:06:59 proxy Keepalived_healthcheckers[2770]: Netlink reflector reports IP 10.2.16.252 added
May 24 10:07:04 proxy Keepalived_vrrp[2771]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 10.2.16.252

Everything looks normal!
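At this point the VIP should be bound to eth0 on the director; a quick check (keepalived adds it as a /32 address, so expect output along these lines):

[root@proxy ~]# ip addr show eth0 | grep 10.2.16.252
    inet 10.2.16.252/32 scope global eth0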

7. View the LVS forwarding table:

[root@proxy ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.2.16.252:80 rr
  -> 10.2.16.253:80               Route   1      0          0
  -> 10.2.16.254:80               Route   1      0          0
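ipvsadm can also show live connections and per-server counters, which helps verify that requests really alternate between the two Real Servers:

[root@proxy ~]# ipvsadm -lnc          # current connection table (client -> VIP -> real server)
[root@proxy ~]# ipvsadm -ln --stats   # cumulative connection/packet/byte counters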

8. Test: open http://10.2.16.252/ in a browser.

If the pages of both load-balanced web servers appear, the setup is working.
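The same test from the command line: assuming each Real Server serves a page that identifies itself, consecutive requests from a client machine should alternate between the two under rr scheduling:

for i in 1 2 3 4; do curl -s http://10.2.16.252/; done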

9. Test the failure of one Real Server

(1) Kill the nginx process on .254, then start it again (example commands are sketched after the log excerpt below).

(2) Watch the keepalived log:

[root@proxy ~]# tail -f /var/log/messages
May 24 10:10:55 proxy Keepalived_healthcheckers[2770]: TCP connection to [10.2.16.254]:80 failed !!!
May 24 10:10:55 proxy Keepalived_healthcheckers[2770]: Removing service [10.2.16.254]:80 from VS [10.2.16.252]:80
May 24 10:10:55 proxy Keepalived_healthcheckers[2770]: Remote SMTP server [127.0.0.1]:25 connected.
May 24 10:10:55 proxy Keepalived_healthcheckers[2770]: SMTP alert successfully sent.
May 24 10:11:43 proxy Keepalived_healthcheckers[2770]: TCP connection to [10.2.16.254]:80 success.
May 24 10:11:43 proxy Keepalived_healthcheckers[2770]: Adding service [10.2.16.254]:80 to VS [10.2.16.252]:80
May 24 10:11:43 proxy Keepalived_healthcheckers[2770]: Remote SMTP server [127.0.0.1]:25 connected.
May 24 10:11:43 proxy Keepalived_healthcheckers[2770]: SMTP alert successfully sent.

As you can see, keepalived reacts very quickly!
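For reference, the failure above can be reproduced with commands along these lines on the Real Server; the nginx binary path is an assumption (a source build under /usr/local/nginx), so substitute your own:

pkill nginx                      # take real server 2's web service down
/usr/local/nginx/sbin/nginx      # assumed install path; bring it back up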

That completes the entire LVS configuration, a full success!
