Over the past couple of days I worked through setting up LVS + Keepalived load balancing. There are plenty of tutorials online, but I ran into quite a few problems once I actually got my hands dirty.
Here I'd like to share my setup process and some of the issues I encountered.
Hardware environment:
MacBook, 8 GB RAM, 250 GB SSD, dual-core
Software environment:
Since resources were limited, I set up 4 virtual machines.
The VMs run:
# uname -a
Linux rs-1 2.6.18-238.el5 #1 SMP Thu Jan 13 15:51:15 EST 2011 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/redhat-release
CentOS release 5.6 (Final)
The 4 VMs are assigned IP addresses as follows:
Master DR: { ip: 172.16.3.89 hostname: lvs-backup }
Slave DR: { ip: 172.16.3.90 hostname: lvs }
Real Server1: { ip: 172.16.3.91 hostname: rs-1 }
Real Server2: { ip: 172.16.3.92 hostname: rs-2 }
VIP: 172.16.3.199
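For convenience, the name-to-address mapping above can be captured in /etc/hosts on each VM (purely a convenience on my part; LVS itself only needs the IPs):

```
172.16.3.89   lvs-backup
172.16.3.90   lvs
172.16.3.91   rs-1
172.16.3.92   rs-2
```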
1. Install ipvsadm (1.24) and keepalived (1.2.12) on both the Master DR and the Slave DR
Install ipvsadm
First check whether the system ships the IPVS kernel modules; the listing below shows that it does:
# modprobe -l | grep ipvs
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_dh.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_ftp.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_lblc.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_lblcr.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_lc.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_nq.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_rr.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_sed.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_sh.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_wlc.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_wrr.ko
Create a symlink:
$ sudo ln -s /usr/src/kernels/2.6.18-238.el5-x86_64/ /usr/src/linux
Compile (inside the unpacked ipvsadm-1.24 source directory):
$ make
Install:
$ sudo make install
Verify the installation:
# ipvsadm -v
ipvsadm v1.24 2005/12/10 (compiled with getopt_long and IPVS v1.2.1)
If it prints the version, ipvsadm is installed successfully!
Install keepalived
Configure (inside the unpacked keepalived-1.2.12 source directory):
$ ./configure --sysconf=/etc --with-kernel-dir=/usr/src/kernels/2.6.18-238.el5-x86_64/
Compile:
$ make
Install:
$ sudo make install
Create a symlink:
$ sudo ln -s /usr/local/sbin/keepalived /sbin/
Verify the installation:
# keepalived -v
Keepalived v1.2.12 (05/06,2014)
If it prints the version, keepalived is installed successfully!
Install keepalived on lvs-backup the same way, and verify:
# keepalived -v
Keepalived v1.2.12 (05/06,2014)
Configure keepalived
! Configuration File for keepalived
#global_defs {
#   notification_email {
#       # alert e-mail addresses; several may be listed, one per line
#       # requires mail alerting and the local Sendmail service to be enabled
#   }
#   notification_email_from [email protected]
#   smtp_server 192.168.199.1        # SMTP server address
#   smtp_connect_timeout 30
#   router_id LVS_DEVEL
#}
########VRRP Instance########
vrrp_instance VI_1 {
    state MASTER              # role of this node: MASTER for the primary, BACKUP for the standby
    interface eth1            # network interface the VRRP instance binds to
    virtual_router_id 51
    priority 100              # higher number wins; the master DR must be higher than the backup DR
    advert_int 1
    authentication {
        auth_type PASS        # authentication type: PASS or AH
        auth_pass 1111        # authentication password
    }
    virtual_ipaddress {
        172.16.3.199          # virtual IP (VIP); several may be listed, one per line
    }
}
########Virtual Server########
virtual_server 172.16.3.199 80 {      # note: IP and port are separated by a space
    delay_loop 6                      # health-check interval in seconds
    lb_algo rr                        # scheduling algorithm; default is rr (round robin), wlc is often a better choice
    lb_kind DR                        # LVS forwarding mode: NAT, TUN or DR
    nat_mask 255.255.255.0
    #persistence_timeout 50           # session persistence time in seconds
    protocol TCP                      # forwarded protocol: TCP or UDP
    real_server 172.16.3.92 80 {      # a server node, i.e. Real Server2's public IP
        weight 50                     # node weight; a higher number means more traffic
        TCP_CHECK {
            connect_timeout 3         # time out after 3 seconds without a response
            nb_get_retry 3            # number of retries
            delay_before_retry 3      # delay between retries
        }
    }
    real_server 172.16.3.91 80 {      # a server node, i.e. Real Server1's public IP
        weight 50                     # node weight; a higher number means more traffic
        TCP_CHECK {
            connect_timeout 3         # time out after 3 seconds without a response
            nb_get_retry 3            # number of retries
            delay_before_retry 3      # delay between retries
        }
    }
}
The Slave DR configuration is almost identical to the Master DR's, with only 2 differences:
state MASTER becomes state BACKUP, and priority 100 becomes priority 80.
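Concretely, the vrrp_instance block on the Slave DR would look like this (all other settings unchanged):

```
vrrp_instance VI_1 {
    state BACKUP              # standby role
    interface eth1
    virtual_router_id 51      # must match the master's value
    priority 80               # lower than the master's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.3.199
    }
}
```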
A note on the persistence_timeout option: within that window, requests from the same user (identified by source IP) are routed to the same real server.
I commented it out here; whether to enable it depends on your workload. For long-lived connections it is best turned on, ideally with a value consistent with the LVS timeout settings.
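If you do enable it, it is a single line inside the virtual_server block (50 is the value from the commented-out example above, in seconds):

```
virtual_server 172.16.3.199 80 {
    ...
    persistence_timeout 50   # requests from one client IP stick to one real server for 50 s
    ...
}
```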
Start keepalived
Write a start.sh (plus stop.sh, restart.sh) script for convenience:
#!/bin/sh
/etc/init.d/keepalived start
Run the script:
# ./start.sh
Starting keepalived: [ OK ]
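The separate start.sh/stop.sh/restart.sh files can also be folded into one dispatcher. This is a sketch of my own (the function name control is made up; the init-script path is the one used above):

```shell
#!/bin/sh
# Hypothetical wrapper replacing the separate start/stop/restart scripts.
# Dispatches to the keepalived init script based on the first argument.
control() {
    case "$1" in
        start|stop|restart)
            /etc/init.d/keepalived "$1"
            ;;
        *)
            echo "Usage: control {start|stop|restart}"
            return 1
            ;;
    esac
}
```

As written the function can be sourced from other scripts; to call it as a standalone script, append a `control "$@"` line at the end.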
Write a monitoring script watch.sh:
#!/bin/sh
watch 'ipvsadm -l -n'
Start monitoring:
# ./watch.sh
Every 2.0s: ipvsadm -l -n          Tue May  6 12:49:52 2014
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.3.199:80 rr persistent 50
-> 172.16.3.91:80 Route 50 0 0
-> 172.16.3.92:80 Route 50 0 0
You can see that both servers, 172.16.3.91 and 172.16.3.92, have been detected.
Apply the same configuration and scripts on the Slave DR.
2. Install nginx on Real Server1 and Real Server2
The nginx installation itself is omitted here.
After installing nginx, start it.
Create the realserver.sh script:
#!/bin/bash
# Bind the VIP on the loopback interface and suppress ARP replies for it,
# as required on LVS-DR real servers.
SNS_VIP=172.16.3.199
. /etc/rc.d/init.d/functions
case "$1" in
start)
    ifconfig lo:0 $SNS_VIP netmask 255.255.255.255 broadcast $SNS_VIP
    /sbin/route add -host $SNS_VIP dev lo:0
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -p >/dev/null 2>&1
    echo "RealServer Start OK"
    ;;
stop)
    ifconfig lo:0 down
    route del $SNS_VIP >/dev/null 2>&1
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
    echo "RealServer Stopped"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
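The four echo writes into /proc above take effect immediately but are lost on reboot; a common complement (my own addition, not part of the original setup) is to persist them in /etc/sysctl.conf, which the script's sysctl -p call then reloads:

```
# /etc/sysctl.conf fragment for LVS-DR real servers
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```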
Run the script:
# ./realserver.sh start
RealServer Start OK
Running ifconfig afterwards shows an extra lo:0 entry carrying the VIP that was not there before.
Testing
Test from the Slave DR:
$ for ((i=0;i<100;i++)); do curl 172.16.3.199; done
Or simulate concurrent requests with webbench:
$ webbench -c 10 -t 10 http://172.16.3.199/
Run watch.sh on the Master DR:
Every 2.0s: ipvsadm -l -n Wed May 7 11:45:27 2014
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.3.199:80 rr
-> 172.16.3.91:80 Route 50 0 1763
-> 172.16.3.92:80 Route 50 0 1762
Throughout the whole setup, remember to turn off the firewall on all of the VMs; this is very important!
# service iptables stop
The following command shows whether the firewall is disabled at boot (chkconfig iptables off disables it permanently):
# chkconfig --list | grep iptables
iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off
References:
http://beyondhdf.blog.51cto.com/229452/1331874
http://www.it165.net/admin/html/201308/1604.html