LVS Series: LVS + Keepalived with DNS Round-Robin for Director High Availability and Load Balancing
Preface
This is the third post in the LVS series. The first two gave us a basic working knowledge of LVS; in this one we will run a more complex experiment. Without further ado, let's get started!
What is Keepalived
What is Keepalived?

Keepalived is a routing software written in C. The main goal of this project is to provide simple and robust facilities for loadbalancing and high-availability to Linux system and Linux based infrastructures. Loadbalancing framework relies on well-known and widely used Linux Virtual Server (IPVS) kernel module providing Layer4 loadbalancing. Keepalived implements a set of checkers to dynamically and adaptively maintain and manage loadbalanced server pool according their health. On the other hand high-availability is achieved by VRRP protocol. VRRP is a fundamental brick for router failover. In addition, Keepalived implements a set of hooks to the VRRP finite state machine providing low-level and high-speed protocol interactions. Keepalived frameworks can be used independently or all together to provide resilient infrastructures.

(Quoted from the official documentation; roughly translated below.)
Keepalived is a project written in C whose main goal is to provide load balancing and high availability for Linux services. It relies on the Layer-4 load balancing offered by the Linux Virtual Server (IPVS) kernel module, and it maintains the load-balanced pool dynamically and adaptively by checking the health of its member hosts. High availability, in turn, is achieved through VRRP (Virtual Router Redundancy Protocol). For more on VRRP, see the H3C technical white paper on VRRP and RFC 3768: Virtual Router Redundancy Protocol (VRRP).
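As a toy illustration of the VRRP idea (a sketch, not real keepalived code): within one VRRP instance, the router advertising the highest priority becomes MASTER and owns the VIP, while the others stay BACKUP. The hostnames below are the two directors from this experiment.

```shell
# Toy model of a VRRP election: among routers in the same VRRP instance,
# the one advertising the highest priority becomes MASTER.
elect_master() {
  # args: one "name:priority" pair per argument; prints the winner's name
  printf '%s\n' "$@" | sort -t: -k2,2 -rn | head -n1 | cut -d: -f1
}
elect_master director1:100 director2:99   # prints: director1
```

This is why, in the configuration later on, each director is given priority 100 for one instance and 99 for the other: each wins exactly one of the two VIPs.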
Experiment Overview
Everyone knows that LVS, for all its performance, is limited in features: for example, it provides no health checking of the back-end servers, and the director is an easy single point of failure. These gaps can be filled by the third-party software keepalived. In this experiment we will use keepalived to make the lvs-director highly available: the two directors act as master and backup for each other, and both accept client requests via DNS round-robin over the A records, forwarding them to the back-end hosts. This gives the Directors both high availability and load balancing.
Experiment Topology
The diagram is not as clear as it could be; in this experiment we use the DR model.
Environment
VIP1 is 172.16.1.8 and VIP2 is 172.16.1.9.
| Host | IP Address | Role |
| --- | --- | --- |
| director1.anyisalin.com | VIP1, VIP2, DIP: 172.16.1.2 | Director 1 |
| director2.anyisalin.com | VIP1, VIP2, DIP: 172.16.1.3 | Director 2 |
| rs1.anyisalin.com | VIP1, VIP2, RIP: 172.16.1.4 | RealServer 1 |
| rs2.anyisalin.com | VIP1, VIP2, RIP: 172.16.1.5 | RealServer 2 |
| ns.anyisalin.com | IP: 172.16.1.10 | DNS |
Note: SELinux and iptables are disabled on all hosts in this experiment.
Experiment Steps
Configuring Keepalived (1)
Making the two Directors master and backup for each other's VIP
The following operations are performed on director1.
```
[root@director1 ~]# ntpdate 0.centos.pool.ntp.org    # sync the clock
[root@director1 ~]# yum install keepalived &> /dev/null && echo success    # install keepalived
success
[root@director1 ~]# vim /etc/keepalived/keepalived.conf    # edit the relevant part of the configuration as follows
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.1.8 dev eth0 label eth0:0
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    virtual_ipaddress {
        172.16.1.9 dev eth0 label eth0:1
    }
}
```
The following operations are performed on director2.
```
[root@director2 ~]# ntpdate 0.centos.pool.ntp.org    # sync the clock
[root@director2 ~]# yum install keepalived &> /dev/null && echo success    # install keepalived
success
[root@director2 ~]# vim /etc/keepalived/keepalived.conf    # edit the relevant part of the configuration as follows
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.1.8 dev eth0 label eth0:0
    }
}

vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    virtual_ipaddress {
        172.16.1.9 dev eth0 label eth0:1
    }
}
```
Start keepalived on both director1 and director2:
```
[root@director1 ~]# service keepalived start
[root@director2 ~]# service keepalived start
```
Testing
By default, director1 and director2 each hold the VIP of the instance for which they are MASTER. When we stop the keepalived service on director1, its IP automatically fails over to director2. When we start keepalived on director1 again, the IP address moves back to director1.
Configuring LVS
Configuring Keepalived (2)
Here we use the DR model. Because keepalived can generate the ipvs rules itself by calling the ipvs interface, we do not need ipvsadm to configure anything, but we will use the ipvsadm command to inspect the rules.
The following operations must be performed on both director1 and director2; to keep the article from getting too long, the steps on director2 are not shown.
```
[root@director1 ~]# yum install ipvsadm httpd &> /dev/null && echo success
success
[root@director1 ~]# echo "<h1>Sorry, Service is Unavailable </h1>" > /var/www/html/index.html    # set up the sorry page
[root@director1 ~]# vim /etc/keepalived/keepalived.conf    # add the following sections to the keepalived configuration
virtual_server 172.16.1.8 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
#    persistence_timeout 50
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 172.16.1.4 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 172.16.1.5 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 172.16.1.9 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
#    persistence_timeout 50
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 172.16.1.4 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 172.16.1.5 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
```
Restart keepalived on both director1 and director2:
```
[root@director1 ~]# service keepalived restart
[root@director2 ~]# service keepalived restart
```
Check the ipvs rules:
```
[root@director1 ~]# ipvsadm -L -n    # everything looks normal
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.1.8:80 rr
  -> 172.16.1.4:80                Route   1      0          0
  -> 172.16.1.5:80                Route   1      0          0
TCP  172.16.1.9:80 rr
  -> 172.16.1.4:80                Route   1      0          0
  -> 172.16.1.5:80                Route   1      0          0
```
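The `rr` scheduler shown in the rules above simply hands each new connection to the next real server in the pool, cycling back to the start. A minimal sketch of that behavior (a simulation, not the kernel code; the RIPs are the real servers from the table above):

```shell
# Sketch of what "lb_algo rr" does: each call returns the next pool member.
servers="172.16.1.4 172.16.1.5"   # the real-server pool
count=2                           # number of pool members
i=0                               # connection counter
pick_server() {
  # select field (i mod count)+1 from the list, then advance the counter
  set -- $servers
  shift $((i % count))
  echo "$1"
  i=$((i + 1))
}
pick_server   # prints: 172.16.1.4
pick_server   # prints: 172.16.1.5
pick_server   # prints: 172.16.1.4
```

With equal weights of 1, as configured above, this plain rotation is exactly what the two TCP virtual servers do.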
Configuring the IPs and Web Service on the Real Servers
The following operations are performed on rs1.
```
[root@rs1 ~]# yum install httpd -y &> /dev/null && echo success    # install httpd
success
[root@rs1 ~]# echo "<h1>This is 172.16.1.4</h1>" > /var/www/html/index.html    # create the test page
[root@rs1 ~]# service httpd start    # start httpd
Starting httpd: httpd: apr_sockaddr_info_get() failed for director1.anyisalin.com
httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
                                                           [  OK  ]
[root@rs1 ~]# vim setup.sh    # script that sets the kernel parameters and the VIPs; see my previous post if this is unfamiliar
#!/bin/bash
case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig lo:0 172.16.1.8/32 broadcast 172.16.1.8 up
    ifconfig lo:1 172.16.1.9/32 broadcast 172.16.1.9 up
    ;;
stop)
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore      # restore the default ARP behavior
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig lo:0 down
    ifconfig lo:1 down
    ;;
esac
[root@rs1 ~]# bash setup.sh start    # run the script
[root@rs1 ~]# scp setup.sh 172.16.1.5:/root    # copy the script to rs2
```
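For reference, the four `echo` lines in `setup.sh` correspond to the following sysctl settings (only a sketch of the persistent `/etc/sysctl.conf` equivalent; the script sets them at runtime):

```
# /etc/sysctl.conf fragment equivalent to the runtime echoes in setup.sh
net.ipv4.conf.all.arp_ignore = 1      # reply only to ARP queries for addresses on the receiving interface
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2    # use the best local source address in ARP announcements
net.ipv4.conf.lo.arp_announce = 2
```

These settings keep the real servers from answering or announcing ARP for the VIPs configured on lo, which is what makes the DR model work.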
The following operations are performed on rs2.
```
[root@rs2 ~]# yum install httpd -y &> /dev/null && echo success    # install httpd
success
[root@rs2 ~]# echo "<h1>This is 172.16.1.5</h1>" > /var/www/html/index.html    # create the test page
[root@rs2 ~]# bash setup.sh start    # run the script copied over from rs1
```
Testing LVS
Test through both director1 and director2. When we stop the web service on rs1, the health check detects the failure and removes rs1 from the pool automatically. When we stop the web service on both rs1 and rs2 at the same time, the sorry server is enabled automatically.
Configuring DNS
There is nothing special about the DNS setup; if you are interested, see my blog post on configuring DNS and BIND.
The following operations are performed on ns.
```
[root@ns /]# yum install bind bind-utils -y --nogpgcheck &> /dev/null && echo success    # install bind
success
[root@ns /]# vim /etc/named.conf    # edit the main configuration as follows
options {
    directory "/var/named";
};

zone "." IN {
    type hint;
    file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
[root@ns /]# vim /etc/named.rfc1912.zones    # append the following zone definition
zone "anyisalin.com" IN {
    type master;
    file "anyisalin.com.zone";
};
[root@ns /]# vim /var/named/anyisalin.com.zone    # create the zone file
$TTL 600
$ORIGIN anyisalin.com.
@       IN  SOA  ns.anyisalin.com.  admin.anyisalin.com. (
                20160409
                1D
                5M
                1W
                1D )
        IN  NS   ns
ns      IN  A    172.16.1.10
www     IN  A    172.16.1.8
www     IN  A    172.16.1.9
[root@ns /]# service named start    # start named
Generating /etc/rndc.key:                                  [  OK  ]
Starting named:                                            [  OK  ]
```
Testing DNS Round-Robin
DNS round-robin is working as expected.
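What the round-robin test relies on can be modeled in a few lines (a toy model, not BIND itself): with two A records for `www`, the server rotates the order of the answers between queries, so clients spread across both VIPs.

```shell
# Toy model of DNS round-robin over the two A records for www.anyisalin.com
answers="172.16.1.8 172.16.1.9"
resolve() {
  # print the current answer order, then rotate the list for the next query
  echo "$answers"
  set -- $answers
  first=$1
  shift
  answers="$* $first"
}
resolve   # prints: 172.16.1.8 172.16.1.9
resolve   # prints: 172.16.1.9 172.16.1.8
resolve   # prints: 172.16.1.8 172.16.1.9
```

Since clients generally use the first address in the answer, alternating the order alternates which director receives the connection.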
Final Test
After all these experiments, let's combine the earlier results for a final test. I pointed my local machine's DNS server at 172.16.1.10 to make testing easier.
The default behavior is as follows.
When we force keepalived on director2 to stop, access is still unaffected: director1 simply takes over director2's IP, as its address list then shows.
Summary
We used DNS round-robin to load-balance across the LVS Directors, keepalived to make the Directors highly available, and the Directors themselves to load-balance the back-end real servers, which makes for a fairly complete architecture. There are still things this article leaves unpolished, but I am short on time, so I will not go into them; I hope you will understand. The LVS series may end here, or may get occasional updates, so please keep following my blog!
Author: AnyISalIn  QQ: 1449472454
Thanks: MageEdu