Topology diagram:
Environment preparation:
CentOS 6.5 x86_64, with the firewall and SELinux disabled on all nodes.
node5: eth0: 192.168.1.190/24, VIP: 192.168.1.121/24
node1: eth1: 192.168.1.191/24, VIP: 192.168.1.122/24
node2: RIP: eth0: 192.168.1.192/24
node3: RIP: eth0: 192.168.1.193/24
All nodes use 192.168.1.1 as their gateway and DNS server.
The hosts file on every server:
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.190   node5.luo.com node5
192.168.1.191   node1.luo.com node1
192.168.1.192   node2.luo.com node2
192.168.1.193   node3.luo.com node3
Time synchronization (required on every node)
For convenience, a laptop is used as the time server for this experiment; see http://blog.csdn.net/irlover/article/details/7314530 for the setup.
# ntpdate 192.168.1.105
For example, to sync time with the Win7 host every 10 minutes, run crontab -e and add the following entry:
*/10 * * * * /usr/sbin/ntpdate 192.168.1.105 &> /dev/null && /sbin/hwclock -w
1. Configure the back-end real servers: node2 and node3
Apply the following configuration on both node2 and node3:
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
ifconfig lo:0 192.168.1.121 netmask 255.255.255.255 broadcast 192.168.1.121
route add -host 192.168.1.121 dev lo:0
ifconfig lo:1 192.168.1.122 netmask 255.255.255.255 broadcast 192.168.1.122
route add -host 192.168.1.122 dev lo:1
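Here arp_ignore=1 tells the kernel to answer an ARP request only when the target IP is configured on the interface the request arrived on, and arp_announce=2 makes it always use the best local source address in ARP announcements; together they keep the real servers from answering ARP for the VIPs bound to lo, which LVS-DR requires. Note that the echo commands above do not survive a reboot. A minimal sketch of persisting them, assuming the same interface names as above:
# cat >> /etc/sysctl.conf << 'EOF'
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
EOF
# sysctl -p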
Install httpd on both node2 and node3 and create a test page on each:
# yum install httpd
node2 test page:
# echo "<h1>node2.luo.com</h1>" > /var/www/html/index.html
# service httpd restart
# curl http://192.168.1.192
node3 test page:
# echo "<h1>node3.luo.com</h1>" > /var/www/html/index.html
# service httpd restart
# curl http://192.168.1.193
2. Configure the front-end Directors
The installation and configuration of node5 and node1 are almost identical.
Install keepalived and ipvsadm on both nodes:
# yum install -y keepalived ipvsadm
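To have keepalived start automatically at boot (CentOS 6 uses SysV init), you may also run:
# chkconfig keepalived on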
The only difference between the node5 and node1 configurations is that the roles in the two VRRP instances are reversed: where node5 is MASTER, node1 is BACKUP, with the priorities adjusted accordingly. Everything else is identical.
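In other words (an illustrative summary of the two configuration files below):
VI_1 (VIP 192.168.1.121): node5 MASTER, priority 100; node1 BACKUP, priority 99
VI_2 (VIP 192.168.1.122): node5 BACKUP, priority 99; node1 MASTER, priority 100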
node5 configuration file (/etc/keepalived/keepalived.conf):
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server localhost
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 61
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.1.121
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 71
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.1.122
    }
}

virtual_server 192.168.1.121 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    protocol TCP

    real_server 192.168.1.192 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }

    real_server 192.168.1.193 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}

virtual_server 192.168.1.122 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    protocol TCP

    real_server 192.168.1.192 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }

    real_server 192.168.1.193 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
node1 configuration file (/etc/keepalived/keepalived.conf):
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server localhost
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 61
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.1.121
    }
}

vrrp_instance VI_2 {
    state MASTER
    interface eth1
    virtual_router_id 71
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.1.122
    }
}

virtual_server 192.168.1.121 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    protocol TCP

    real_server 192.168.1.192 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }

    real_server 192.168.1.193 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}

virtual_server 192.168.1.122 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    protocol TCP

    real_server 192.168.1.192 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }

    real_server 192.168.1.193 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
Testing
Start the keepalived service on node5 first. Since node1 is not yet running, node5 also takes over VI_2, so both VIP addresses end up configured on node5's eth0 interface.
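For example (CentOS 6 service commands; keepalived adds VIPs with a /32 prefix by default):
# service keepalived start
# ip addr show eth0
Both 192.168.1.121/32 and 192.168.1.122/32 should be listed as secondary addresses on eth0.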
Checking the ipvs rules shows that both rule sets are also active on node5.
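A quick way to verify, using the ipvsadm installed earlier:
# ipvsadm -L -n
Two TCP virtual services should appear (192.168.1.121:80 and 192.168.1.122:80), each forwarding to real servers 192.168.1.192:80 and 192.168.1.193:80 in Route (DR) mode.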
Next, start the keepalived service on node1 and check the same information. With its higher priority for VI_2, node1 takes over VIP 192.168.1.122, leaving 192.168.1.121 on node5.
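For example, on node1 (note that node1 uses eth1):
# service keepalived start
# ip addr show eth1
# ipvsadm -L -n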
To simulate a node5 failure, simply stop the keepalived service on node5. Checking node1 confirms that everything is in order: both VIPs have moved over to node1.
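A sketch of this failover check. On node5:
# service keepalived stop
Then on node1:
# ip addr show eth1
Both 192.168.1.121/32 and 192.168.1.122/32 should now be present on node1's eth1, and from a client on the 192.168.1.0/24 network, curl http://192.168.1.121 and curl http://192.168.1.122 should still alternate between the node2 and node3 test pages (rr scheduling).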
That concludes the experiment. If there are any mistakes, corrections and suggestions are welcome.