Implementing a dual-master model with keepalived and nginx

Environment

Director 1         DIP: 172.18.42.100   VIP 172.18.42.119 (MASTER), VIP 172.18.42.120 (BACKUP)
Director 2         DIP: 172.18.42.22    VIP 172.18.42.119 (BACKUP), VIP 172.18.42.120 (MASTER)
Web Server Nginx1  RIP: 172.18.42.111   (both VIPs bound on lo)
Web Server Nginx2  RIP: 172.18.42.222   (both VIPs bound on lo)

I. Director 1 configuration

1. Install keepalived with yum

[root@localhost ~]# yum install keepalived -y

2. Edit the configuration file

[root@localhost ~]# vim /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
        root@localhost
   }
   notification_email_from www.mageedu.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id Slackware
   vrrp_mcast_group4 224.0.42.10
}
 
vrrp_instance VI_1 {
    state MASTER             # Director 1 owns VIP 172.18.42.119 by default
    interface eth0
    virtual_router_id 110
    priority 100             # higher than Director 2's 98 for this instance
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.18.42.119 dev eth0 label eth0:1
    }
}

virtual_server 172.18.42.119 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 172.18.42.111 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.18.42.222 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

vrrp_instance VI_2 {
    state BACKUP             # Director 2 owns VIP 172.18.42.120 by default
    interface eth0
    virtual_router_id 111    # must differ from VI_1's virtual_router_id
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.18.42.120 dev eth0 label eth0:2
    }
}
 
virtual_server 172.18.42.120 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 172.18.42.111 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.18.42.222 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
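An easy mistake in a dual-master configuration is reusing one virtual_router_id across the two vrrp_instance blocks. The sketch below checks for duplicates with awk; it runs against a stand-in file so it is self-contained, but on a real Director you would point it at /etc/keepalived/keepalived.conf instead.

```shell
# Stand-in fragment with the two IDs used above; substitute the real
# /etc/keepalived/keepalived.conf path on an actual Director.
printf 'virtual_router_id 110\nvirtual_router_id 111\n' > demo.conf

# Collect any virtual_router_id value that appears more than once.
dups=$(awk '/virtual_router_id/ {print $2}' demo.conf | sort | uniq -d)
if [ -z "$dups" ]; then
    echo "router IDs OK"              # prints "router IDs OK" here
else
    echo "duplicate virtual_router_id: $dups"
fi
```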

II. Director 2 configuration

1. Install the keepalived service

[root@localhost ~]# yum install keepalived -y

2. Edit the configuration file

[root@localhost ~]# vim /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
        root@localhost
   }
   notification_email_from www.mageedu.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id Slackware
   vrrp_mcast_group4 224.0.42.10
}

vrrp_instance VI_1 {
    state BACKUP             # standby for VIP 172.18.42.119
    interface eth0
    virtual_router_id 110    # matches VI_1 on Director 1
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.18.42.119 dev eth0 label eth0:1
    }
}

virtual_server 172.18.42.119 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 172.18.42.111 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.18.42.222 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

vrrp_instance VI_2 {
    state MASTER             # Director 2 owns VIP 172.18.42.120 by default
    interface eth0
    virtual_router_id 111    # matches VI_2 on Director 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.18.42.120 dev eth0 label eth0:2
    }
}

virtual_server 172.18.42.120 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 172.18.42.111 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.18.42.222 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

III. Web Server Nginx1 configuration

1. Adjust Web Server Nginx1's kernel parameters

[root@localhost ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@localhost ~]# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
[root@localhost ~]# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
[root@localhost ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce

2. Add the VIPs

[root@localhost ~]# ifconfig lo:1 172.18.42.119 netmask 255.255.255.255 broadcast 172.18.42.119
[root@localhost ~]# ifconfig lo:2 172.18.42.120 netmask 255.255.255.255 broadcast 172.18.42.120
[root@localhost ~]# route add -host 172.18.42.119 dev lo:1
[root@localhost ~]# route add -host 172.18.42.120 dev lo:2

3. Install and start nginx

[root@localhost ~]# yum install nginx -y
[root@localhost ~]# nginx

IV. Web Server Nginx2 configuration

1. Adjust Web Server Nginx2's kernel parameters

[root@localhost ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@localhost ~]# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
[root@localhost ~]# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
[root@localhost ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce

2. Add the VIPs

[root@localhost ~]# ifconfig lo:1 172.18.42.119 netmask 255.255.255.255 broadcast 172.18.42.119
[root@localhost ~]# ifconfig lo:2 172.18.42.120 netmask 255.255.255.255 broadcast 172.18.42.120
[root@localhost ~]# route add -host 172.18.42.119 dev lo:1
[root@localhost ~]# route add -host 172.18.42.120 dev lo:2

3. Install and start nginx

[root@localhost ~]# yum install nginx -y
[root@localhost ~]# nginx
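The per-RS steps above are identical on both web servers, and the ifconfig/route settings do not survive a reboot. Collecting them into a small script makes them repeatable; the filename is made up for illustration. The block below only generates the script -- execute it as root on each real server.

```shell
# lvs-dr-rs.sh -- hypothetical helper collecting the per-RS steps above.
# Generated here; run it as root on each real server (or hook it into
# rc.local so the settings are restored after a reboot).
cat > lvs-dr-rs.sh <<'EOF'
#!/bin/bash
# DR mode: the real server must not answer ARP for the VIPs it carries
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
# Bind both VIPs to loopback with a /32 mask and pin host routes to them
ifconfig lo:1 172.18.42.119 netmask 255.255.255.255 broadcast 172.18.42.119
ifconfig lo:2 172.18.42.120 netmask 255.255.255.255 broadcast 172.18.42.120
route add -host 172.18.42.119 dev lo:1
route add -host 172.18.42.120 dev lo:2
EOF
chmod +x lvs-dr-rs.sh
```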

V. Start both Directors and test

1. Director 1

[root@localhost ~]# service keepalived start
[root@localhost ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:90:84:C0  
          inet addr:172.18.42.100  Bcast:172.18.255.255  Mask:255.255.0.0
          inet6 addr: fe80::20c:29ff:fe90:84c0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:182021 errors:0 dropped:0 overruns:0 frame:0
          TX packets:48384 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:28094276 (26.7 MiB)  TX bytes:4476727 (4.2 MiB)
eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:90:84:C0  
          inet addr:172.18.42.119  Bcast:0.0.0.0  Mask:255.255.255.255
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:663 errors:0 dropped:0 overruns:0 frame:0
          TX packets:663 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:49597 (48.4 KiB)  TX bytes:49597 (48.4 KiB)
[root@localhost ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.18.42.119:80 rr
  -> 172.18.42.111:80             Route   1      0          0         
  -> 172.18.42.222:80             Route   1      0          1         
TCP  172.18.42.120:80 rr
  -> 172.18.42.111:80             Route   1      0          0         
  -> 172.18.42.222:80             Route   1      0          0

## Director 1's VIP is up, and the ipvs rules were generated automatically

2. Director 2

[root@localhost ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:0F:A3:6E  
          inet addr:172.18.42.22  Bcast:172.18.255.255  Mask:255.255.0.0
          inet6 addr: fe80::20c:29ff:fe0f:a36e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:134293 errors:0 dropped:0 overruns:0 frame:0
          TX packets:69576 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:16446378 (15.6 MiB)  TX bytes:5481841 (5.2 MiB)
 
eth0:2    Link encap:Ethernet  HWaddr 00:0C:29:0F:A3:6E  
          inet addr:172.18.42.120  Bcast:0.0.0.0  Mask:255.255.255.255
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:3431 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3431 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:291422 (284.5 KiB)  TX bytes:291422 (284.5 KiB)
 
[root@localhost ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.18.42.119:80 rr
  -> 172.18.42.111:80             Route   1      0          0         
  -> 172.18.42.222:80             Route   1      0          0         
TCP  172.18.42.120:80 rr
  -> 172.18.42.111:80             Route   1      0          0         
  -> 172.18.42.222:80             Route   1      0          0

Try accessing the web service

[root@localhost ~]# curl http://172.18.42.119
<h1>
172.18.42.222 Web Server 2
</h1>
[root@localhost ~]# curl http://172.18.42.119
<h1>
172.18.42.111 Web Server 1
</h1>
[root@localhost ~]# curl http://172.18.42.120
<h1>
172.18.42.222 Web Server 2
</h1>
[root@localhost ~]# curl http://172.18.42.120
<h1>
172.18.42.111 Web Server 1
</h1>

## Load balancing works and both VIPs are active: the dual-master model is in place
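The manual curl calls above are worth repeating after every failover experiment, so a small loop script helps (the filename is arbitrary). The block below only writes the script; run it from a host that can reach the VIPs.

```shell
# vip-test.sh -- repeat the round-robin curl test against both VIPs.
cat > vip-test.sh <<'EOF'
#!/bin/bash
for vip in 172.18.42.119 172.18.42.120; do
    echo "--- $vip ---"
    curl -s "http://$vip"    # two requests per VIP show the rr rotation
    curl -s "http://$vip"
done
EOF
chmod +x vip-test.sh
```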

VI. What happens if Director 1 is shut down?

Director 1

[root@localhost ~]# service keepalived stop

Director 2

[root@localhost ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:0F:A3:6E  
          inet addr:172.18.42.22  Bcast:172.18.255.255  Mask:255.255.0.0
          inet6 addr: fe80::20c:29ff:fe0f:a36e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:142076 errors:0 dropped:0 overruns:0 frame:0
          TX packets:73562 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:17381764 (16.5 MiB)  TX bytes:5790752 (5.5 MiB)
 
eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:0F:A3:6E  
          inet addr:172.18.42.119  Bcast:0.0.0.0  Mask:255.255.255.255
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 
eth0:2    Link encap:Ethernet  HWaddr 00:0C:29:0F:A3:6E  
          inet addr:172.18.42.120  Bcast:0.0.0.0  Mask:255.255.255.255
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:3435 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3435 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:291770 (284.9 KiB)  TX bytes:291770 (284.9 KiB)
 
[root@localhost ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.18.42.119:80 rr
  -> 172.18.42.111:80             Route   1      0          0         
  -> 172.18.42.222:80             Route   1      0          0         
TCP  172.18.42.120:80 rr
  -> 172.18.42.111:80             Route   1      0          1         
  -> 172.18.42.222:80             Route   1      0          1

## After Director 1 is stopped, Director 2 takes over Director 1's VIP

Access the web service again

[root@localhost ~]# curl http://172.18.42.120
<h1>
172.18.42.222 Web Server 2
</h1>
[root@localhost ~]# curl http://172.18.42.120
<h1>
172.18.42.111 Web Server 1
</h1>
[root@localhost ~]# curl http://172.18.42.119
<h1>
172.18.42.222 Web Server 2
</h1>
[root@localhost ~]# curl http://172.18.42.119
<h1>
172.18.42.111 Web Server 1
</h1>

## Service continues without interruption

VII. What happens if Web Server Nginx1 goes down?

Web Server Nginx1

[root@localhost ~]# nginx -s stop

Director

[root@localhost ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.18.42.119:80 rr
  -> 172.18.42.222:80             Route   1      0          1         
TCP  172.18.42.120:80 rr
  -> 172.18.42.222:80             Route   1      0          1

## The health check automatically removed Web Server 1's RIP from the ipvs rules

Try the web service again

[root@localhost ~]# curl http://172.18.42.119
<h1>
172.18.42.222 Web Server 2
</h1>
[root@localhost ~]# curl http://172.18.42.119
<h1>
172.18.42.222 Web Server 2
</h1>
[root@localhost ~]# curl http://172.18.42.120
<h1>
172.18.42.222 Web Server 2
</h1>
[root@localhost ~]# curl http://172.18.42.120
<h1>
172.18.42.222 Web Server 2
</h1>

## Only Web Server Nginx2 is answering requests now

If Web Server Nginx1 is started again, will its rule be re-added automatically?

[root@localhost ~]# nginx
[root@localhost ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.18.42.119:80 rr
  -> 172.18.42.111:80             Route   1      0          0         
  -> 172.18.42.222:80             Route   1      0          0         
TCP  172.18.42.120:80 rr
  -> 172.18.42.111:80             Route   1      0          0         
  -> 172.18.42.222:80             Route   1      0          0

## Web Server Nginx1's rule was re-added automatically

Notes and pitfalls:

1. In each Director's keepalived configuration, the two vrrp instances must use different `virtual_router_id` values (and the same instance must use the same ID on both Directors).

2. The Directors' clocks must be kept in sync, for example with ntpdate.
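One way to do that (a sketch; `ntp1.example.com` is a placeholder NTP server, substitute your own) is a cron entry that resyncs every ten minutes. The entry is staged to a file here rather than installed directly:

```shell
# Stage a cron entry that resyncs the clock every 10 minutes via ntpdate.
# ntp1.example.com is a placeholder; point it at a reachable NTP server.
cat > ntp.cron <<'EOF'
*/10 * * * * /usr/sbin/ntpdate ntp1.example.com > /dev/null 2>&1
EOF
# Install on each Director as root:  crontab ntp.cron
```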

Published: 2024-10-11 03:44:37
