Building a highly available load-balancing cluster with nginx and keepalived

I. Overview

The previous posts covered nginx serving web and HTTPS sites, reverse proxying, and LVS scheduling in NAT and DR modes. nginx itself balances at layer 7, which means it must terminate connections on its own sockets, and even emulating a layer-4 proxy through the upstream module still leaves it a potential bottleneck. This post therefore builds a highly available load-balancing cluster from nginx and keepalived: two keepalived instances monitor and schedule the nginx reverse-proxy service in dual-master mode, while nginx load-balances the back-end httpd sites. The goal is that the failure of either director (whether its nginx or its keepalived service) does not interrupt the business, and neither does the failure of any single back-end web server.
Lab environment:
N1:
CentOS 7 x64, nginx + keepalived; IPs: 172.16.0.6, 192.168.10.254; VIP: 172.16.0.99
N2:
CentOS 7 x64, nginx + keepalived; IPs: 172.16.0.9, 192.168.10.18; VIP: 172.16.0.100
httpd:
CentOS 7 x64, httpd; one virtual machine with three NICs, IPs 192.168.10.19 - 21
The architecture is as follows (the original diagram is not reproduced): clients → DNS for www.san.com → VIPs 172.16.0.99/172.16.0.100 on N1/N2 (nginx + keepalived, dual master) → back-end httpd servers.

Explanation:
In dual-master mode, a client visiting www.san.com is resolved by public DNS to one of the VIPs published by N1 and N2 (172.16.0.99 on N1, 172.16.0.100 on N2), so part of the traffic reaches the back-end web servers through N1 and part through N2. If one nginx director fails, its traffic is moved to the other director. In master/backup mode, by contrast, only one director is active at a time; the backup stays passive and takes over only when the master fails or its priority is lowered manually. This article tests the dual-master mode only; DNS resolution is not tested, only the nginx/keepalived high-availability load-balancing cluster itself. Also, to make the effect visible, the three httpd servers are given different content here; in a real deployment all three would serve identical content.
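For completeness, the DNS side of dual-master operation (not tested here, as noted) would simply publish two A records for the same name, spreading clients across both VIPs. A sketch of the hypothetical zone entries:

www.san.com.    IN  A   172.16.0.99
www.san.com.    IN  A   172.16.0.100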

II. Dual-master high-availability load balancing

1. Back-end web configuration
One virtual machine with three NICs simulates three web servers.
Install and configure httpd:

# yum install httpd -y
Configure the test sites:
# cat /etc/httpd/conf.d/vhosts.conf
<VirtualHost 192.168.10.19:80>
    ServerName 192.168.10.19
    DocumentRoot "/data/web/vhost1"
    <Directory "/data/web/vhost1">
        Options FollowSymLinks
        AllowOverride None
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost 192.168.10.20:80>
    ServerName 192.168.10.20
    DocumentRoot "/data/web/vhost2"
    <Directory "/data/web/vhost2">
        Options FollowSymLinks
        AllowOverride None
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost 192.168.10.21:80>
    ServerName 192.168.10.21
    DocumentRoot "/data/web/vhost3"
    <Directory "/data/web/vhost3">
        Options FollowSymLinks
        AllowOverride None
        Require all granted
    </Directory>
</VirtualHost>
# mkdir -pv /data/web/vhost{1,2,3}
# cat /data/web/vhost1/index.html
<h1>Vhost1</h1>
# cat /data/web/vhost2/index.html
<h1>Vhost2</h1>
# cat /data/web/vhost3/index.html
<h1>Vhost3</h1>
Start the web service:
# systemctl start httpd

The httpd configuration is now complete.
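Before putting nginx in front, it is worth confirming that all three virtual hosts answer. A minimal check from any host on the 192.168.10.0/24 network (each request should return the matching page):

# curl http://192.168.10.19
<h1>Vhost1</h1>
# curl http://192.168.10.20
<h1>Vhost2</h1>
# curl http://192.168.10.21
<h1>Vhost3</h1>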

2. nginx load-balancing configuration
Install nginx on both N1 and N2:
# yum install nginx -y
Configure the nginx load balancing as follows:

# egrep -v '(^$|^#)' /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;
    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    upstream websrvs {    # back-end pool: a server that fails 3 times within the 1s fail_timeout window is taken out of rotation for 1s
        server 192.168.10.19:80 fail_timeout=1 max_fails=3;
        server 192.168.10.20:80 fail_timeout=1 max_fails=3;
        server 192.168.10.21:80 fail_timeout=1 max_fails=3;
    }

    include /etc/nginx/conf.d/*.conf;
    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  _;
        root         /usr/share/nginx/html;
        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;
        location / {
            proxy_pass http://websrvs;              # proxy to the upstream pool
        }
        error_page 404 /404.html;
            location = /40x.html {
        }
        error_page 500 502 503 504 /50x.html;
            location = /50x.html {
        }
    }
}
# Start the nginx service
# systemctl start nginx
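At this point the load balancing can already be verified against each director's real IP, before keepalived and the VIPs are in place. A sketch (the three pages should alternate in round-robin order):

# for i in {1..6}; do curl -s http://172.16.0.6; done
<h1>Vhost1</h1>
<h1>Vhost2</h1>
<h1>Vhost3</h1>
<h1>Vhost1</h1>
<h1>Vhost2</h1>
<h1>Vhost3</h1>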

3. Installing the keepalived HA service
Install keepalived on both N1 and N2:

# yum install keepalived -y

N1 keepalived configuration:

[root@n1 keepalived]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
        root@localhost               # notification recipient (running the local postfix service is enough)
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id node1                  # router node identifier
   vrrp_mcast_group4 224.1.101.33   # multicast group for VRRP advertisements
}
vrrp_script chk_down {
    script "/etc/keepalived/check.sh"   # allows manual failover via a flag file (see check.sh below)
    weight -10
    interval 1
    fall 1
    rise 1
}
vrrp_script chk_ngx {   # check nginx every 2s; after 3 consecutive failures, lower the priority by 10
    script "killall -0 nginx && exit 0 || exit 1"
    weight -10
    interval 2
    fall 3
    rise 3
}

vrrp_instance VI_1 {    # HA instance 1
    state MASTER        # master role on N1
    priority 100        # priority 100
    interface ens33     # NIC the instance runs on
    virtual_router_id 33    # virtual router ID (must match on both nodes)
    advert_int 1
    authentication {    # simple password authentication
        auth_type PASS
        auth_pass RT3SKUI2
    }
    virtual_ipaddress {   # VIP of instance 1: 172.16.0.99
        172.16.0.99/24 dev ens33 label ens33:0
    }
    track_script {   # health checks
        chk_down
        chk_ngx
    }
    # scripts invoked on role transitions
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
vrrp_instance VI_2 {    # HA instance 2
    state BACKUP        # backup role on N1
    priority 96         # priority 96
    interface ens33     # NIC the instance runs on
    virtual_router_id 43    # virtual router ID
    advert_int 1
    authentication {    # simple password authentication
        auth_type PASS
        auth_pass RT3SKUI3
    }
    virtual_ipaddress {   # VIP of instance 2: 172.16.0.100
        172.16.0.100/24 dev ens33 label ens33:1
    }
    track_script {
        chk_down
        chk_ngx
    }
    track_interface {   # fail over if a tracked interface goes down
        ens33
        ens37
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

Configuration notes:
There are two VRRP instances; N1 is the master for instance VI_1 and the backup for instance VI_2.
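A note on the chk_ngx check: killall -0 sends signal 0, which merely tests whether any process named nginx exists without actually signaling it, so the script's exit status tracks whether nginx is alive. A quick demonstration (remember to start nginx again afterwards):

[root@n1 ~]# killall -0 nginx; echo $?    # prints 0 while nginx is running
[root@n1 ~]# systemctl stop nginx
[root@n1 ~]# killall -0 nginx; echo $?    # prints a non-zero status once no nginx process exists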

N2 keepalived configuration:

[root@n2 keepalived]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
        root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id node2
   vrrp_mcast_group4 224.1.101.33   # multicast group (must match N1)
}
vrrp_script chk_down {
    script "/etc/keepalived/check.sh"
    weight -10
    interval 1
    fall 1
    rise 1
}
vrrp_script chk_ngx {
    script "killall -0 nginx && exit 0 || exit 1"
    weight -10
    interval 2
    fall 3
    rise 3

}

vrrp_instance VI_1 {
    state BACKUP
    priority 96
    interface ens33
    virtual_router_id 33
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass RT3SKUI2
    }
    virtual_ipaddress {
    172.16.0.99/24 dev ens33 label ens33:0
    }
    track_script {
        chk_down
        chk_ngx
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"

}
vrrp_instance VI_2 {
    state MASTER
    priority 100
    interface ens33
    virtual_router_id 43
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass RT3SKUI3
    }
    virtual_ipaddress {
    172.16.0.100/24 dev ens33 label ens33:1
    }
    track_script {
        chk_down
        chk_ngx
    }
    track_interface {
        ens33
        ens37
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

Configuration notes:
N2 mirrors N1: it is the backup for instance VI_1 and the master for instance VI_2, the exact opposite of N1.

4. Helper scripts

check.sh:
# cat /etc/keepalived/check.sh
[ -f /etc/keepalived/down ] && exit 1 || exit 0
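
With this check in place, creating the flag file lowers the node's priority by 10 (the script exits 1 and chk_down applies weight -10), letting the peer preempt the VIP; removing the file restores the priority. This is handy for maintenance:

# touch /etc/keepalived/down    # demote this node; its VIP moves to the peer
# rm -f /etc/keepalived/down    # restore the original priority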
notify.sh:
#!/bin/bash
#
contact='root@localhost'

notify() {
    local mailsubject="$(hostname) to be $1, vip floating"
    local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
    echo "$mailbody" | mail -s "$mailsubject" $contact
}

case $1 in
master)
    systemctl start nginx
    notify master
    ;;
backup)
    systemctl start nginx
    notify backup
    ;;
fault)
    systemctl stop nginx
    notify fault
    ;;
*)
    echo "Usage: $(basename $0) {master|backup|fault}"
    exit 1
    ;;
esac

Note:
notify.sh depends on the postfix service and the mail utility from the mailx package; if they are missing, yum install postfix mailx -y will install them.
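The script can also be exercised by hand before relying on keepalived to invoke it. A sketch, assuming local postfix is running and the root@localhost recipient reconstructed above:

# bash /etc/keepalived/notify.sh master
# mail    # the local root mailbox should now contain a "vrrp transition" message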

Configuration is now complete; next, testing.

III. Testing

1. Start the keepalived service on N1:

# systemctl start keepalived
Check the keepalived service status:

[root@n1 keepalived]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
...(output omitted)...
Jan 21 15:42:36 n1.san.com Keepalived_vrrp[13234]: Sending gratuitous ARP on ens33 for 172.16.0.99
Jan 21 15:42:39 n1.san.com Keepalived_vrrp[13234]: VRRP_Instance(VI_2) Sending/queueing gratuitous ARPs on ens33 for 172.16.0.100
Jan 21 15:42:39 n1.san.com Keepalived_vrrp[13234]: Sending gratuitous ARP on ens33 for 172.16.0.100

Check the interfaces:
[root@n1 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.0.6  netmask 255.255.255.0  broadcast 172.16.0.255
        inet6 fe80::96b9:e601:fd10:1888  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::618d:61c4:52d7:9619  prefixlen 64  scopeid 0x20<link>

ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.0.99  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:8b:6e:09  txqueuelen 1000  (Ethernet)

ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.0.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:8b:6e:09  txqueuelen 1000  (Ethernet)

While N2 is not yet running (i.e., its keepalived service is unavailable), N1 becomes the sole active director and claims both VIPs. So even though www.san.com resolves to both addresses, both land on N1 and the business is unaffected.
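The ens33:0/ens33:1 aliases show up in ifconfig because of the label option in the VIP definitions; the same addresses can be listed with iproute2. A sketch of the expected output:

[root@n1 ~]# ip addr show ens33 | grep 'inet '
    inet 172.16.0.6/24 brd 172.16.0.255 scope global ens33
    inet 172.16.0.99/24 scope global secondary ens33:0
    inet 172.16.0.100/24 scope global secondary ens33:1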

2. Start the keepalived service on N2

# systemctl start keepalived
N1 should now hold VIP 172.16.0.99
and N2 VIP 172.16.0.100.

Check N1's status:
[root@n1 ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
...(output omitted)...
Jan 21 15:42:39 n1.san.com Keepalived_vrrp[13234]: Sending gratuitous ARP on ens33 for 172.16.0.100
Jan 21 15:59:40 n1.san.com Keepalived_vrrp[13234]: VRRP_Instance(VI_2) Received advert with higher priority 100, ours 96
Jan 21 15:59:40 n1.san.com Keepalived_vrrp[13234]: VRRP_Instance(VI_2) Entering BACKUP STATE
Jan 21 15:59:40 n1.san.com Keepalived_vrrp[13234]: VRRP_Instance(VI_2) removing protocol VIPs.
Jan 21 15:59:40 n1.san.com Keepalived_vrrp[13234]: Opening script file /etc/keepalived/notify.sh
Check the interfaces:
[root@n1 ~]# ifconfig
ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.0.99  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:8b:6e:09  txqueuelen 1000  (Ethernet)
Only VIP 172.16.0.99 remains.

Explanation: N1 is the master for 172.16.0.99 but only the backup for 172.16.0.100, so while N2 was down both VIPs sat on N1. Once N2 starts, it preempts 172.16.0.100 (its VI_2 priority of 100 beats N1's 96) and N1's VI_2 instance drops to backup.
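The preemption can also be watched on the wire: VRRP advertisements are sent to the multicast group configured above, so tcpdump on either node shows both instances advertising. A sketch, with output along these lines:

# tcpdump -nn -i ens33 host 224.1.101.33
IP 172.16.0.6 > 224.1.101.33: VRRPv2, Advertisement, vrid 33, prio 100, authtype simple, intvl 1s, length 20
IP 172.16.0.9 > 224.1.101.33: VRRPv2, Advertisement, vrid 43, prio 100, authtype simple, intvl 1s, length 20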

Check the keepalived status on N2:

[root@n2 keepalived]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
...(output omitted)...
Jan 21 15:59:46 n2.san.com Keepalived_vrrp[12877]: Sending gratuitous ARP on ens33 for 172.16.0.100
Jan 21 15:59:46 n2.san.com Keepalived_vrrp[12877]: VRRP_Instance(VI_2) Sending/queueing gratuitous ARPs on ens33 for 172.16.0.100
Jan 21 15:59:46 n2.san.com Keepalived_vrrp[12877]: Sending gratuitous ARP on ens33 for 172.16.0.100
...(output omitted)...

# Check the interfaces
[root@n2 keepalived]# ifconfig
ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.0.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:03:2e:91  txqueuelen 1000  (Ethernet)

Explanation: N2 is the master for instance VI_2 (priority 100), so as soon as it starts it preempts VIP 172.16.0.100.

3. Simulating an nginx failure
Kill the nginx service on N1 (repeatedly for about 20 seconds, so the notify script cannot simply restart it):

[root@n1 ~]# for i in {1..20};do sleep 1;killall nginx;done
In another terminal on N1, check the keepalived and nginx status:
[root@n1 ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
...(output omitted)...
Jan 21 16:44:22 n1.magedu.com Keepalived_vrrp[20354]: /usr/bin/killall -0 nginx && exit 0 || exit 1 exited with status 1
You have new mail in /var/spool/mail/root
[root@n1 ~]# systemctl status nginx
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Sun 2018-01-21 16:42:48 CST; 1min 43s ago
...(output omitted)...

Check the keepalived status and VIPs on N2:
[root@n2 keepalived]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor

Jan 21 16:42:49 n2.san.com Keepalived_vrrp[17567]: Sending gratuitous ARP on ens33 for 172.16.0.99
Jan 21 16:42:49 n2.san.com Keepalived_vrrp[17567]: Opening script file /etc/keepalived/notify.sh
Jan 21 16:42:54 n2.san.com Keepalived_vrrp[17567]: Sending gratuitous ARP on ens33 for 172.16.0.99
Jan 21 16:42:54 n2.san.com Keepalived_vrrp[17567]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens33 for 172.16.0.99
Jan 21 16:42:54 n2.san.com Keepalived_vrrp[17567]: Sending gratuitous ARP on ens33 for 172.16.0.99
Check N2's VIPs:
[root@n2 keepalived]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.0.9  netmask 255.255.255.0  broadcast 172.16.0.255
...(output omitted)...
ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.0.99  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:03:2e:91  txqueuelen 1000  (Ethernet)
ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.0.100  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:03:2e:91  txqueuelen 1000  (Ethernet)

Explanation: after nginx on N1 died, N2 preempted VIP 172.16.0.99, so N2 now holds both public VIPs (172.16.0.99 and 172.16.0.100) and client access to www.san.com is unaffected.
In this state, only after nginx on N1 is repaired and started manually can N1's keepalived reclaim VIP 172.16.0.99 and restore the dual-master high-availability layout.
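Recovery is just a matter of starting the repaired service; after three consecutive successful checks (rise 3, i.e. about 6 seconds) the priority is restored and VI_1 on N1 preempts its VIP back. A sketch:

[root@n1 ~]# systemctl start nginx
[root@n1 ~]# ifconfig ens33:0    # within a few seconds VIP 172.16.0.99 should reappear here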

4. Simulating a back-end httpd failure
When a back-end httpd fails, nginx automatically takes the failed node out of rotation.

Disable the NIC carrying 192.168.10.21:
# ifconfig ens39 down
Test access:
# curl http://172.16.0.100
<h1>Vhost1</h1>
# curl http://172.16.0.100
<h1>Vhost2</h1>
# curl http://172.16.0.100
<h1>Vhost1</h1>
# curl http://172.16.0.99
<h1>Vhost1</h1>
# curl http://172.16.0.99
<h1>Vhost2</h1>
# curl http://172.16.0.99
<h1>Vhost1</h1>

As the output shows, even after the web service at 192.168.10.21 became unreachable, the cluster as a whole remains accessible. Note that in a real deployment clients would access the domain, e.g. www.san.com, with DNS supplying the VIPs and nginx scheduling the back ends.
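Bringing the interface back up returns the node to rotation as soon as it passes a health check again. A sketch (use ifup ens39 instead if the address does not return with ifconfig alone):

# ifconfig ens39 up
# for i in {1..6}; do curl -s http://172.16.0.99; done    # Vhost3 should reappear in the rotation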
This walkthrough involves many steps, so errors or omissions are possible; corrections are welcome in the comments. Thanks!

Original article: http://blog.51cto.com/dyc2005/2063399
