Dual-node load balancing (nginx + keepalived) for two Apache servers

Note: in this lab, a two-node nginx pair provides load balancing for two Apache servers. This is not an LVS setup, so the real servers are not configured in keepalived.conf but in the upstream block of the nginx configuration.
The architecture has to handle the following:
1) While the master is up, it holds the VIP and nginx runs on the master.
2) If the master goes down, the backup takes over the VIP and nginx serves traffic on the backup.
3) If only the nginx service on the master dies, the VIP must still move to the backup server.
4) The health of the backend web servers must be checked.
The nginx service runs on both master and backup. Whichever node's keepalived service stops, the VIP floats to the node where keepalived is still running. However, to make the VIP also fail over when the nginx service itself dies, you must tie nginx's health to keepalived with a script (or a shell command in the configuration file).
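The coupling described above can be sketched as a small decision script. This is a minimal sketch of the logic, not the exact script used in step 4, and the function name `needs_failover` is made up for illustration:

```shell
#!/bin/bash
# keepalived only notices its own death, so a check script must translate
# "nginx is down" into "keepalived is down" by stopping keepalived when
# nginx has no running processes, which releases the VIP to the other node.
needs_failover() {
    # $1: number of running nginx processes
    [ "$1" -eq 0 ]
}

count=$(ps -C nginx --no-headers | wc -l)
if needs_failover "$count"; then
    echo "nginx is down: stopping keepalived would release the VIP"
    # killall keepalived   # what the real script in step 4 does
fi
```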

The configuration steps are as follows.
1. Initialize the four test servers; stop and disable the services that should be off (firewall, SELinux).

[[email protected] ~]# vim /etc/hosts
192.168.1.200   ng-vip
192.168.1.101   ng-master
192.168.1.102   ng-slave
192.168.1.161   web1
192.168.1.162   web2

[[email protected] ~]# yum clean all
[[email protected] ~]# systemctl stop firewalld.service
[[email protected] ~]# systemctl disable firewalld.service
[[email protected] ~]# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

2. Configure the Apache service on web1 and web2 (same procedure on both).

[[email protected] ~]# yum -y install httpd
[[email protected] ~]# systemctl start httpd
[[email protected] ~]# systemctl enable httpd
ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service'
[[email protected] ~]# cat /var/www/html/index.html
hello this lvs-web1

[[email protected] ~]# yum -y install httpd
[[email protected] ~]# systemctl start httpd
[[email protected] ~]# systemctl enable httpd
ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service'
[[email protected] ~]# cat /var/www/html/index.html
hello this lvs-web2

3. Install and configure nginx on both proxy nodes via yum (same procedure on each).

[[email protected] ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/x86_64/
gpgcheck=0
enabled=1
[[email protected] ~]# yum clean all
[[email protected] ~]# yum -y install nginx
[[email protected] ~]# vim /usr/share/nginx/html/index.html
<h1>Welcome to ng-master!</h1>
[[email protected] ~]# cd /etc/nginx/conf.d/
[[email protected] conf.d]# mv default.conf default.conf.1
[[email protected] ~]# vim /etc/nginx/conf.d/web.conf
    upstream myapp1 {
        server web1;
        server web2;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp1;
        }
    }
[[email protected] ~]# systemctl restart nginx.service
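The upstream block above uses the defaults: round-robin balancing with nginx's passive health checks (a backend is marked failed after max_fails errors within fail_timeout). A hedged sketch of a tuned variant; the weights and timeouts here are illustrative, not values from this lab:

```nginx
upstream myapp1 {
    server web1 weight=2 max_fails=3 fail_timeout=30s;  # receives 2 of every 3 requests
    server web2 max_fails=3 fail_timeout=30s;
    # server web3 backup;   # only used when all primary servers are down
}
```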

[[email protected] ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/x86_64/
gpgcheck=0
enabled=1
[[email protected] ~]# yum clean all
[[email protected] ~]# yum -y install nginx
[[email protected] ~]# vim /usr/share/nginx/html/index.html
<h1>Welcome to ng-slave!</h1>
[[email protected] ~]# cd /etc/nginx/conf.d/
[[email protected] conf.d]# mv default.conf default.conf.1
[[email protected] ~]# vim /etc/nginx/conf.d/web.conf
    upstream myapp1 {
        server web1;
        server web2;
    }
    server {
        listen 80;

        location / {
            proxy_pass http://myapp1;
        }
    }
[[email protected] ~]# systemctl restart nginx.service

4. Install keepalived on the master nginx server and set up the nginx health-check script.

[[email protected] conf.d]# yum -y install keepalived
[[email protected] conf.d]# cd /etc/keepalived/
[[email protected] keepalived]# cp keepalived.conf keepalived.conf.1
[[email protected] keepalived]# vim keepalived.conf
global_defs {
   notification_email {
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server smtp.mail.com
   smtp_connect_timeout 30
   router_id HA_MASTER1            # identifier for this keepalived node, shown in alert-mail subjects
}
vrrp_script chk_http_port {
    script "/usr/local/keepalived/nginx.sh"   # script that checks the nginx status
    interval 2                                # run the check every 2 seconds
    weight 2
}
vrrp_instance VI_2 {                # VRRP instance
    state MASTER                    # MASTER here, BACKUP on the other node
    interface eno16777736           # interface the VRRP/HA traffic uses
    virtual_router_id 51            # virtual router ID, must be identical on master and backup
    priority 100                    # must be higher than the backup's priority
    advert_int 1                    # seconds between master/backup advertisements
    authentication {                # must match on master and backup
        auth_type PASS              # authentication used during failover
        auth_pass 1111              # password
    }
    track_script {
        chk_http_port               # run the health check defined above
    }
    virtual_ipaddress {
        192.168.1.200/24 dev eno16777736 label eno16777736:1   # the virtual IP
    }
}
[[email protected] keepalived]# mkdir -p /usr/local/keepalived
[[email protected] keepalived]# vim /usr/local/keepalived/nginx.sh
#!/bin/bash
# If no nginx process is left running, stop keepalived so that the VIP
# fails over to the other node.
if [ "$(ps -C nginx --no-headers | wc -l)" -eq 0 ]; then
    killall keepalived
fi
[[email protected] keepalived]# chmod 755 /usr/local/keepalived/nginx.sh
[[email protected] keepalived]# systemctl start keepalived
[[email protected] keepalived]# ifconfig -a
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.101  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::20c:29ff:fefe:6f3  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:fe:06:f3  txqueuelen 1000  (Ethernet)

eno16777736:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.200  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:fe:06:f3  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
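Because the script above kills keepalived outright, the `weight 2` on chk_http_port never actually takes effect. An alternative, sketched below, is to let the check script merely exit non-zero and have keepalived lower this node's priority, so the VIP moves without stopping the daemon; the numbers are illustrative, not from this lab:

```
vrrp_script chk_http_port {
    script "pidof nginx"    # exits non-zero when nginx is not running
    interval 2
    weight -30              # on failure, priority drops 100 -> 70, below the backup's 80
}
```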

5. Install keepalived on the backup nginx server and set up the same health-check script; the configuration differs slightly from the master's.

[[email protected] conf.d]# yum -y install keepalived
[[email protected] conf.d]# cd /etc/keepalived/
[[email protected] keepalived]# cp keepalived.conf keepalived.conf.1
[[email protected] keepalived]# vim keepalived.conf
global_defs {
   notification_email {
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server smtp.mail.com
   smtp_connect_timeout 30
   router_id HA_BACKUP1            # identifier for this keepalived node; each node should use its own
}
vrrp_script chk_http_port {
    script "/usr/local/keepalived/nginx.sh"   # script that checks the nginx status
    interval 2                                # run the check every 2 seconds
    weight 2
}
vrrp_instance VI_2 {                # VRRP instance
    state BACKUP                    # BACKUP here, MASTER on the other node
    interface eno16777736           # interface the VRRP/HA traffic uses
    virtual_router_id 51            # virtual router ID, must be identical on master and backup
    priority 80                     # lower than the master's 100
    advert_int 1                    # seconds between master/backup advertisements
    authentication {                # must match on master and backup
        auth_type PASS              # authentication used during failover
        auth_pass 1111              # password
    }
    track_script {
        chk_http_port               # run the health check defined above
    }
    virtual_ipaddress {
        192.168.1.200/24 dev eno16777736 label eno16777736:1   # the virtual IP
    }
}
[[email protected] keepalived]# mkdir -p /usr/local/keepalived
[[email protected] keepalived]# vim /usr/local/keepalived/nginx.sh
#!/bin/bash
# If no nginx process is left running, stop keepalived so that the VIP
# fails over to the other node.
if [ "$(ps -C nginx --no-headers | wc -l)" -eq 0 ]; then
    killall keepalived
fi
[[email protected] keepalived]# chmod 755 /usr/local/keepalived/nginx.sh
[[email protected] keepalived]# systemctl start keepalived
[[email protected] keepalived]# ifconfig -a
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.102  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::20c:29ff:fe87:fd0e  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:87:fd:0e  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)

6. Test: browse to http://192.168.1.200/ and you will see requests alternating between web1 and web2.
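One way to observe the alternation without a browser is to fetch the VIP repeatedly and count the distinct response bodies. The curl loop against http://192.168.1.200/ assumes the lab is running; the demonstration below runs the same counting pipeline on two sample page bodies:

```shell
#!/bin/bash
# Count how many times each distinct response body appears.
count_distinct() {
    sort | uniq -c | sort -rn
}

# Against the live lab you would pipe real responses:
#   for i in $(seq 1 10); do curl -s http://192.168.1.200/; done | count_distinct
# Demonstration with sample bodies; the first output line is the most
# frequent body, here "2 hello this lvs-web1".
printf 'hello this lvs-web1\nhello this lvs-web2\nhello this lvs-web1\n' | count_distinct
```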

6.1 Stop the keepalived service on the master nginx node, and observe that the bound VIP disappears from the master.

[[email protected] keepalived]# systemctl stop keepalived.service
[[email protected] keepalived]# ifconfig -a
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.101  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::20c:29ff:fefe:6f3  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:fe:06:f3  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)

The VIP now appears on the backup node:
[[email protected] keepalived]# ifconfig -a
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.102  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::20c:29ff:fe87:fd0e  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:87:fd:0e  txqueuelen 1000  (Ethernet)

eno16777736:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.200  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:87:fd:0e  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)

Browsing to http://192.168.1.200/ shows that traffic still alternates between web1 and web2.

6.2 Start keepalived on the master node again, and the VIP floats back to the master.

[[email protected] keepalived]# systemctl start keepalived.service
[[email protected] keepalived]# ifconfig -a
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.101  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::20c:29ff:fefe:6f3  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:fe:06:f3  txqueuelen 1000  (Ethernet)

eno16777736:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.200  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:fe:06:f3  txqueuelen 1000  (Ethernet)
Browsing to http://192.168.1.200/ shows that traffic still alternates between web1 and web2.

6.3 Stop the nginx service on the master node: the health-check script stops keepalived, so the VIP disappears from the master and appears on the backup node.

[[email protected] keepalived]# systemctl stop nginx.service
[[email protected] keepalived]# ifconfig -a
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.101  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::20c:29ff:fefe:6f3  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:fe:06:f3  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)

[[email protected] keepalived]# systemctl status keepalived
keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled)
   Active: inactive (dead)

Browsing to http://192.168.1.200/ shows that traffic still alternates between web1 and web2.

6.4 After starting nginx and keepalived on the master node again, the VIP floats back to the master.

[[email protected] keepalived]# systemctl start nginx.service
[[email protected] keepalived]# systemctl start keepalived
Browsing to http://192.168.1.200/ shows that traffic still alternates between web1 and web2.


Date: 2024-08-02 02:46:13
