LVS+Keepalived+Nginx+Tomcat High-Availability Load-Balancing Cluster Configuration (DR Mode, One VIP, Multiple Ports)

I. Overview

Role of LVS: provides load balancing.

Role of Keepalived: monitors the state of each service node in the cluster; provides HA (high-availability) clustering.

There are two ways to configure LVS:

1. With the ipvsadm command-line tool

2. With piranha, the LVS configuration tool provided by Red Hat

Software downloads:

ipvsadm download:

http://www.linuxvirtualserver.org/software/kernel-2.6/

Keepalived download:

http://www.keepalived.org/software/

Package versions:

ipvsadm-1.24.tar.gz

keepalived-1.2.2.tar.gz

Operating system version:

Red Hat Enterprise Linux Server release 6.4 (Santiago)

II. Server IP Plan

Server                  | IP Address  | Gateway    | Virtual Device | Virtual IP (VIP) | Deployed Software
------------------------|-------------|------------|----------------|------------------|-------------------
Master Director Server  | 10.50.13.34 | 10.50.13.1 | eth0:0         | 10.50.13.11      | LVS + Keepalived
Backup Director Server  | 10.50.13.35 | 10.50.13.1 | eth0:0         | 10.50.13.11      | LVS + Keepalived
Real Server 1           | 10.50.13.36 | 10.50.13.1 | lo:0           | 10.50.13.11      | Nginx + Tomcat
Real Server 2           | 10.50.13.37 | 10.50.13.1 | lo:0           | 10.50.13.11      | Nginx + Tomcat

III. LVS Installation and Configuration

1. This setup configures LVS with ipvsadm; install the IPVS management utility on the Director Servers

Check whether the kernel already supports the LVS ipvs modules with the following command:

$ modprobe -l |grep ipvs
kernel/net/netfilter/ipvs/ip_vs.ko
kernel/net/netfilter/ipvs/ip_vs_rr.ko
kernel/net/netfilter/ipvs/ip_vs_wrr.ko
kernel/net/netfilter/ipvs/ip_vs_lc.ko
kernel/net/netfilter/ipvs/ip_vs_wlc.ko
kernel/net/netfilter/ipvs/ip_vs_lblc.ko
kernel/net/netfilter/ipvs/ip_vs_lblcr.ko
kernel/net/netfilter/ipvs/ip_vs_dh.ko
kernel/net/netfilter/ipvs/ip_vs_sh.ko
kernel/net/netfilter/ipvs/ip_vs_sed.ko
kernel/net/netfilter/ipvs/ip_vs_nq.ko
kernel/net/netfilter/ipvs/ip_vs_ftp.ko

If you see output like the above, the kernel already includes IPVS module support by default, and you can proceed to install the IPVS management utility.

2. Install ipvsadm

$ tar zxf ipvsadm-1.24.tar.gz
$ cd ipvsadm-1.24
$ sudo make
$ sudo make install

make may fail because the build cannot find the matching kernel source. Create the following symlink, pointing at the source directory of the running kernel (here the RHEL 6.4 kernel), and rerun the build:

$ sudo ln -s /usr/src/kernels/2.6.32-358.el6.x86_64/ /usr/src/linux
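After installation, a quick check that ipvsadm can talk to the kernel IPVS module (the virtual server table will be empty at this point):

$ sudo /sbin/ipvsadm -Ln
$ lsmod | grep ip_vs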

3. Configure ipvsadm

For ease of management and configuration, wrap the ipvsadm setup in a start/stop script:

$ cat lvsdr
#!/bin/bash
# description: Start LVS on the Director Server
VIP=10.50.13.11
RIP=`echo 10.50.13.{36..37}`
. /etc/rc.d/init.d/functions
case "$1" in
start)
    echo "start LVS of Director Server"
    # set the virtual IP address and sysctl parameter
    /sbin/ifconfig eth0:0 ${VIP} broadcast ${VIP} netmask 255.255.255.255 up
    /sbin/route add -host ${VIP} dev eth0:0
    #echo "1" >/proc/sys/net/ipv4/ip_forward
    # clear the IPVS table
    /sbin/ipvsadm -C
    # define the two virtual services (round-robin, 600s persistence)
    /sbin/ipvsadm -A -t ${VIP}:80 -s rr -p 600
    /sbin/ipvsadm -A -t ${VIP}:90 -s rr -p 600
    # add each real server to both services in DR (gateway) mode
    for rip in ${RIP}; do
        /sbin/ipvsadm -a -t ${VIP}:80 -r ${rip}:80 -g
        /sbin/ipvsadm -a -t ${VIP}:90 -r ${rip}:90 -g
    done
    # show the resulting IPVS table
    /sbin/ipvsadm
    ;;
stop)
    echo "close LVS Director Server"
    #echo "0" >/proc/sys/net/ipv4/ip_forward
    /sbin/ipvsadm -C
    /sbin/ifconfig eth0:0 down
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac

Place the script under /etc/init.d, make it executable, and start it:

$ chmod +x /etc/init.d/lvsdr

$ service lvsdr start
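To verify, check that eth0:0 now carries the VIP and that the IPVS table lists both virtual services with their real servers (actual output varies by environment):

$ /sbin/ifconfig eth0:0
$ sudo /sbin/ipvsadm -Ln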

4. Real Server Configuration

In the LVS DR and TUN modes, a real server replies to the client directly instead of routing the response back through the front-end Director Server. Each Real Server node therefore needs the VIP configured locally so that it can accept and answer packets addressed to the VIP. This can be done with a script; create the file /etc/init.d/lvsrs with the following content:

$ cat /etc/init.d/lvsrs
#!/bin/bash
# Bind the VIP to a loopback alias so this real server can accept
# packets addressed to the VIP and reply to clients directly.
VIP=10.50.13.11
/sbin/ifconfig lo:0 ${VIP} broadcast ${VIP} netmask 255.255.255.255 up
/sbin/route add -host ${VIP} dev lo:0
# Suppress ARP replies/announcements for the VIP so that only the
# Director answers ARP requests for it.
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p
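Make the script executable and run it on both real servers (10.50.13.36 and 10.50.13.37):

$ sudo chmod +x /etc/init.d/lvsrs
$ sudo /etc/init.d/lvsrs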

IV. Keepalived Installation and Configuration

The steps below were performed as a regular user with sudo privileges.

tar zxf keepalived-1.2.2.tar.gz
cd keepalived-1.2.2
./configure --sysconfdir=/etc --with-kernel-dir=/usr/src/kernels/2.6.32-358.el6.x86_64
sudo make
sudo make install
sudo cp /usr/local/sbin/keepalived /sbin/keepalived
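A quick sanity check that the binary is in place and reports the expected version:

/sbin/keepalived -v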

Configure Keepalived

Keepalived's configuration files live under /etc/keepalived/. The edited MASTER configuration file is as follows:

$ cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 10.50.13.34
    smtp_connect_timeout 30
    router_id LVS_MASTER
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.50.13.11
    }
    notify_master "/etc/init.d/lvsdr start"
    notify_backup "/etc/init.d/lvsdr stop"
}

virtual_server 10.50.13.11 80 {
    delay_loop 2
    lb_algo rr
    lb_kind DR
    #nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 10.50.13.36 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.50.13.37 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.50.13.11 90 {
    delay_loop 2
    lb_algo rr
    lb_kind DR
    #nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 10.50.13.36 90 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.50.13.37 90 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

The BACKUP configuration file is as follows:

$ cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 10.50.13.35
    smtp_connect_timeout 30
    router_id LVS_BACKUP
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.50.13.11
    }
    notify_master "/etc/init.d/lvsdr start"
    notify_backup "/etc/init.d/lvsdr stop"
}

virtual_server 10.50.13.11 80 {
    delay_loop 2
    lb_algo rr
    lb_kind DR
    #nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 10.50.13.36 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.50.13.37 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.50.13.11 90 {
    delay_loop 2
    lb_algo rr
    lb_kind DR
    #nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 10.50.13.36 90 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.50.13.37 90 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

Start keepalived with:

/etc/init.d/keepalived start
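To confirm that the MASTER currently holds the VIP and that the health checks are running (keepalived logs to syslog under the default RHEL setup):

$ ip addr show eth0 | grep 10.50.13.11
$ sudo tail -n 50 /var/log/messages | grep -i keepalived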

V. Nginx and Tomcat Installation and Configuration

1. Install Nginx

$ tar zxf nginx-1.6.1.tar.gz
$ cd nginx-1.6.1
$ ./configure --prefix=/app/nginx --with-http_ssl_module --with-http_realip_module --add-module=/app/soft/nginx-sticky
$ make
$ make install

2. Configure Nginx:

upstream test_80 {
  sticky;
  server 10.50.13.36:8080;
  server 10.50.13.37:8080;
}
upstream test_90 {
  sticky;
  server 10.50.13.36:8090;
  server 10.50.13.37:8090;
}

server {
  listen 80;
  server_name localhost;
  .....
  location ^~ /test/ {
    proxy_pass http://test_80;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}

server {
  listen 90;
  server_name localhost;
  .....
  location ^~ /test/ {
    proxy_pass http://test_90;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}
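After editing, validate the configuration and reload (the binary path follows the --prefix used above):

$ /app/nginx/sbin/nginx -t
$ /app/nginx/sbin/nginx -s reload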

3. Install Tomcat (straightforward; steps omitted)

Install two Tomcat instances on each of 10.50.13.36 and 10.50.13.37, listening on ports 8080 and 8090 respectively.

Install paths: /app/tomcat and /app/tomcat2 (the port changes for the second instance are sketched below)
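Since both instances run on the same host, the ports in /app/tomcat2/conf/server.xml must not collide with the first instance. A minimal sketch of the relevant lines, assuming a stock server.xml (the shutdown port 8006 and AJP port 8010 are illustrative choices, not taken from the original setup):

<!-- /app/tomcat2/conf/server.xml: changed ports only -->
<Server port="8006" shutdown="SHUTDOWN">
  ...
  <Connector port="8090" protocol="HTTP/1.1"
             connectionTimeout="20000"
             redirectPort="8443" />
  <Connector port="8010" protocol="AJP/1.3" redirectPort="8443" />
  ...
</Server>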

4. Prepare Tomcat test pages

10.50.13.36:

$ mkdir /app/tomcat/webapps/test
$ cd /app/tomcat/webapps/test
$ vi index.jsp   # add the following content

<%@ page language="java" %>
<html>
  <head><title>10.50.13.36_tomcat1</title></head>
  <body>
    <h1><font color="blue">10.50.13.36_tomcat1 </font></h1>
    <table align="center" border="1">
      <tr>
        <td>Session ID</td>
    <% session.setAttribute("abc","abc"); %>
        <td><%= session.getId() %></td>
      </tr>
      <tr>
        <td>Created on</td>
        <td><%= session.getCreationTime() %></td>
      </tr>
    </table>
  </body>
</html>

$ mkdir /app/tomcat2/webapps/test
$ cd /app/tomcat2/webapps/test
$ vi index.jsp   # add the following content

<%@ page language="java" %>
<html>
  <head><title>10.50.13.36_tomcat2</title></head>
  <body>
    <h1><font color="blue">10.50.13.36_tomcat2 </font></h1>
    <table align="center" border="1">
      <tr>
        <td>Session ID</td>
    <% session.setAttribute("abc","abc"); %>
        <td><%= session.getId() %></td>
      </tr>
      <tr>
        <td>Created on</td>
        <td><%= session.getCreationTime() %></td>
      </tr>
    </table>
  </body>
</html>

10.50.13.37:

$ mkdir /app/tomcat/webapps/test
$ cd /app/tomcat/webapps/test
$ vi index.jsp   # add the following content

<%@ page language="java" %>
<html>
  <head><title>10.50.13.37_tomcat1</title></head>
  <body>
    <h1><font color="blue">10.50.13.37_tomcat1 </font></h1>
    <table align="center" border="1">
      <tr>
        <td>Session ID</td>
    <% session.setAttribute("abc","abc"); %>
        <td><%= session.getId() %></td>
      </tr>
      <tr>
        <td>Created on</td>
        <td><%= session.getCreationTime() %></td>
      </tr>
    </table>
  </body>
</html>

$ mkdir /app/tomcat2/webapps/test
$ cd /app/tomcat2/webapps/test
$ vi index.jsp   # add the following content

<%@ page language="java" %>
<html>
  <head><title>10.50.13.37_tomcat2</title></head>
  <body>
    <h1><font color="blue">10.50.13.37_tomcat2 </font></h1>
    <table align="center" border="1">
      <tr>
        <td>Session ID</td>
    <% session.setAttribute("abc","abc"); %>
        <td><%= session.getId() %></td>
      </tr>
      <tr>
        <td>Created on</td>
        <td><%= session.getCreationTime() %></td>
      </tr>
    </table>
  </body>
</html>
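Start (or restart) all four Tomcat instances on 10.50.13.36 and 10.50.13.37 so the test pages are served (standard Tomcat startup scripts):

$ /app/tomcat/bin/startup.sh
$ /app/tomcat2/bin/startup.sh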

VI. Testing

Access http://10.50.13.11/test/index.jsp in a browser.

Check the LVS statistics.

Access http://10.50.13.11/test/index.jsp with curl.

Check the LVS statistics again.

Access http://10.50.13.11:90/test/index.jsp in a browser.

Check the LVS statistics once more: the counters for 10.50.13.11:dnsix now change (without -n, ipvsadm prints port 90 by its /etc/services name, dnsix).
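For example, the traffic can be generated and the counters inspected like this (a sketch; the counter values will differ per environment):

$ curl http://10.50.13.11/test/index.jsp
$ curl http://10.50.13.11:90/test/index.jsp
$ sudo /sbin/ipvsadm -L --stats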

To test keepalived, stop the keepalived service on the MASTER and check whether the VIP floats over to the BACKUP, as sketched below.
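A minimal failover check (using the init script installed above):

# on the MASTER
$ sudo /etc/init.d/keepalived stop
# on the BACKUP, the VIP should appear within a few seconds
$ ip addr show eth0 | grep 10.50.13.11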

This test shows that the BACKUP takes over the service correctly.

