Linux System Architecture (LB/HA Clusters): LVS Configuration

LB Clusters: LVS

HA = high availability, LB = load balancing.

Load-balancing software: nginx, LVS, keepalived.

Hardware load balancers: F5, Citrix NetScaler.

LVS has three forwarding modes:

1. NAT (network address translation)

2. TUN (IP tunneling)

3. DR (direct routing)

Static scheduling algorithms: rr, wrr, dh, sh

Dynamic scheduling algorithms: wlc, lc, lblc, lblcr
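The scheduler is chosen per virtual service with ipvsadm's -s option; a minimal sketch (the address 192.168.2.22 is simply the VIP used later in this article):

# create a virtual service on TCP port 80 using weighted round-robin
ipvsadm -A -t 192.168.2.22:80 -s wrr

# switch the same service to weighted least-connection without recreating it
ipvsadm -E -t 192.168.2.22:80 -s wlc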

LVS NAT mode configuration:

Prepare three machines: one acts as the director (forwarder), the other two are real servers that provide the actual service.

To tell them apart, machine 1 is dir, machine 2 is rs1, and machine 3 is rs2.

Machine 1 (dir):

[root@dir ~]# hostname dir

[root@dir ~]# ifconfig

eth0      inet addr:192.168.137.22

eth1      inet addr:192.168.2.22

// dir has two NICs: assume eth0 connects to the internal network (internal address) and eth1 connects to the external network (external address)
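For reference, a rough sketch of what dir's two interface files could look like on CentOS 6 (only the IP addresses come from this setup; the other values are assumptions):

# /etc/sysconfig/network-scripts/ifcfg-eth0  (internal side)
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.137.22
NETMASK=255.255.255.0

# /etc/sysconfig/network-scripts/ifcfg-eth1  (external side)
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.2.22
NETMASK=255.255.255.0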

Machine 2 (rs1):

[root@rs1 ~]# hostname rs1

[root@rs1 ~]# bash

[root@rs1 ~]# ifconfig

eth0      inet addr:192.168.137.21

Machine 3 (rs2):

[root@rs2 ~]# hostname rs2

[root@rs2 ~]# bash

[root@rs2 ~]# ifconfig

eth0      inet addr:192.168.137.23

On dir:

[root@dir ~]# bash

[root@dir ~]# yum install -y ipvsadm

[root@dir ~]# vim /usr/local/sbin/lvs_nat.sh        // write a script to keep things manageable, with the following content:

#! /bin/bash

# enable IP forwarding on the director:

echo 1 > /proc/sys/net/ipv4/ip_forward

# disable ICMP redirects

echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects

echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects

echo 0 > /proc/sys/net/ipv4/conf/eth0/send_redirects

echo 0 > /proc/sys/net/ipv4/conf/eth1/send_redirects

# set up the NAT firewall rule on the director

iptables -t nat -F

iptables -t nat -X

iptables -t nat -A POSTROUTING -s 192.168.137.0/24 -j MASQUERADE

# configure ipvsadm on the director

IPVSADM='/sbin/ipvsadm'

$IPVSADM -C

$IPVSADM -A -t 192.168.2.22:80 -s rr

$IPVSADM -a -t 192.168.2.22:80 -r 192.168.137.21:80 -m -w 1

$IPVSADM -a -t 192.168.2.22:80 -r 192.168.137.23:80 -m -w 1

[root@dir ~]# sh /usr/local/sbin/lvs_nat.sh

[root@dir ~]# ipvsadm -l

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port          Forward Weight ActiveConn InActConn

TCP 10.203.141.18:http lc persistent 300

-> 192.168.137.21:http         Masq    1      0         0

-> 192.168.137.23:http         Masq    1      0         0

[root@dir ~]# ipvsadm -ln

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port          Forward Weight ActiveConn InActConn

TCP 192.168.2.22:80 rr

-> 192.168.137.21:80           Masq    1      0         1

-> 192.168.137.23:80           Masq    1      0         0

// configuration successful

On rs1:

[root@rs1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0

GATEWAY=192.168.137.22                     // the gateway must be set to dir's internal IP address

[root@rs1 ~]# ifdown eth0; ifup eth0
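A quick way to confirm the new default route took effect (a sketch; route -n comes with net-tools on CentOS 6):

# the default route (the 0.0.0.0 ... UG line) should now list 192.168.137.22 as the gateway on eth0
[root@rs1 ~]# route -n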

On rs2:

[root@rs2 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0

GATEWAY=192.168.137.22

[root@rs2 ~]# ifdown eth0; ifup eth0

[root@rs2 ~]# service NetworkManager stop

[root@rs2 ~]# chkconfig NetworkManager off

[root@rs2 ~]# service network restart

On rs1:

[root@rs1 ~]# /etc/init.d/nginx start

Starting nginx:                                           [  OK  ]

[root@rs1 ~]# netstat -lnp |grep nginx

tcp     0    0 0.0.0.0:80         0.0.0.0:*               LISTEN      9423/nginx

[root@rs1 ~]# curl localhost

master

On rs2:

[root@rs2 ~]# netstat -lnp |grep nginx

tcp     0     0 0.0.0.0:80        0.0.0.0:*              LISTEN      4871/nginx

[root@rs2 ~]# curl localhost

slave

Check the external address on dir:

[root@dir ~]# ifconfig

eth1     inet addr:192.168.2.22

When a Windows client accesses 192.168.2.22 repeatedly, the responses alternate between master and slave; the same can be seen with curl:

[root@client ~]# curl 192.168.2.22

master

[root@client ~]# curl 192.168.2.22

slave

[root@client ~]# curl 192.168.2.22

master

[root@client ~]# curl 192.168.2.22

slave

[root@dir ~]# vim /usr/local/sbin/lvs_nat.sh        // change the scheduler to wrr and give rs1 weight 2:

#! /bin/bash

# enable IP forwarding on the director:

echo 1 > /proc/sys/net/ipv4/ip_forward

# disable ICMP redirects

echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects

echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects

echo 0 > /proc/sys/net/ipv4/conf/eth0/send_redirects

echo 0 > /proc/sys/net/ipv4/conf/eth1/send_redirects

# set up the NAT firewall rule on the director

iptables -t nat -F

iptables -t nat -X

iptables -t nat -A POSTROUTING -s 192.168.137.0/24 -j MASQUERADE

# configure ipvsadm on the director

IPVSADM='/sbin/ipvsadm'

$IPVSADM -C

$IPVSADM -A -t 192.168.2.22:80 -s wrr

$IPVSADM -a -t 192.168.2.22:80 -r 192.168.137.21:80 -m -w 2

$IPVSADM -a -t 192.168.2.22:80 -r 192.168.137.23:80 -m -w 1

[root@dir ~]# sh /usr/local/sbin/lvs_nat.sh

Accessing 192.168.2.22 repeatedly now returns master twice for each slave, matching the 2:1 weights:

[root@client ~]# curl 192.168.2.22

master

[root@client ~]# curl 192.168.2.22

master

[root@client ~]# curl 192.168.2.22

slave

[root@client ~]# curl 192.168.2.22

master

[root@client ~]# curl 192.168.2.22

master

[root@client ~]# curl 192.168.2.22

slave

[root@client ~]# curl 192.168.2.22

master

[root@client ~]# curl 192.168.2.22

master

[root@client ~]# curl 192.168.2.22

slave
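To cross-check the 2:1 split on the director itself, ipvsadm's counters can be consulted (a sketch; the exact numbers depend on how many requests were sent):

# per-real-server connection and packet counters since the rules were loaded
[root@dir ~]# ipvsadm -ln --stats

# zero the counters before a fresh test run
[root@dir ~]# ipvsadm -Z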

LVS DR mode configuration

Clear the previous rules first.

On dir:

[root@dir ~]# ipvsadm -ln                            // check the current rules

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port          Forward Weight ActiveConn InActConn

TCP 192.168.2.22:80 wrr

-> 192.168.137.21:80           Masq    2      0         0

-> 192.168.137.23:80           Masq    1      0         0

[root@dir ~]# ipvsadm -C

[root@dir ~]# ipvsadm -ln

[root@dir ~]# iptables -t nat -F

[root@dir ~]# ifdown eth1

[root@dir ~]# vim /usr/local/sbin/lvs_dr.sh

#! /bin/bash

echo 1 > /proc/sys/net/ipv4/ip_forward

ipv=/sbin/ipvsadm

vip=192.168.137.100

rs1=192.168.137.21

rs2=192.168.137.23

# bind the VIP to eth0:0 on the director and add a host route for it
ifconfig eth0:0 $vip broadcast $vip netmask 255.255.255.255 up

route add -host $vip dev eth0:0

$ipv -C

$ipv -A -t $vip:80 -s wrr

$ipv -a -t $vip:80 -r $rs1:80 -g -w 1

$ipv -a -t $vip:80 -r $rs2:80 -g -w 1

[root@dir ~]# sh !$

sh /usr/local/sbin/lvs_dr.sh

[root@dir ~]# ipvsadm -ln

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port          Forward Weight ActiveConn InActConn

TCP 192.168.137.100:80 rr

-> 192.168.137.21:80           Route   1      0         0

-> 192.168.137.23:80           Route   1      0         0

On rs1:

[root@rs1 ~]# vim /usr/local/sbin/lvs_dr_rs.sh

#! /bin/bash

vip=192.168.137.100

# bind the VIP to the loopback alias lo:0 and add a host route for it
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up

route add -host $vip lo:0

# ARP suppression: the real server must not answer ARP requests for the VIP
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore

echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce

echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore

echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce

[root@rs1 ~]# sh /usr/local/sbin/lvs_dr_rs.sh
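To verify the kernel picked up the ARP settings (a sketch using sysctl, which reads the same /proc entries):

# expected: arp_ignore = 1, arp_announce = 2
[root@rs1 ~]# sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce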

[root@rs1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0

GATEWAY=192.168.137.1                      // in DR mode the gateway points back to the regular router, not to dir

[root@rs1 ~]# service network restart

On rs2:

[root@rs2 ~]# vim /usr/local/sbin/lvs_dr_rs.sh

#! /bin/bash

vip=192.168.137.100

# bind the VIP to the loopback alias lo:0 and add a host route for it
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up

route add -host $vip lo:0

# ARP suppression: the real server must not answer ARP requests for the VIP
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore

echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce

echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore

echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce

[root@rs2 ~]# sh !$

sh /usr/local/sbin/lvs_dr_rs.sh

[root@rs2 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0

GATEWAY=192.168.137.1

[root@rs2 ~]# service network restart

Client access test:

Browser tests are sometimes unreliable here, so we test with curl from a fourth machine instead; see the sketch below.

The responses alternate: one master, one slave.
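A minimal sketch of that test from the fourth machine (the loop count is arbitrary):

# hit the VIP several times in a row; with the round-robin schedulers the output alternates master/slave
for i in $(seq 1 6); do curl -s 192.168.137.100; done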

LVS combined with keepalived

With the configuration above, if one real server goes down, LVS still round-robins requests to it, and that becomes a problem.

Stop the service on rs1 (to simulate a failure):

[root@rs1 ~]# /etc/init.d/nginx stop

Client test:

client:~$ curl 192.168.137.100

slave

client:~$ curl 192.168.137.100

curl: (7) Failed to connect to 192.168.137.100 port 80: Connection refused

client:~$ curl 192.168.137.100

slave

client:~$ curl 192.168.137.100

curl: (7) Failed to connect to 192.168.137.100 port 80: Connection refused

client:~$ curl 192.168.137.100

slave

client:~$ curl 192.168.137.100

curl: (7) Failed to connect to 192.168.137.100 port 80: Connection refused

client:~$

Every other request fails, because those requests are still forwarded to the real server whose service is stopped.

This can be solved with a third-party package, keepalived, which combines load balancing and high availability.

keepalived needs a master and a backup.

On dir:

[root@dir ~]# ipvsadm -C             // clear the previous rules

[root@dir ~]# yum install -y keepalived

To save a little in resources, the backup runs on rs2.

On rs2:

[root@rs2 ~]# yum install -y keepalived

On dir:

[root@dir ~]# vim /etc/keepalived/keepalived.conf

vrrp_instance VI_1 {
    state MASTER             # BACKUP on the backup server
    interface eth0
    virtual_router_id 51
    priority 100             # 90 on the backup server
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.137.100
    }
}

virtual_server 192.168.137.100 80 {
    delay_loop 6                  # (check real server health every 6 seconds)
    lb_algo wlc                   # (LVS scheduling algorithm)
    lb_kind DR                    # (direct routing)
    persistence_timeout 60        # (connections from the same IP stick to the same real server for 60 seconds)
    protocol TCP                  # (use TCP to check real server health)

    real_server 192.168.137.21 80 {
        weight 100                # (weight)
        TCP_CHECK {
            connect_timeout 10    # (time out after 10 seconds without a response)
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }

    real_server 192.168.137.23 80 {
        weight 100
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

[root@dir ~]# scp /etc/keepalived/keepalived.conf 192.168.137.23:/etc/keepalived/keepalived.conf

// copy the configuration file to the backup

On rs2:

[root@rs2 ~]# vim /etc/keepalived/keepalived.conf

state BACKUP

priority 90

On dir:

[root@dir ~]# ipvsadm -ln         // check: no rules yet

[root@dir ~]# ifconfig                // the virtual IP is still present (left over from the earlier lvs_dr.sh)

eth0:0   inet addr:192.168.137.100

[root@dir ~]# /etc/init.d/keepalived start

Start it on rs2 as well:

[root@rs2 ~]# /etc/init.d/keepalived start

On dir:

[root@dir ~]# ipvsadm -ln

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port          Forward Weight ActiveConn InActConn

TCP 192.168.137.100:80 wlc persistent 60

-> 192.168.137.23:80           Route   100    0         0

On rs1, start the service that was stopped earlier:

[root@rs1 ~]# /etc/init.d/nginx start

Check again on dir:

[root@dir ~]# ipvsadm -ln

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port          Forward Weight ActiveConn InActConn

TCP 192.168.137.100:80 wlc persistent 60

-> 192.168.137.21:80           Route   100    0         0

-> 192.168.137.23:80           Route   100    0         0

// traffic is only forwarded to real servers whose health check passes
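To watch keepalived add and remove real servers in real time, its health-check messages go to syslog on CentOS 6 (a sketch):

[root@dir ~]# tail -f /var/log/messages | grep -i keepalived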

[root@dir ~]# ip addr

inet 192.168.137.100/32

The client can access the service again.

Now stop the service on one real server.

On rs1, stop the service:

[root@rs1 ~]# curl localhost

master

[root@rs1 ~]# /etc/init.d/nginx stop

Client test:

client:~$ curl 192.168.137.100

master

client:~$ curl 192.168.137.100

curl: (7) Failed to connect to 192.168.137.100 port 80: Connection refused

client:~$ curl 192.168.137.100

slave

client:~$ curl 192.168.137.100

slave

client:~$ curl 192.168.137.100

slave

// keepalived switches over automatically within a short time, sending traffic only to the real server that is still healthy

Check the rules on dir:

[root@dir ~]# ipvsadm -ln

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port          Forward Weight ActiveConn InActConn

TCP 192.168.137.100:80 wlc persistent 60

-> 192.168.137.23:80           Route   100    0         0

// only one real server is left

On rs1, start it again:

[root@rs1 ~]# /etc/init.d/nginx start

On dir:

[root@dir ~]# ipvsadm -ln

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port          Forward Weight ActiveConn InActConn

TCP 192.168.137.100:80 wlc persistent 60

-> 192.168.137.21:80           Route   100    0         0

-> 192.168.137.23:80           Route   100    0         0

// rs1 is added back automatically, so there are two real servers again

[root@dir ~]# ip addr

eth0:

inet 192.168.137.22/24 brd 192.168.137.255 scope global eth0

inet 192.168.137.100/32 brd 192.168.137.100 scope global eth0:0

// the virtual IP 192.168.137.100 is bound on dir

[root@rs2 ~]# ip addr

eth0:

inet 192.168.137.23/24 brd 192.168.137.255 scope global eth0

// rs2 has not bound the virtual IP 192.168.137.100 on eth0

After keepalived is stopped on the master:

[root@dir ~]# /etc/init.d/keepalived stop

On the backup:

[root@rs2 ~]# ip addr

eth0:

inet 192.168.137.23/24 brd 192.168.137.255 scope global eth0

inet 192.168.137.100/32 scope global eth0

// the virtual IP is taken over automatically
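To confirm failback, keepalived can simply be started on dir again; with keepalived's default preemption the higher-priority MASTER reclaims the VIP (a sketch):

[root@dir ~]# /etc/init.d/keepalived start

# after a few seconds the VIP should be back on dir and gone from rs2
[root@dir ~]# ip addr show eth0 | grep 192.168.137.100

[root@rs2 ~]# ip addr show eth0 | grep 192.168.137.100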
