LVS(DR)+keepalived+nfs+raid+LVM

LVS theory

1. The Client sends a request to the target VIP and the Director (load balancer) receives it. The IP header and frame header at this point are shown in the sketch after step 4.

2. Based on its scheduling algorithm the Director selects RealServer_1. It does not modify or re-encapsulate the IP packet; it only rewrites the frame's destination MAC address to RealServer_1's MAC address and sends the frame out on the LAN. The headers at this point are shown in the sketch below.

3. RealServer_1 receives the frame, de-encapsulates it and finds that the destination IP matches its own (the RealServer must have the VIP bound locally beforehand, this is mandatory), so it processes the packet. It then re-encapsulates the reply and sends it out on the LAN.

4. The Client receives the reply and, as far as it can tell, was served normally; it has no way of knowing which real server handled the request. (Note: if the client is on a different network segment, the reply travels back to it through the router and the Internet.)
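The figures that originally illustrated the headers at each step are not reproduced here; as a rough sketch (CIP = client IP, MAC-C = the client's or upstream router's MAC, MAC-D = the Director's MAC, MAC-R1 = RealServer_1's MAC):

Step 1  Client -> Director : src IP CIP, dst IP VIP | src MAC MAC-C,  dst MAC MAC-D
Step 2  Director -> RS_1   : src IP CIP, dst IP VIP | src MAC MAC-D,  dst MAC MAC-R1  (IP packet untouched)
Step 3  RS_1 -> Client     : src IP VIP, dst IP CIP | src MAC MAC-R1, dst MAC MAC-C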

 

The ARP problem in LVS-DR

 

 

In an LVS-DR load-balancing cluster the load balancer and the node servers are all configured with the same VIP address. Having the same IP address on one LAN inevitably disturbs ARP communication between the servers: when an ARP broadcast reaches the LVS-DR cluster, the load balancer and the node servers are all attached to the same network, so every one of them receives it. At that point only the front-end load balancer should respond; the node servers must not answer the ARP broadcast.

1. Configure the node servers so that they do not answer ARP requests for the VIP:

Carry the VIP address on the virtual interface lo:0

Set the kernel parameter arp_ignore=1, so the system only answers ARP requests whose target IP is configured on the receiving interface

2. When a RealServer sends its reply (source IP = VIP) back through the router, it must first obtain the router's MAC address while re-encapsulating the packet. By default the ARP request it sends uses the IP packet's source address (i.e. the VIP) as the source IP, instead of the IP address of the sending interface (e.g. eth0).

 

3. When the router receives that ARP request it updates its ARP table: the entry mapping the VIP to the Director's MAC address is overwritten with the RealServer's MAC address.

 

From then on, new incoming requests are forwarded by the router, according to its ARP table, directly to that RealServer, and the VIP on the Director effectively stops working!

 

 


Solution:

Configure the node servers with the kernel parameter arp_announce=2, so that the system does not use the IP packet's source address as the source of its ARP requests but uses the IP address of the sending interface instead. Both ARP problems can be handled with the following settings in /etc/sysctl.conf:

net.ipv4.conf.lo.arp_ignore = 1

net.ipv4.conf.lo.arp_announce = 2

net.ipv4.conf.all.arp_ignore = 1

net.ipv4.conf.all.arp_announce = 2
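A minimal way to make these settings take effect right away (assuming they were appended to /etc/sysctl.conf as shown above):

sysctl -p                               # reload /etc/sysctl.conf
sysctl net.ipv4.conf.all.arp_ignore     # spot-check: should print 1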

 

 

LVS-DR in practice:

Lab environment:

VIP:  192.168.18.41

BL:   192.168.18.31

web1:  192.168.18.32

web2:  192.168.18.33

nfs:    192.168.18.34

 

NFS shared storage:

[[email protected] ~]# mount /dev/cdrom /media/cdrom/

mount: block device /dev/sr0 is write-protected, mounting read-only

[[email protected] ~]# yum -y install  nfs-utils rpcbind

[[email protected] ~]# rpm -q nfs-utils rpcbind

[[email protected] ~]# mkdir /www

[[email protected] ~]# vi /etc/exports

/www    192.168.18.0/24(ro,sync,no_root_squash)

[[email protected] ~]# service rpcbind start

Starting rpcbind:                                          [ OK ]

[[email protected] ~]# service nfs start

Starting NFS services:                                     [ OK ]

Starting NFS mountd:                                       [ OK ]

Starting NFS daemon:                                       [ OK ]

Starting RPC idmapd:                                       [ OK ]

[[email protected] ~]# showmount -e 192.168.18.34

Export list for 192.168.18.34:

/www 192.168.18.0/24

[[email protected] ~]# chkconfig rpcbind on

[[email protected] ~]# chkconfig nfs on

[[email protected] ~]# echo "<h1>ce shi ye</h1>">/www/index.html

 

Web server configuration:

web1

[[email protected] ~]# rpm -q httpd

httpd-2.2.15-29.el6.centos.x86_64

[[email protected] ~]# vi /etc/httpd/conf/httpd.conf

[[email protected] ~]# yum -y install nfs-utils

[[email protected] ~]# service httpd start

Starting httpd:                                            [ OK ]

[[email protected] ~]# mount 192.168.18.34:/www /var/www/html/

[[email protected] ~]# df -h

Filesystem                  Size  Used Avail Use% Mounted on

/dev/mapper/vg_doo-lv_root   18G  3.9G   13G  24% /

tmpfs                       383M     0  383M   0% /dev/shm

/dev/sda1                   485M   35M  426M   8% /boot

192.168.18.34:/www           18G  1.3G   16G   8% /var/www/html

[[email protected] ~]# vi /etc/fstab

192.168.18.34:/www      /var/www/html   nfs     defaults,_netdev        1       2

(the trailing 1 2 are the dump flag and the fsck pass number)
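To verify the fstab entry without rebooting, something like the following should do (a sketch; it unmounts the manual mount first so mount -a has work to do):

umount /var/www/html
mount -a
df -h | grep /var/www/html              # the NFS export should be mounted again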

 

web2: (same as web1)

LVS-DR deployment

ipvsadm option reference (combined into a short example after the list):

-A  add a virtual server

-D  delete a virtual server

-C  clear all configuration entries

-E  edit a virtual server

-L / -l  list the configuration

-n  numeric output, do not resolve names

-c  display current IPVS connections

-a  add a real server

-d  delete a real server (node)

-t  specify the VIP address and TCP port

-s  specify the scheduling algorithm: rr|wrr|lc|wlc|lblc|lblcr|dh|sh|sed|nq (default wlc)

-m  NAT mode

-g  DR mode

-i  TUN mode

-w  set the weight (a weight of 0 pauses the node)

--help  show help
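As a quick illustration of how these options combine (a sketch only, using the same addresses as the deployment below and arbitrary weights):

ipvsadm -A -t 192.168.18.41:80 -s wrr                        # virtual server, weighted round-robin
ipvsadm -a -t 192.168.18.41:80 -r 192.168.18.32:80 -g -w 2   # real server, DR mode, weight 2
ipvsadm -a -t 192.168.18.41:80 -r 192.168.18.33:80 -g -w 1
ipvsadm -d -t 192.168.18.41:80 -r 192.168.18.33:80           # remove one node again
ipvsadm -L -n                                                # list the table, numeric
ipvsadm -C                                                   # clear everything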

 

BL:

[[email protected] ~]# modprobe ip_vs    // load the ip_vs kernel module

[[email protected] ~]# yum -y install ipvsadm

[[email protected] ~]# service ipvsadm stop

ipvsadm: Clearing the current IPVS table:                  [ OK ]

ipvsadm: Unloading modules:                                [ OK ]

[[email protected] ~]# ipvsadm -C

[[email protected] ~]# vi /opt/vip.sh

#!/bin/bash

# VIP

VIP="192.168.18.41"

/sbin/ifconfig eth1:vip $VIP broadcast $VIP netmask 255.255.255.255

/sbin/route add -host $VIP dev eth1:vip


 

[[email protected] ~]# chmod +x /opt/vip.sh

[[email protected] ~]# /opt/vip.sh

[[email protected] ~]# route -n

Kernel IP routing table

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface

192.168.18.41   0.0.0.0         255.255.255.255 UH    0      0        0 eth1

192.168.18.0    0.0.0.0         255.255.255.0   U     0      0        0 eth1

169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth1

0.0.0.0         192.168.18.2    0.0.0.0         UG    0      0        0 eth1

[[email protected] ~]# ipvsadm -A -t 192.168.18.41:80 -s rr

[[email protected] ~]# ipvsadm -a -t 192.168.18.41:80 -r 192.168.18.32:80 -g

[[email protected] ~]# ipvsadm -a -t 192.168.18.41:80 -r 192.168.18.33:80 -g

[[email protected] ~]# ipvsadm -ln

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

TCP  192.168.18.41:80 rr

  -> 192.168.18.32:80             Route   1      0          0         

  -> 192.168.18.33:80             Route   1      0          0    

[[email protected] ~]# ipvsadm-save >/etc/sysconfig/ipvsadm
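Optionally, so the saved rules come back after a reboot (a sketch, assuming the init script shipped with the ipvsadm package), or to reload them by hand:

chkconfig ipvsadm on                          # the init script loads /etc/sysconfig/ipvsadm at boot
ipvsadm-restore < /etc/sysconfig/ipvsadm      # manual reload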

 

 

On web1:

[[email protected] ~]# vi /opt/lvs-dr

#!/bin/bash

# lvs-dr

VIP="192.168.18.41"

/sbin/ifconfig lo:vip $VIP broadcast $VIP netmask 255.255.255.255

/sbin/route add -host $VIP dev lo:vip

echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore

echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce

echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore

echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce

 

[[email protected] ~]# chmod +x /opt/lvs-dr

[[email protected] ~]# /opt/lvs-dr

[[email protected] ~]# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet 192.168.18.41/32 brd 192.168.18.41 scope global lo:vip

 

[[email protected] ~]# scp /opt/lvs-dr 192.168.18.33:/opt

Run the same script on web2.
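Note that the lo:vip address and the arp_* settings made by the script are lost on reboot; a simple (hedged) way to make them persistent is to call the script from rc.local, and the same idea applies to /opt/vip.sh on the director:

echo "/opt/lvs-dr" >> /etc/rc.local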

 

On BL you can see how the connections have been distributed:

[[email protected] ~]# ipvsadm -lnc

IPVS connection entries

pro expire state       source             virtual            destination

TCP 14:59  ESTABLISHED 192.168.18.140:53385 192.168.18.41:80   192.168.18.33:80

TCP 01:24  FIN_WAIT    192.168.18.140:53380 192.168.18.41:80   192.168.18.32:80

TCP 01:58  FIN_WAIT    192.168.18.140:53387 192.168.18.41:80   192.168.18.33:80

TCP 01:24  FIN_WAIT    192.168.18.140:53379 192.168.18.41:80   192.168.18.33:80

TCP 01:25  FIN_WAIT    192.168.18.140:53382 192.168.18.41:80   192.168.18.32:80

 

Adding keepalived:

On BL2, the master (192.168.18.31):

Lab environment:

VIP:  192.168.18.41

BL1:  192.168.18.30 (backup)

BL2:  192.168.18.31 (master)

web1:  192.168.18.32

web2:  192.168.18.33

nfs:    192.168.18.34 (raid+LVM)

[[email protected] ~]# yum -y install keepalived

[[email protected] ~]# cd /etc/keepalived/

[[email protected] keepalived]# cp keepalived.conf keepalived.conf.bak

[[email protected] keepalived]# vi keepalived.conf

! Configuration File for keepalived

 

global_defs {

   notification_email {

[email protected]    // your alert mailbox

   }

   notification_email_from [email protected]    // sender address

   smtp_server ping.com.cn    // SMTP server address

   smtp_connect_timeout 30    // SMTP connection timeout (seconds)

   router_id LVS_DEVEL_BLM    // identifier used in alert mails; any string

}

 

vrrp_instance VI_1 {

    state MASTER

    interface eth1

    virtual_router_id 51    // route ID of this virtual IP; must match on master and backup

    priority 100    // priority, 1-254, higher wins

    advert_int 2    // advertisement / sync-check interval between MASTER and BACKUP (seconds)

    authentication {

        auth_type PASS    // authentication type: PASS or AH

        auth_pass 1111    // password, must be identical on master and backup

    }

    virtual_ipaddress {

        192.168.18.41    // the VIP; more than one may be listed

    }

}

 

virtual_server 192.168.18.41 80 {    // the VIP and service port

    delay_loop 2    // health-check interval (seconds)

    lb_algo rr    // round-robin scheduling

    lb_kind DR    // forwarding method: NAT, DR or TUN

!    nat_mask 255.255.255.0    // not using NAT, so commented out (same below)

!    persistence_timeout 300    // session persistence timeout

    protocol TCP

 

    real_server 192.168.18.32 80 {

        weight 1    // weight

        TCP_CHECK {    // real-server health check

            connect_timeout 10    // fail after 10 s without a response

            nb_get_retry 3    // number of retries

            delay_before_retry 3    // 3 s between retries

            connect_port 80    // port to check

        }

    }

    real_server 192.168.18.33 80 {

        weight 1

        TCP_CHECK {

            connect_timeout 10

            nb_get_retry 3

            delay_before_retry 3

            connect_port 80

        }

    }

 

}

 

[[email protected] keepalived]# service keepalived start

Starting keepalived:                                       [ OK ]

[[email protected] keepalived]# ipvsadm -ln

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

TCP  192.168.18.41:80 rr

  -> 192.168.18.32:80             Route   1      0          0         

  -> 192.168.18.33:80             Route   1      0          0      

On the backup, BL1 (192.168.18.30):

[[email protected] ~]# modprobe ip_vs

[[email protected] ~]# yum -y install keepalived ipvsadm

[[email protected] ~]# cd /etc/keepalived/

[[email protected] keepalived]# cp keepalived.conf keepalived.conf.bak

[[email protected] keepalived]# vi keepalived.conf

! Configuration File for keepalived

 

global_defs {

   notification_email {

[email protected]

   }

   notification_email_from [email protected]

   smtp_server ping.com.cn

   smtp_connect_timeout 30

   router_id LVS_DEVEL_BLM

}

 

vrrp_instance VI_1 {

    state BACKUP

    interface eth1

    virtual_router_id 51

    priority 99

    advert_int 2

    authentication {

        auth_type PASS

        auth_pass 1111

    }

    virtual_ipaddress {

192.168.18.41

    }

}

 

virtual_server 192.168.18.41 80 {

    delay_loop 2

    lb_algo rr

    lb_kind DR

!    nat_mask 255.255.255.0

!    persistence_timeout 300

    protocol TCP

 

    real_server 192.168.18.32 80 {

        weight 1

        TCP_CHECK {

            connect_timeout 10

            nb_get_retry 3

            delay_before_retry 3

            connect_port 80

        }

    }

    real_server 192.168.18.33 80 {

        weight 1

        TCP_CHECK {

            connect_timeout 10

            nb_get_retry 3

            delay_before_retry 3

            connect_port 80

        }

    }

 

}

 

[[email protected] keepalived]# ip a    // after starting keepalived here too: no VIP yet, which is correct for the backup

[[email protected] keepalived]# ipvsadm -ln

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

TCP  192.168.18.41:80 rr

  -> 192.168.18.32:80             Route   1      0          0         

  -> 192.168.18.33:80             Route   1      0          0

 

 

On web1:

[[email protected] ~]# service httpd stop

Stopping httpd:                                            [ OK ]

 

On BL: (the health check has removed the failed node from the table)

[[email protected] keepalived]# ipvsadm -ln

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

TCP  192.168.18.41:80 rr

  -> 192.168.18.33:80             Route   1      0          0      

 

On web1:

[[email protected] ~]# service httpd start

Starting httpd:                                            [ OK ]

 

On BL: (the node is added back once its health check succeeds again)

[[email protected] keepalived]# ipvsadm -ln

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

TCP  192.168.18.41:80 rr

  -> 192.168.18.32:80             Route   1      0          0         

  -> 192.168.18.33:80             Route   1      0          0         

 

 

High-availability test:

On the master BL:

[[email protected] keepalived]# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:9d:cf:f3 brd ff:ff:ff:ff:ff:ff

    inet 192.168.18.31/24 brd 192.168.18.255 scope global eth1

    inet 192.168.18.41/32 scope global eth1

    inet6 fe80::20c:29ff:fe9d:cff3/64 scope link

       valid_lft forever preferred_lft forever

 

[[email protected] keepalived]# service keepalived stop

Stopping keepalived:                                       [ OK ]

 

 

On the backup BL (it has taken over the VIP):

[[email protected] keepalived]# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:34:d3:f8 brd ff:ff:ff:ff:ff:ff

    inet 192.168.18.30/24 brd 192.168.18.255 scope global eth1

    inet 192.168.18.41/32 scope global eth1

    inet6 fe80::20c:29ff:fe34:d3f8/64 scope link

       valid_lft forever preferred_lft forever

 

[[email protected] keepalived]# ipvsadm -l -n

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

TCP  192.168.18.41:80 rr

  -> 192.168.18.32:80             Route   1      0          0         

  -> 192.168.18.33:80             Route   1      0          0  
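A hedged follow-up check: start keepalived on the master again and, with keepalived's default preemption behaviour, the VIP should move back to it within a couple of advert_int intervals:

service keepalived start            # on the master, 192.168.18.31
ip a | grep 192.168.18.41           # the VIP should reappear here and disappear from the backup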

 

 

 

NFS+raid5+LVM     192.168.18.34

On the NFS server:

[[email protected] ~]# fdisk -l|grep dev

Disk /dev/sda: 21.5 GB, 21474836480 bytes

/dev/sda1   *           1          64      512000   83  Linux

/dev/sda2              64        2611    20458496   8e  Linux LVM

Disk /dev/sdb: 2147 MB, 2147483648 bytes

Disk /dev/sdc: 2147 MB, 2147483648 bytes

Disk /dev/sdd: 2147 MB, 2147483648 bytes

Disk /dev/sde: 2147 MB, 2147483648 bytes

Disk /dev/mapper/vg_doo2-lv_root: 18.9 GB, 18865979392 bytes

Disk /dev/mapper/vg_doo2-lv_swap: 2080 MB, 2080374784 bytes

[[email protected] ~]# yum -y install parted

 

[[email protected] ~]# parted /dev/sdb

GNU Parted 2.1

Using /dev/sdb

Welcome to GNU Parted! Type 'help' to view a list of commands.

(parted) mklabel                                                          

New disk label type? gpt

(parted) mkpart                                                           

Partition name?  []? a

File system type?  [ext2]? ext3

Start? 1

End? -1

(parted) p                                                                

Model: VMware, VMware Virtual S (scsi)

Disk /dev/sdb: 2147MB

Sector size (logical/physical): 512B/512B

Partition Table: gpt

 

Number  Start   End     Size    File system  Name  Flags

 1      1049kB  2146MB  2145MB               a

 

(parted) q                                                                

Information: You may need to update /etc/fstab.

 

Repeat the same steps for /dev/sdc, /dev/sdd and /dev/sde.

 

[[email protected] ~]# yum -y install mdadm

[[email protected] ~]# mdadm -Cv /dev/md5 -a yes -n3 -x1 -l5 /dev/sd[b-e]1

mdadm: layout defaults to left-symmetric

mdadm: layout defaults to left-symmetric

mdadm: super1.x cannot open /dev/sdb1: Device or resource busy

mdadm: Cannot use /dev/sdb1: It is busy

mdadm: cannot open /dev/sdb1: Device or resource busy

[[email protected] ~]# mkfs.ext3  /dev/md5

[[email protected] ~]# mdadm -D -s >/etc/mdadm.conf

[[email protected] ~]# sed -i '1 s/$/ auto=yes/' /etc/mdadm.conf
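A quick sanity check on the new array before layering LVM on top (standard mdadm commands; the exact output will vary, and a freshly created RAID5 may still be resyncing):

cat /proc/mdstat                    # md5 should list raid5 with 3 active devices and 1 spare
mdadm -D /dev/md5                   # detailed state, resync progress, spare list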

 

LVM:

[[email protected] ~]# pvcreate /dev/md5

  Physical volume "/dev/md5" successfully created

[[email protected] ~]# vgcreate vg0 /dev/md5

  Volume group "vg0" successfully created

[[email protected] ~]# lvcreate -L 2G -n web vg0

  Logical volume "web" created

[[email protected] ~]# mkdir /web

[[email protected] ~]# mkfs.ext4 /dev/vg0/web

[[email protected] ~]# mount /dev/vg0/web /web/

[[email protected] ~]# echo "doo">/web/index.html

[[email protected] ~]# vim /etc/fstab

……

/dev/vg0/web /web ext4 defaults 1 2
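One benefit of putting the export on LVM is that it can be grown later without rebuilding anything; a sketch (the +1G is only an example and assumes vg0 still has free extents):

lvextend -L +1G /dev/vg0/web        # grow the logical volume
resize2fs /dev/vg0/web              # grow the ext4 filesystem to match (works online)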

 

[[email protected] ~]# vim /etc/exports

/web 192.168.18.0/24(rw,sync,no_root_squash)

 

[[email protected] ~]# /etc/init.d/rpcbind start

[[email protected] ~]# /etc/init.d/nfs start

[[email protected] ~]# showmount -e 192.168.18.34

Export list for 192.168.18.34:

/web 192.168.18.0/24

 

Mount it on web1 and web2:

[[email protected] ~]# vim /etc/fstab

……

192.168.18.34:/web /var/www/html nfs defaults,_netdev 1 2

[[email protected] ~]# yum -y install nfs-utils

[[email protected] ~]# mount 192.168.18.34:/web /var/www/html/

 

 

Client test:

Refresh the page a few times (or use the quick loop below); the connections should be spread across both real servers:
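For example, from any client on the 192.168.18.0/24 network (a sketch):

for i in $(seq 1 6); do curl -s http://192.168.18.41/; done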

[[email protected] keepalived]# ipvsadm -L -n -c

IPVS connection entries

pro expire state       source             virtual            destination

TCP 01:58  FIN_WAIT    192.168.18.140:53988 192.168.18.41:80   192.168.18.32:80

TCP 01:57  FIN_WAIT    192.168.18.140:53977 192.168.18.41:80   192.168.18.33:80

TCP 00:58  SYN_RECV    192.168.18.140:53998 192.168.18.41:80   192.168.18.33:80

TCP 01:57  FIN_WAIT    192.168.18.140:53981 192.168.18.41:80   192.168.18.32:80

TCP 01:57  FIN_WAIT    192.168.18.140:53974 192.168.18.41:80   192.168.18.32:80

TCP 01:57  FIN_WAIT    192.168.18.140:53976 192.168.18.41:80   192.168.18.32:80
