Example: Configuring LVS High Availability with LVS+Keepalived

Here we build a highly available cluster based on the LVS-DR model:

Lab environment:
    vm1 LVS-DR1:
             eth0 172.16.3.2/16
             VIP: eth0:0 172.16.3.88
    vm2 LVS-DR2:
             eth0 172.16.3.3/16
    vm3 Server-web1:
            RIP1: eth0 172.16.3.1/16
            VIP:  lo:0 172.16.3.88/32
    vm4 Server-web2:
            RIP2: eth0 172.16.3.10/16
            VIP:  lo:0 172.16.3.88/32

Test machine (the physical host): IP 172.16.3.100
1. Configure vm3 (Server-web1):
      # ifconfig eth0 172.16.3.1/16 up                        # RIP1
      # echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
      # echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore      # there is only one NIC; lo:0 could be targeted instead
      # echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
      # echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
      # ifconfig lo:0 172.16.3.88 netmask 255.255.255.255 broadcast 172.16.3.88    # VIP
      # route add -host 172.16.3.88 dev lo:0
    Set web1's home page:
      # yum install nginx
      # echo "172.16.3.1" > /usr/share/nginx/html/index.html
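The per-command real-server setup above can be collected into a single script and reused on every real server. This is only a sketch: the script name `rs-vip.sh` and the VIP parameter are my own additions, and it must be run as root on a host whose public NIC is eth0.

```shell
#!/bin/bash
# Sketch: prepare a DR-mode real server (run as root).
# Usage: rs-vip.sh <VIP>    e.g.  rs-vip.sh 172.16.3.88
vip=${1:?usage: rs-vip.sh <VIP>}

# Suppress ARP replies/announcements for the VIP so that only
# the director answers ARP requests for it on the shared LAN.
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce

# Bind the VIP to lo:0 with a /32 mask, and force traffic to the
# VIP out through lo:0 so replies carry the VIP as source address.
ifconfig lo:0 "$vip" netmask 255.255.255.255 broadcast "$vip" up
route add -host "$vip" dev lo:0
```

Run it once per real server (vm3 and vm4 here); these settings do not survive a reboot unless persisted in sysctl.conf and the network scripts.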
2. Configure vm4 (Server-web2):
      # ifconfig eth0 172.16.3.10/16 up                       # RIP2
      # echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
      # echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore      # there is only one NIC; lo:0 could be targeted instead
      # echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
      # echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
      # ifconfig lo:0 172.16.3.88 netmask 255.255.255.255 broadcast 172.16.3.88    # VIP
      # route add -host 172.16.3.88 dev lo:0
    Set web2's home page:
      # yum install nginx
      # echo "172.16.3.10" > /usr/share/nginx/html/index.html
3. Configure vm1 (LVS-DR1):
      # yum install keepalived
      # vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   notification_email {
    [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
#   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 88
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ning
    }
    virtual_ipaddress {
    172.16.3.88
    }
}

virtual_server 172.16.3.88 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.0.0
#    persistence_timeout 50
    protocol TCP
    sorry_server 127.0.0.1 80
    real_server 172.16.3.1 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }

    real_server 172.16.3.10 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
Since we configured a sorry_server, we also install nginx on the DR node itself (to make testing easier):
  # yum install nginx
  # echo "172.16.3.2" > /usr/share/nginx/html/index.html
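Once keepalived is running on the director, the IPVS rules it generated can be inspected. A sketch of the verification steps (run as root; exact table formatting varies by keepalived/ipvsadm version):

```shell
# Start keepalived and confirm it claimed the VIP and the real servers.
service keepalived start

# On the MASTER node, the VIP should now be attached to eth0.
ip addr show eth0 | grep 172.16.3.88

# keepalived programs IPVS directly (no ipvsadm rules needed by hand),
# so ipvsadm is only used here for inspection:
ipvsadm -Ln
# Expect a TCP virtual service 172.16.3.88:80 scheduled rr, with
# real servers 172.16.3.1:80 and 172.16.3.10:80 in Route (DR) mode.
```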
4. Configure vm2 (LVS-DR2):
    # yum install keepalived
    # vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP               # the main difference from DR1
    interface eth0
    virtual_router_id 88
    priority 99                # lower priority than DR1
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ning
    }
    virtual_ipaddress {
        172.16.3.88
    }
}

virtual_server 172.16.3.88 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.0.0
#    persistence_timeout 50
    protocol TCP
    # sorry_server: when all real servers are offline, clients are served
    # this fallback page instead (here it points at the DR itself, but it
    # could just as well be another web server)
    sorry_server 127.0.0.1 80
    real_server 172.16.3.1 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }

    real_server 172.16.3.10 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}

Since we configured a sorry_server, we also install nginx on this DR node:
  # yum install nginx
  # echo "172.16.3.3" > /usr/share/nginx/html/index.html

5. Testing
    http://172.16.3.88
  (1) Test whether keepalived failover between the two directors works (not covered in detail here).
        Commands used:
        # service keepalived stop|start
        # ip addr show
  (2) Test round-robin scheduling:
        From the physical test machine, simply browse to http://172.16.3.88
  (3) Test the sorry_server:
        Stop all the web servers (vm3 and vm4):
        # service nginx stop
        Then browse to http://172.16.3.88 again
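The browser tests above can also be run from the test machine's command line. A sketch, assuming curl is installed and the VIP is reachable:

```shell
# With both real servers up, successive requests should alternate
# between the two backend pages, "172.16.3.1" and "172.16.3.10".
for i in 1 2 3 4; do
    curl -s http://172.16.3.88/
done

# With nginx stopped on vm3 and vm4, every request should instead
# return the sorry_server page published by the active director.
curl -s http://172.16.3.88/
```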
=====================================================================================
Dual-master model example:
Building on the setup above, these are the configuration files for an LVS-DR dual-master model.
The VIP1 address and the ARP-suppression (arp_ignore/arp_announce) settings configured earlier are not repeated here.
1. Add VIP2 on vm3:
    # ifconfig lo:1 172.16.3.188 netmask 255.255.255.255 broadcast 172.16.3.188 up
    # route add -host 172.16.3.188 dev lo:1
2. Add VIP2 on vm4:
    # ifconfig lo:1 172.16.3.188 netmask 255.255.255.255 broadcast 172.16.3.188 up
    # route add -host 172.16.3.188 dev lo:1
3. Configure vm1 (LVS-DR1):
    # cat keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
    [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
}

vrrp_instance VI_1 {               # vm1 is MASTER for this instance
    state MASTER
    interface eth0
    virtual_router_id 88
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ning
    }
    virtual_ipaddress {
        172.16.3.88                # VIP1
    }
}

vrrp_instance VI_2 {               # vm1 is BACKUP for this instance
    state BACKUP                   # note
    interface eth0
    virtual_router_id 90           # note: a different router id
    priority 99                    # note: lower priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ning1
    }
    virtual_ipaddress {
        172.16.3.188               # VIP2
    }
}

virtual_server 172.16.3.88 80 {            # VIP1 (vm1 is master for it)
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.0.0
#    persistence_timeout 50
    protocol TCP
    sorry_server 127.0.0.1 80
    real_server 172.16.3.1 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }

    real_server 172.16.3.10 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}

virtual_server 172.16.3.188 80 {           # VIP2 (vm2 is master for it)
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.0.0
#    persistence_timeout 50
    protocol TCP
    sorry_server 127.0.0.1 80
    real_server 172.16.3.1 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }

    real_server 172.16.3.10 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}

4. Configure vm2 (LVS-DR2):
    # cat keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
    [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
}

vrrp_instance VI_1 {
    state BACKUP                   # note: BACKUP for VIP1
    interface eth0
    virtual_router_id 88           # note: must match DR1
    priority 99                    # note: lower than DR1
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ning             # note: must match DR1
    }
    virtual_ipaddress {
        172.16.3.88                # note: VIP1
    }
}

vrrp_instance VI_2 {
    state MASTER                   # note: MASTER for VIP2
    interface eth0
    virtual_router_id 90           # note
    priority 100                   # note: higher than DR1
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ning1            # note
    }
    virtual_ipaddress {
        172.16.3.188               # note: VIP2
    }
}

virtual_server 172.16.3.88 80 {            # VIP1
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.0.0
#    persistence_timeout 50
    protocol TCP
    sorry_server 127.0.0.1 80
    real_server 172.16.3.1 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }

    real_server 172.16.3.10 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}

virtual_server 172.16.3.188 80 {           # VIP2
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.0.0
#    persistence_timeout 50
    protocol TCP
    sorry_server 127.0.0.1 80
    real_server 172.16.3.1 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }

    real_server 172.16.3.10 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
5. Testing
   (1) Check that the dual-master setup comes up (start keepalived on vm1 only):
      # service keepalived start
      # ip addr show
      3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:0c:29:d7:f7:9c brd ff:ff:ff:ff:ff:ff
        inet 172.16.3.2/16 brd 172.16.255.255 scope global eth0
        inet 172.16.3.88/32 scope global eth0                <-- VIP1
        inet 172.16.3.188/32 scope global eth0               <-- VIP2
        inet6 fe80::20c:29ff:fed7:f79c/64 scope link
           valid_lft forever preferred_lft forever
   (2) Now start keepalived on vm2 as well:
        # service keepalived start
        On vm1:
        # ip addr show
        3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:0c:29:d7:f7:9c brd ff:ff:ff:ff:ff:ff
        inet 172.16.3.2/16 brd 172.16.255.255 scope global eth0
        inet 172.16.3.88/32 scope global eth0                <-- VIP1
        inet6 fe80::20c:29ff:fed7:f79c/64 scope link
           valid_lft forever preferred_lft forever
        On vm2:
        # ip addr show
        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:0c:29:0b:35:6a brd ff:ff:ff:ff:ff:ff
        inet 172.16.3.3/16 brd 172.16.255.255 scope global eth0
        inet 172.16.3.188/32 scope global eth0               <-- VIP2
        inet6 fe80::20c:29ff:fe0b:356a/64 scope link
           valid_lft forever preferred_lft forever
   (3) Test the home pages.
        From the test machine, requests to either VIP are scheduled round-robin:
        http://172.16.3.88
        http://172.16.3.188
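The same command-line check works against both VIPs in the dual-master setup. A sketch, assuming curl is installed on the test machine; note that spreading clients across the two VIPs (e.g. via round-robin DNS A records for one hostname) is a deployment detail the original walkthrough does not cover:

```shell
# Both directors are active, each owning one VIP; each VIP still
# round-robins across the same two real servers, so each pair of
# requests should print "172.16.3.1" and "172.16.3.10".
for vip in 172.16.3.88 172.16.3.188; do
    echo "--- $vip ---"
    curl -s "http://$vip/"
    curl -s "http://$vip/"
done
```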

Date: 2024-10-08 08:16:02
