HAProxy high availability with keepalived

The keepalived model for haproxy high availability: a dual-master setup in which each node is MASTER for one VIP (172.16.58.100 / 172.16.58.200) and BACKUP for the other.

keepalived node 1

----------------------------------------------------------------

vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 1
    weight -2
}

# Manual maintenance switch: the track_script below references chk_mantaince_down,
# so it must be defined; touching /etc/keepalived/down forces a failover.
vrrp_script chk_mantaince_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 12345
    }
    virtual_ipaddress {
        172.16.58.100
    }
    track_script {
        chk_haproxy
        chk_mantaince_down
    }
    notify_master "/etc/keepalived/notify.sh master '172.16.58.100'"
    notify_backup "/etc/keepalived/notify.sh backup '172.16.58.100'"
    notify_fault "/etc/keepalived/notify.sh fault '172.16.58.100'"
}

vrrp_instance VI_2 {
    state BACKUP
    interface eth1
    virtual_router_id 55
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 54321
    }
    virtual_ipaddress {
        172.16.58.200
    }
    track_script {
        chk_haproxy
    }
    notify_master "/etc/keepalived/notify.sh master '172.16.58.200'"
    notify_backup "/etc/keepalived/notify.sh backup '172.16.58.200'"
    notify_fault "/etc/keepalived/notify.sh fault '172.16.58.200'"
}

----------------------------------------------------------------
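The failover mechanics behind `weight -2` are plain arithmetic: when `chk_haproxy` fails on the MASTER, keepalived subtracts 2 from its priority, dropping it from 100 to 98, below the BACKUP's 99, so VRRP elects the backup. A minimal sketch of that comparison, using the priority values from the configs above:

```shell
# VRRP effective-priority arithmetic when a tracked script fails
master_prio=100   # MASTER priority for VI_1
backup_prio=99    # BACKUP priority for VI_1
weight=-2         # weight of chk_haproxy

effective=$((master_prio + weight))
echo "master effective priority: $effective"   # prints 98
if [ "$effective" -lt "$backup_prio" ]; then
    echo "failover: backup takes over VI_1"
fi
```

Because the weight only subtracts 2, the two priorities must be within 2 of each other (100 vs 99 here) for the check to actually trigger a failover.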

keepalived node 2

----------------------------------------------------------------

vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 1
    weight -2
}

# Manual maintenance switch: the track_script below references chk_mantaince_down,
# so it must be defined; touching /etc/keepalived/down forces a failover.
vrrp_script chk_mantaince_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 12345
    }
    virtual_ipaddress {
        172.16.58.100
    }
    track_script {
        chk_haproxy
        chk_mantaince_down
    }
    notify_master "/etc/keepalived/notify.sh master '172.16.58.100'"
    notify_backup "/etc/keepalived/notify.sh backup '172.16.58.100'"
    notify_fault "/etc/keepalived/notify.sh fault '172.16.58.100'"
}

vrrp_instance VI_2 {
    state MASTER
    interface eth1
    virtual_router_id 55
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 54321
    }
    virtual_ipaddress {
        172.16.58.200
    }
    track_script {
        chk_haproxy
    }
    notify_master "/etc/keepalived/notify.sh master '172.16.58.200'"
    notify_backup "/etc/keepalived/notify.sh backup '172.16.58.200'"
    notify_fault "/etc/keepalived/notify.sh fault '172.16.58.200'"
}

----------------------------------------------------------------------------------

The notify script; each node carries an identical copy. It uses the notification hooks fired on keepalived state transitions to control starting and stopping the haproxy service.

----------------------------------------------------------------------------------

#!/bin/bash
#
vip=$2
contact='[email protected]'

notify() {
    mailsubject="$(hostname) to be $1: $vip floating"
    mailbody="$(date '+%F %H:%M:%S'): vrrp transition, $(hostname) changed to be $1"
    echo "$mailbody" | mail -s "$mailsubject" "$contact"
}

case "$1" in
master)
    notify master
    /etc/rc.d/init.d/haproxy start
    exit 0
    ;;
backup)
    notify backup
    # restart haproxy only if it is not already running
    killall -0 haproxy &> /dev/null || /etc/rc.d/init.d/haproxy restart
    exit 0
    ;;
fault)
    notify fault
    /etc/rc.d/init.d/haproxy stop
    exit 0
    ;;
*)
    echo "Usage: $(basename "$0") {master|backup|fault}"
    exit 1
    ;;
esac

----------------------------------------------------------------------------------
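Both `chk_haproxy` and the backup branch of the notify script rely on `killall -0 haproxy`. Signal 0 delivers nothing: the kernel only verifies that a matching process exists and may be signalled, so the exit status is a cheap liveness probe. The same idiom with a PID instead of a process name:

```shell
# signal 0 = existence check only; no signal is actually delivered
if kill -0 $$ 2>/dev/null; then
    echo "process $$ is alive"
fi
# a PID that cannot exist (above any realistic pid_max) fails the check
kill -0 99999999 2>/dev/null || echo "no such process"
```

This is why the notify script's backup branch can decide to restart haproxy only when the check fails.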

The haproxy configuration for static/dynamic separation. Both nodes provide the same service, so the configuration file is identical on both.

----------------------------------------------------------------------------------

global
    # send logs to the local rsyslog daemon
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    # maximum concurrent connections for a single process
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#==============================
# Defaults: inherited by any frontend/backend that does not override them
#==============================
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#==============================
# Frontend listening on port 80; ACLs filter on layer-7 request attributes
#==============================
frontend  web
    bind *:80
    acl status          path_beg       -i       /haproxy
    acl url_static      path_end       -i       .html .htm .jpg .gif .png .css .js .ico .xml
    redirect location http://172.16.58.2:8009/admin?stats if status
    use_backend static                 if url_static
    default_backend                    dynamic

#==============================
# Static servers: plain round-robin load balancing
#==============================
backend static
    balance roundrobin
    server node1 172.16.58.3:80 check
    server node2 172.16.58.4:80 check

#==============================
# Dynamic servers: source hashing on a consistent hash ring, for session persistence
#==============================
backend dynamic
    hash-type consistent
    balance source
    server node1 172.16.58.5:80 check
    server node2 172.16.58.6:80 check

#==============================
# Stats page bound to its own port
#==============================
listen status
    bind *:8009
    stats hide-version
    stats enable
    stats auth sun:google
    stats admin if TRUE
    stats uri /admin?stats

----------------------------------------------------------------------------------
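In the `dynamic` backend, `balance source` hashes the client's source IP so the same client keeps landing on the same server (a simple form of session persistence), and `hash-type consistent` maps the servers onto a hash ring so that adding or removing a server only remaps a fraction of clients. A rough sketch of the plain source-hash step, with hypothetical client IPs; this illustrates the idea, not HAProxy's actual hash function:

```shell
# map a client IP deterministically onto one of the dynamic servers
servers=("172.16.58.5:80" "172.16.58.6:80")

pick_server() {
    local ip="$1"
    # cksum yields a stable 32-bit hash of the IP string
    local h
    h=$(printf '%s' "$ip" | cksum | cut -d' ' -f1)
    echo "${servers[$((h % ${#servers[@]}))]}"
}

# the same client IP always maps to the same backend server
pick_server 172.16.58.90
pick_server 172.16.58.90
pick_server 172.16.58.91
```

With plain modulo hashing, changing the server count remaps almost every client; the consistent-hash ring that `hash-type consistent` builds is what keeps most mappings stable across server changes.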

Testing static-page round-robin via VIP1 (screenshot)

Testing static-page round-robin via VIP2 (screenshot)

Stats page (screenshot)

=======================================================================

Static-content sync script, run on node1: inotifywait watches the document root and rsync pushes every change to the other static server.

#!/bin/bash
inotifywait -mrq -e modify,delete,create,attrib /www/docs/node1/ | while read files; do
    rsync -vzrtopg --delete --progress --password-file=/etc/rsyncd.passwd /www/docs/node1/ [email protected]::web
done

=======================================================================

Date: 2024-08-02 06:45:47
