Building a Highly Available Cluster with Keepalived + HAProxy

Key concepts
How vrrp_script adjusts a node's priority (the weight algorithm)
A script defined in vrrp_script counts as successful when it returns 0; any other exit code counts as a failure.
When weight is positive, the weight is added to priority while the check succeeds; nothing is added when it fails.
MASTER's check fails:
failover happens when master priority < backup priority + weight.
MASTER's check succeeds:
the master stays master while master priority + weight > backup priority + weight.
When weight is negative, a successful check leaves priority unchanged; a failed check lowers it to priority - abs(weight).
MASTER's check fails:
failover happens when master priority - abs(weight) < backup priority.
MASTER's check succeeds:
the master stays master while master priority > backup priority.
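A worked example with the priorities used in the configurations below (MASTER 100, BACKUP 99, weight 2, both nodes tracking the same script): if chk_haproxy fails only on the MASTER, the MASTER stays at 100 while the BACKUP rises to 99 + 2 = 101, so 100 < 101 and the BACKUP takes over the VIP; if both checks succeed, the MASTER's 100 + 2 = 102 beats the BACKUP's 99 + 2 = 101 and the MASTER keeps the VIP.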

The configurations follow.

Node one:

vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {

notification_email {

[email protected]

}

notification_email_from [email protected]

smtp_connect_timeout 3

smtp_server 127.0.0.1

router_id Iptables

}

vrrp_script chk_maintaince_down {

script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"

interval 1

weight 2

}
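The chk_maintaince_down check gives you a manual maintenance switch: it exits 1 (failure) whenever /etc/keepalived/down exists, so the node loses the +2 bonus from this script. A minimal usage sketch, assuming HAProxy is healthy on both nodes so that this check is the only difference between them:

touch /etc/keepalived/down   # check starts failing, this node drops its +2 bonus and can yield MASTER
rm -f /etc/keepalived/down   # check succeeds again, normal priorities apply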

vrrp_script chk_haproxy {

script "killall -0 haproxy"

interval 1

weight 2

}
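killall -0 haproxy sends no signal at all; its exit status only reports whether a process named haproxy exists, which is exactly what vrrp_script needs. You can confirm the behaviour by hand:

killall -0 haproxy; echo $?   # prints 0 while haproxy is running, non-zero once it is stopped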

vrrp_instance VI_1 {

interface eth0

state MASTER

priority 100

virtual_router_id 125

garp_master_delay 1

authentication {

auth_type PASS

auth_pass 1e3459f77aba4ded

}

track_interface {

eth0

}

virtual_ipaddress {

10.16.37.198/22 dev eth0 label eth0:0

}

track_script {

chk_haproxy

}

notify_master "/etc/keepalived/notify.sh master 10.16.37.198"

notify_fault "/etc/keepalived/notify.sh fault 10.16.37.198"

}

vrrp_instance VI_2 {

interface eth0

state BACKUP

priority 99

virtual_router_id 126

garp_master_delay 1

authentication {

auth_type PASS

auth_pass 7615c4b7f518cede

}

track_interface {

eth0

}

virtual_ipaddress {

10.16.37.199/22 dev eth0 label eth0:1

}

track_script {

chk_haproxy

chk_maintaince_down

}

notify_master "/etc/keepalived/notify.sh master 10.16.37.199"

notify_backup "/etc/keepalived/notify.sh backup 10.16.37.199"

notify_fault "/etc/keepalived/notify.sh fault 10.16.37.199"

}

Node two: vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {

notification_email {

[email protected]

}

notification_email_from [email protected]

smtp_connect_timeout 3

smtp_server 127.0.0.1

router_id Iptables

}

vrrp_script chk_maintaince_down {

script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"

interval 1

weight 2

}

vrrp_script chk_haproxy {

script "killall -0 haproxy"

interval 1

weight 2

}

vrrp_instance VI_1 {

interface eth0

state BACKUP

priority 99

virtual_router_id 125

garp_master_delay 1

authentication {

auth_type PASS

auth_pass 1e3459f77aba4ded

}

track_interface {

eth0

}

virtual_ipaddress {

10.16.37.198/22 dev eth0 label eth0:1

}

track_script {

chk_haproxy

}

notify_master "/etc/keepalived/notify.sh master 10.16.37.198"

notify_fault "/etc/keepalived/notify.sh fault 10.16.37.198"

}

vrrp_instance VI_2 {

interface eth0

state MASTER

priority 100

virtual_router_id 126

garp_master_delay 1

authentication {

auth_type PASS

auth_pass 7615c4b7f518cede

}

track_interface {

eth0

}

virtual_ipaddress {

10.16.37.199/22 dev eth0 label eth0:0

}

track_script {

chk_haproxy

chk_maintaince_down

}

notify_master "/etc/keepalived/notify.sh master 10.16.37.199"

notify_backup "/etc/keepalived/notify.sh backup 10.16.37.199"

notify_fault "/etc/keepalived/notify.sh fault 10.16.37.199"

}
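With both nodes configured, start keepalived on each and confirm which node currently holds each VIP. A minimal verification sketch for this init-script based setup (the eth0:0 / eth0:1 labels come from the virtual_ipaddress blocks above):

service keepalived start
ip addr show eth0           # 10.16.37.198 and/or 10.16.37.199 should appear under their eth0:x labels
tail -f /var/log/messages   # watch the VRRP state transitions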

Notification script:

vi /etc/keepalived/notify.sh

#!/bin/bash

contact='[email protected]'

notify() {

mailsubject="`hostname` to be $1: $2 floating"

mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"

echo $mailbody | mail -s "$mailsubject" $contact

}

case "$1" in

master)

notify master $2

/etc/rc.d/init.d/haproxy restart

exit 0

;;

backup)

notify backup $2

exit 0

;;

fault)

notify fault $2

exit 0

;;

*)

echo "Usage: `basename $0` {master|backup|fault}"

exit 1

;;

esac
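Deploy this script to /etc/keepalived/notify.sh on both nodes, make it executable, and run it once by hand before trusting keepalived to call it (mail delivery needs a working local MTA behind the mail command; the address below is just the VIP that gets passed through as $2):

chmod +x /etc/keepalived/notify.sh
/etc/keepalived/notify.sh master 10.16.37.198   # should restart haproxy and send a notification mail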

HAProxy configuration:

vi /etc/haproxy/haproxy.cfg

#---------------------------------------------------------------------

# Example configuration for a possible web application.  See the

# full configuration options online.

#

#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt

#

#---------------------------------------------------------------------

#---------------------------------------------------------------------

# Global settings

#---------------------------------------------------------------------

global

# to have these messages end up in /var/log/haproxy.log you will

# need to:

#

# 1) configure syslog to accept network log events.  This is done

#    by adding the '-r' option to the SYSLOGD_OPTIONS in

#    /etc/sysconfig/syslog

#

# 2) configure local2 events to go to the /var/log/haproxy.log

#   file. A line like the following can be added to

#   /etc/sysconfig/syslog

#

#    local2.*                       /var/log/haproxy.log

#

log         127.0.0.1 local2

chroot      /var/lib/haproxy

pidfile     /var/run/haproxy.pid

maxconn     4000

user        haproxy

group       haproxy

daemon

# turn on stats unix socket

stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------

# common defaults that all the 'listen' and 'backend' sections will

# use if not designated in their block

#---------------------------------------------------------------------

defaults

mode                    http

log                     global

option                  httplog

option                  dontlognull

option http-server-close

option forwardfor       except 127.0.0.0/8

option                  redispatch

retries                 3

timeout http-request    10s

timeout queue           1m

timeout connect         10s

timeout client          1m

timeout server          1m

timeout http-keep-alive 10s

timeout check           10s

maxconn                 3000

#---------------------------------------------------------------------

# main frontend which proxys to the backends

#---------------------------------------------------------------------

#frontend  main *:5000

#    acl url_static       path_beg       -i /static /images /javascript /stylesheets

#    acl url_static       path_end       -i .jpg .gif .png .css .js

#

#    use_backend static          if url_static

#    default_backend             app

#---------------------------------------------------------------------

# static backend for serving up images, stylesheets and such

#---------------------------------------------------------------------

#backend static

#    balance     roundrobin

#    server      static 127.0.0.1:4331 check

#---------------------------------------------------------------------

# round robin balancing between the various backends

#---------------------------------------------------------------------

#backend app

#    balance     roundrobin

#    server  app1 127.0.0.1:5001 check

#    server  app2 127.0.0.1:5002 check

#    server  app3 127.0.0.1:5003 check

#    server  app4 127.0.0.1:5004 check

listen stats

mode http

bind 0.0.0.0:1080

stats enable

stats refresh 30s

maxconn 200

stats hide-version

stats uri     /haproxy-stats

stats realm   Haproxy\ Statistics

stats auth    admin:admin

stats admin if TRUE
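# Note: with the settings above, the statistics page is served on every
# local address at port 1080 under /haproxy-stats, protected only by the
# admin:admin credentials from stats auth; change them outside of a lab.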

frontend http-in

bind *:80

mode http

log global

option httpclose

option logasap

option dontlognull

capture request header Host len 20

capture request header Referer len 60

acl url_static path_beg -i /static /images /javascript /stylesheets

acl url_static path_end         -i .jpg .jpeg .gif .png .css .js .html

use_backend static_servers if url_static

default_backend dynamic_servers

backend static_servers

balance roundrobin

server imgsrv1 10.16.37.101:80 check maxconn 6000

server imgsrv2 10.16.37.94:80 check maxconn 6000

backend dynamic_servers

balance source

server websrv1 10.16.37.94:80 check maxconn 1000

server websrv2 10.16.37.101:80 check maxconn 1000
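Before letting keepalived restart HAProxy for you, it is worth validating the configuration file and starting the service by hand once (a minimal sketch for the init-script setup used above):

haproxy -c -f /etc/haproxy/haproxy.cfg   # syntax check only
/etc/rc.d/init.d/haproxy start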

Because the stats page listens on port 1080, that port has to be opened in iptables:

/sbin/iptables -I INPUT -p tcp --dport 1080 -j ACCEPT

/etc/rc.d/init.d/iptables save

service iptables restart
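With the port open you can check that the stats page answers through the VIP, using the credentials from the listen stats section (this assumes 10.16.37.198 is currently held by the node you are testing):

curl -u admin:admin http://10.16.37.198:1080/haproxy-stats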

vi /etc/selinux/config

Disable SELinux there (set SELINUX=disabled), then reboot.
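If a reboot is inconvenient, SELinux can also be put into permissive mode immediately and disabled persistently; a common sketch on CentOS 6, adjust to your own policy requirements:

setenforce 0                                                   # permissive mode, effective at once
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # persists across reboots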

The two web servers run Nginx backed by a dual-master MySQL setup, which keeps the web tier highly available: if one server goes down, the other takes over immediately.

Reference:

http://www.it165.net/admin/html/201405/2957.html
