Mycat High-Availability Cluster Setup

HAProxy + Keepalived + Mycat high-availability cluster configuration

Deployment diagram:

Understanding the cluster deployment diagram:

1. Keepalived and HAProxy must be installed on the same machine (for example, on 192.168.46.161 both keepalived and haproxy are installed). Keepalived claims the VIP (virtual IP) for that server; once it holds the VIP, the host can be reached either through its real IP (192.168.46.161) or directly through the VIP (192.168.46.180).

2. The keepalived on 192.168.46.162 also tries to claim the VIP. Which node wins is decided by the priority setting in keepalived.conf (priority 150; the larger the value, the higher the priority; set it to 120 on 192.168.46.162, so the master and slave use different values). In practice, however, whichever host starts its keepalived service first usually grabs the VIP; even the slave will hold it if it starts first.

3. HAProxy distributes requests arriving at the VIP to the Mycat instances, providing load balancing. It also health-checks the Mycat instances and forwards requests only to the ones that are alive.

4. If one of the keepalived+haproxy servers goes down, keepalived on the other server immediately takes over the VIP and the service. If one Mycat server goes down, HAProxy stops forwarding to it, so Mycat remains available. A quick sketch of the two access paths is shown below.
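For illustration, the two access paths look like this from a MySQL client. The ports are the ones used throughout this guide (8066 on the Mycat servers, 18066 on the HAProxy frontend); the user name is a placeholder for whatever account is defined in Mycat's server.xml:

# direct access to a single Mycat instance (bypasses HAProxy)
mysql -h192.168.46.161 -P8066 -uroot -p

# access through the VIP: HAProxy load-balances across both Mycat instances
mysql -h192.168.46.180 -P18066 -uroot -p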

1. Installing HAProxy

1.1 Building and configuring HAProxy

(Download: http://www.haproxy.org/#down)

useradd haproxy

cd haproxy-1.4.27/

make TARGET=linux26 PREFIX=/usr/local/haproxy ARCH=x86_64

make install PREFIX=/usr/local/haproxy
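A quick check that the build and install succeeded (the exact version string depends on the tarball you downloaded):

/usr/local/haproxy/sbin/haproxy -v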

cd /usr/local/haproxy

vi haproxy.cfg

Add the following content:

global
    log 127.0.0.1 local0
    maxconn 4096
    chroot /usr/local/haproxy
    user haproxy
    group haproxy
    daemon

defaults
    log global
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

listen admin_stats 192.168.46.180:48800
    stats uri /admin-status
    stats auth admin:admin
    mode http
    option httplog

listen mycat_service 192.168.46.180:18066
    mode tcp
    option tcplog
    option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
    balance roundrobin
    server mycat_161 192.168.46.161:8066 check port 48700 inter 5s rise 2 fall 3
    server mycat_162 192.168.46.162:8066 check port 48700 inter 5s rise 2 fall 3
    srvtimeout 20000

listen mycat_admin 192.168.46.180:19066
    mode tcp
    option tcplog
    option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
    balance roundrobin
    server mycat_161 192.168.46.161:9066 check port 48700 inter 5s rise 2 fall 3
    server mycat_162 192.168.46.162:9066 check port 48700 inter 5s rise 2 fall 3
    srvtimeout 20000
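Before starting HAProxy it is worth letting it validate the file; in check mode (-c) HAProxy parses the configuration and exits without starting any proxies:

/usr/local/haproxy/sbin/haproxy -c -f /usr/local/haproxy/haproxy.cfg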

1.2 Configuring HAProxy logging

yum install rsyslog -y

cd /etc/rsyslog.d/

vi haproxy.conf

Add the following:

$ModLoad imudp

$UDPServerRun 514

local0.* /var/log/haproxy.log

vi /etc/rsyslog.conf

One line above the #### RULES #### section, add the following:

# Include all config files in /etc/rsyslog.d/

$IncludeConfig /etc/rsyslog.d/*.conf

Below the line local7.* /var/log/boot.log, add the following:

local0.* /var/log/haproxy.log

Restart rsyslog and add it to the services started at boot:

service rsyslog restart

chkconfig --add rsyslog

chkconfig --level 2345 rsyslog on
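To verify that rsyslog is now routing the local0 facility to the HAProxy log file, a test message can be sent by hand (the message text is arbitrary):

logger -p local0.info "haproxy rsyslog test"
tail -n 5 /var/log/haproxy.log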

1.3 Configuring the Mycat liveness check (a script listening on check port 48700 must be added on both Mycat server1 and Mycat server2; this relies on xinetd, a basic Linux system service)

yum install xinetd -y

cd /etc/xinetd.d

vi mycat_status

Add the following:

service mycat_status
{
    flags           = REUSE
    socket_type     = stream
    port            = 48700
    wait            = no
    user            = nobody
    server          = /usr/local/bin/mycat_status
    log_on_failure  += USERID
    disable         = no
}

vi /usr/local/bin/mycat_status (create the script that xinetd will run)

Add the following:

#!/bin/bash
# /usr/local/bin/mycat_status
# This script checks whether the Mycat server on localhost is healthy. It returns:
#   "HTTP/1.1 200 OK\r\n"                  if Mycat is running
#   "HTTP/1.1 503 Service Unavailable\r\n" otherwise
mycat=`/usr/local/mycat/bin/mycat status | grep 'not running' | wc -l`
if [ "$mycat" = "0" ]; then
    /bin/echo -e "HTTP/1.1 200 OK\r\n"
else
    /bin/echo -e "HTTP/1.1 503 Service Unavailable\r\n"
fi

Make the script files executable:

chmod 777 /usr/local/bin/mycat_status

chmod 777 /etc/xinetd.d/mycat_status

Register the check port as a service:

vi /etc/services

Append at the end:

mycat_status 48700/tcp # mycat_status

Restart xinetd and add it to the services started at boot:

service xinetd restart

chkconfig --add xinetd

chkconfig --level 2345 xinetd on

Verify that the mycat_status service started successfully:

netstat -antup|grep 48700
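The health-check port can also be tested directly; connecting to 48700 should immediately return the HTTP status line produced by the script (nc is used here, telnet works just as well):

nc 127.0.0.1 48700
# expected output while Mycat is running:
# HTTP/1.1 200 OK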

1.4 Creating HAProxy start/stop scripts

1.4.1 Start script

vi  /usr/local/haproxy/sbin/start

Add the following:

#!/bin/sh

/usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfg &

Give it execute permission:

chmod +x /usr/local/haproxy/sbin/start

1.4.2 Stop script

vi /usr/local/haproxy/sbin/stop

Add the following:

#!/bin/sh
ps -ef | grep sbin/haproxy | grep -v grep | awk '{print $2}' | xargs kill -s 9

Give it execute permission:

chmod +x /usr/local/haproxy/sbin/stop

1.4.3 Ownership

chown -R haproxy.haproxy /usr/local/haproxy/*

1.5 Starting HAProxy

Keepalived must be started before HAProxy; otherwise HAProxy will fail to start (it cannot bind to the VIP).

Start command:

/usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfg

HAProxy startup errors

If you see errors like the following:

# /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfg
[ALERT] 183/115915 (12890) : Starting proxy admin_stats: cannot bind socket
[ALERT] 183/115915 (12890) : Starting proxy mycat_service: cannot bind socket
[ALERT] 183/115915 (12890) : Starting proxy mycat_admin: cannot bind socket

The cause: this machine does not currently hold the VIP. If HAProxy starts normally on the other machine, the error can be ignored. If it fails on both machines, ping the VIP to check whether it is up; if it is not, keepalived did not start successfully, so go back and troubleshoot keepalived first. Once HAProxy is running, its status page is available at http://192.168.46.180:48800/admin-status (username and password are both admin, as configured in haproxy.cfg).
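To confirm the state on the node that should be active, a couple of quick checks (the ping target is the VIP used throughout this guide):

ping -c 3 192.168.46.180          # the VIP should answer once keepalived holds it
netstat -antup | grep haproxy     # haproxy should be listening on 48800, 18066 and 19066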

2. Installing Keepalived

2.1 Installing OpenSSL

(Download: https://www.openssl.org/source/)

OpenSSL must be installed first; keepalived depends on it and will not compile without it.

tar -zxvf openssl-1.0.2l.tar.gz

cd openssl-1.0.2l

./config --prefix=/usr/local/openssl

./config -t

make depend

make

make test

make install

ln -s /usr/local/openssl /usr/local/ssl

vi /etc/ld.so.conf

Append the following at the end of the file:

/usr/local/openssl/lib

Update the environment variables:

vi /etc/profile

Append the following at the end of the file:

export OPENSSL=/usr/local/openssl/bin

export PATH=$PATH:$OPENSSL

source /etc/profile

Install openssl-devel:

yum install openssl-devel -y

Test:

ldd /usr/local/openssl/bin/openssl

linux-vdso.so.1 => (0x00007fff996b9000)

libdl.so.2 =>/lib64/libdl.so.2 (0x00000030efc00000)

libc.so.6 =>/lib64/libc.so.6 (0x00000030f0000000)

/lib64/ld-linux-x86-64.so.2 (0x00000030ef800000)

which openssl

/usr/bin/openssl

openssl version

OpenSSL 1.0.0-fips 29 Mar 2010

2.2 Installing Keepalived

Install keepalived on both machines, 192.168.46.161 and 192.168.46.162:

tar zxvf keepalived-1.2.13.tar.gz

cd keepalived-1.2.13

./configure --prefix=/usr/local/keepalived

make

make install

cp /usr/local/keepalived/sbin/keepalived /usr/sbin/

cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/

cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/

mkdir /etc/keepalived

cd /etc/keepalived/

cp /usr/local/keepalived/etc/keepalived/keepalived.conf  /etc/keepalived

mkdir -p /usr/local/keepalived/var/log
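As a quick sanity check that the build and the copy steps above worked, the binary should now be on the PATH and report its version:

keepalived -v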

Keepalived configuration

Create the directory that will hold the HAProxy health-check scripts:

mkdir /etc/keepalived/scripts

cd /etc/keepalived/scripts

vi /etc/keepalived/keepalived.conf

Master configuration:

! Configuration File for keepalived

vrrp_script chk_http_port {
    script "/etc/keepalived/scripts/check_haproxy.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_http_port
    }
    virtual_ipaddress {
        192.168.46.180 dev eth1 scope global
    }
    notify_master /etc/keepalived/scripts/haproxy_master.sh
    notify_backup /etc/keepalived/scripts/haproxy_backup.sh
    notify_fault  /etc/keepalived/scripts/haproxy_fault.sh
    notify_stop   /etc/keepalived/scripts/haproxy_stop.sh
}

Slave (backup) configuration:

! Configuration File for keepalived

vrrp_script chk_http_port {
    script "/etc/keepalived/scripts/check_haproxy.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 51
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_http_port
    }
    virtual_ipaddress {
        192.168.46.180 dev eth1 scope global
    }
    notify_master /etc/keepalived/scripts/haproxy_master.sh
    notify_backup /etc/keepalived/scripts/haproxy_backup.sh
    notify_fault  /etc/keepalived/scripts/haproxy_fault.sh
    notify_stop   /etc/keepalived/scripts/haproxy_stop.sh
}

1. virtual_router_id 51 identifies one VRRP group. If there is another cluster on the same network segment, give it a different id (for example 52 or 53) to keep the groups separate.

2. The eth1 in interface eth1 and in 192.168.46.180 dev eth1 scope global is the network interface. A multi-NIC host may have eth0, eth1, eth2, and so on; use ifconfig (or ip addr) to check which interface actually carries this host's address (see the example below). Some servers have only one NIC that has been renamed from eth0 to eth1; writing eth0 there would point at a non-existent interface.
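To confirm which interface name to use, list the interfaces and their addresses on each host; the interface carrying the 192.168.46.x address is the one to put in interface and dev:

ip addr show
# or, on older systems:
ifconfig -a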

vi /etc/keepalived/scripts/check_haproxy.sh

#!/bin/bash
# Health-check script run by keepalived: if haproxy is not running, try to start it.
# Exit 0 if haproxy ends up running, exit 1 if it could not be started.
STARTHAPROXY="/usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfg"
STOPKEEPALIVED="/etc/init.d/keepalived stop"   # defined but not used in this version of the script
LOGFILE="/usr/local/keepalived/var/log/keepalived-haproxy-state.log"
echo "[check_haproxy status]" >> $LOGFILE
date >> $LOGFILE
A=`ps -C haproxy --no-header | wc -l`
if [ $A -eq 0 ]; then
    echo $STARTHAPROXY >> $LOGFILE
    $STARTHAPROXY >> $LOGFILE 2>&1
    sleep 5
fi
if [ `ps -C haproxy --no-header | wc -l` -eq 0 ]; then
    exit 1
else
    exit 0
fi
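The check script can be run by hand before keepalived uses it; with the logic above, an exit status of 0 means HAProxy is running (possibly after being restarted by the script) and 1 means it could not be started:

bash /etc/keepalived/scripts/check_haproxy.sh
echo $?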

vi /etc/keepalived/scripts/haproxy_master.sh

#!/bin/bash
# Runs when this node becomes MASTER: kill any running haproxy, then start it again
# so that it binds to the VIP this node has just taken over.
LOGFILE="/usr/local/keepalived/var/log/keepalived-haproxy-state.log"
echo "[master]" >> $LOGFILE
date >> $LOGFILE
echo "Becoming master ..." >> $LOGFILE
echo "stop haproxy ..." >> $LOGFILE
ps -ef | grep sbin/haproxy | grep -v grep | awk '{print $2}' | xargs kill -s 9 >> $LOGFILE 2>&1
echo "start haproxy ..." >> $LOGFILE
/usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfg >> $LOGFILE 2>&1
echo "haproxy started ..." >> $LOGFILE

vi /etc/keepalived/scripts/haproxy_backup.sh

#!/bin/bash
# Runs when this node becomes BACKUP: kill and restart haproxy and log the state change.
LOGFILE="/usr/local/keepalived/var/log/keepalived-haproxy-state.log"
echo "[backup]" >> $LOGFILE
date >> $LOGFILE
echo "Becoming backup ..." >> $LOGFILE
echo "stop haproxy ..." >> $LOGFILE
ps -ef | grep sbin/haproxy | grep -v grep | awk '{print $2}' | xargs kill -s 9 >> $LOGFILE 2>&1
echo "start haproxy ..." >> $LOGFILE
/usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfg >> $LOGFILE 2>&1
echo "haproxy started ..." >> $LOGFILE

vi /etc/keepalived/scripts/haproxy_fault.sh

#!/bin/bash
LOGFILE=/usr/local/keepalived/var/log/keepalived-haproxy-state.log
echo "[fault]" >> $LOGFILE
date >> $LOGFILE

vi /etc/keepalived/scripts/haproxy_stop.sh

#!/bin/bash
LOGFILE=/usr/local/keepalived/var/log/keepalived-haproxy-state.log
echo "[stop]" >> $LOGFILE
date >> $LOGFILE

Make the scripts executable:

chmod 777 /etc/keepalived/scripts/*

Add keepalived to the services started at boot and start it:

chkconfig --add keepalived

chkconfig --level 2345 keepalived on

service keepalived start
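After keepalived is started on both nodes, the node with the higher priority (the master) should hold the VIP. A quick way to confirm this on each node (on CentOS, keepalived logs its VRRP state transitions to /var/log/messages):

ip addr show eth1 | grep 192.168.46.180      # should be listed on the master only
grep -i keepalived /var/log/messages | tail -n 20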

3. Setup complete

If clients can connect to Mycat through the VIP, and the VIP fails over when the master node goes down, the setup is complete. A simple failover check is sketched below.
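A minimal failover test, assuming the service and interface names used above and placeholder MySQL credentials:

# on the current master (e.g. 192.168.46.161): stop keepalived to simulate a failure
service keepalived stop

# on the backup (192.168.46.162): the VIP should now be bound to eth1
ip addr show eth1 | grep 192.168.46.180

# from any client: connections through the VIP should still succeed
mysql -h192.168.46.180 -P18066 -uroot -p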
