HA Cluster implementation options:
Implementations of the VRRP protocol:
keepalived
AIS (full-featured HA cluster stacks):
RHCS (cman)
heartbeat
corosync + pacemaker: corosync is the cluster infrastructure (messaging) engine, pacemaker is the cluster resource manager, and crmsh is a command-line management tool for pacemaker
stonith: Shoot The Other Node In The Head. A fencing mechanism: for example, via a managed power switch, the coordinating node can send a signal to cut power to the failed peer, forcing it cleanly out of service, so that a mistaken failure verdict cannot lead to resource contention (split brain) that would bring the cluster down.
keepalived: a software HA implementation, born for ipvs
VRRP protocol: Virtual Router Redundancy Protocol
Terminology:
Virtual router: Virtual Router
Virtual router identifier: VRID (0-255)
Physical routers:
master: the active device
backup: a standby device
priority: election priority
VIP: Virtual IP
VMAC: Virtual MAC
Gratuitous ARP
Advertisements: carry heartbeat, priority, etc.; sent periodically
Preemptive vs. non-preemptive modes
Security:
Authentication:
none, simple password, MD5
Working modes:
active/standby: a single virtual router
active/active: active/standby (virtual router 1) + standby/active (virtual router 2)
keepalived:
A software implementation of the VRRP protocol, originally designed to make the ipvs service highly available:
moves addresses (the VIP) between nodes via the VRRP protocol;
generates ipvs rules on the node currently holding the VIP (predefined in the configuration file);
performs health checks on each RS of the ipvs cluster;
through a script-call interface, executes scripts whose defined actions can influence cluster behavior.
Components:
Core components:
vrrp stack
checkers
ipvs wrapper
Control component: the configuration file parser
I/O multiplexer
Memory management component
Prerequisites for configuring an HA Cluster:
(1) Time must be synchronized across all nodes:
ntp, chrony. Note: configure via vim /etc/chrony.conf; chrony achieves better precision than ntpdate and converges faster. Apply with: systemctl restart chronyd.service
(2) Make sure iptables and SELinux do not get in the way.
(3) Nodes should be able to reach each other by hostname (not strictly required for keepalived).
Using the /etc/hosts file is recommended: ip node1.com node1...
exec bash --> starts a new bash process that replaces the current one, so freshly edited environment configuration files take effect.
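As a sketch, /etc/hosts kept identical on every node might look like this (the hostnames and addresses below are illustrative, not from the original setup):

```
# /etc/hosts -- same content on every cluster node (example entries)
172.18.0.11  node1.com  node1
172.18.0.12  node2.com  node2
```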
(4) Make sure the interfaces used for cluster service support MULTICAST communication; class D IPs, 224-239;
(5) Set up key-based ssh login between all cluster nodes:
ssh-keygen -t rsa -N '';
ssh-copy-id -i .ssh/id_rsa.pub [email protected] (copying to this very host makes even ssh-to-self passwordless); scp -rp .ssh/ [email protected]:/root/
keepalived installation and configuration:
On CentOS 7.4 it is provided by the base repository.
Program environment:
Main configuration file: /etc/keepalived/keepalived.conf
Main program file: /usr/sbin/keepalived
Unit file: keepalived.service
Environment configuration file for the unit file: /etc/sysconfig/keepalived
Configuration file sections:
TOP HIERARCHY
GLOBAL CONFIGURATION
Global definitions
Static routes/addresses
VRRPD CONFIGURATION
vrrp synchronization group(s): VRRP sync groups
vrrp instance(s): each vrrp instance is one VRRP virtual router
LVS CONFIGURATION
Virtual server group
Virtual server: the VS and RSs of an ipvs cluster
Single-master configuration example:
! Configuration File for keepalived
global_defs {
notification_email {
[email protected]
}
notification_email_from [email protected]
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node1
vrrp_mcast_group4 224.0.100.19
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 14
priority 98
advert_int 1
authentication {
auth_type PASS
auth_pass 2525fs
}
virtual_ipaddress {
172.18.0.100/16 dev ens33
# 172.18.0.101/16 dev ens33 label ens33:1
}
track_interface {
ens33
ens34
}
}
track_interface configures network interfaces to monitor; if a tracked interface fails, the instance transitions to the FAULT state.
nopreempt: sets the working mode to non-preemptive.
preempt_delay 300: in preemptive mode, the delay (in seconds) before a node that has come online triggers a new election.
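As a sketch, a non-preemptive instance reusing the values from the example above could look like this (note that nopreempt requires the initial state to be BACKUP on all nodes of the instance):

```
vrrp_instance VI_1 {
    state BACKUP          # nopreempt requires state BACKUP
    interface ens33
    virtual_router_id 14
    priority 100
    advert_int 1
    nopreempt             # do not take over even if our priority is higher than the current master's
    virtual_ipaddress {
        172.18.0.100/16 dev ens33
    }
}
```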
Defining notification scripts:
notify_master <STRING>: the string is passed as an argument; the script is triggered when this node becomes master.
notify_backup <STRING>: script triggered when this node becomes a backup.
notify_fault <STRING>: script triggered when this node enters the fault state.
notify <STRING>: generic notification hook; a single script can handle the notifications for all three state transitions.
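For completeness, the master-side peer of the single-master example above differs only in state and priority (everything else, including virtual_router_id and auth_pass, must match the backup node):

```
vrrp_instance VI_1 {
    state MASTER          # peer role
    interface ens33
    virtual_router_id 14  # must be identical on both nodes
    priority 100          # higher than the backup's 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2525fs  # must match the backup node
    }
    virtual_ipaddress {
        172.18.0.100/16 dev ens33
    }
}
```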
Dual-master (active/active) model example:
! Configuration File for keepalived
global_defs {
notification_email {
[email protected]
}
notification_email_from [email protected]
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node1
vrrp_mcast_group4 224.0.100.19 : use an IPv4 multicast group for VRRP advertisements. In, e.g., a one-master/multiple-backup setup, the heartbeat advertisements are sent to this group; on a network where only some hosts belong to the load-balancing cluster, a dedicated multicast group confines heartbeat traffic to the cluster members without disturbing other hosts.
}
vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 11
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 4234sdf
}
virtual_ipaddress {
172.18.0.100/16 dev ens33
}
}
vrrp_instance VI_2 {
state BACKUP
interface ens33
virtual_router_id 12
priority 98
advert_int 1
authentication {
auth_type PASS
auth_pass sg1234
}
virtual_ipaddress {
172.18.0.101/16 dev ens33 label ens33:0
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
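On the second node the two instances simply swap roles relative to node1 (a sketch; all ids, interfaces, and passwords must match node1's configuration):

```
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 11
    priority 98            # lower than node1's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 4234sdf
    }
    virtual_ipaddress {
        172.18.0.100/16 dev ens33
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface ens33
    virtual_router_id 12
    priority 100           # higher than node1's 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass sg1234
    }
    virtual_ipaddress {
        172.18.0.101/16 dev ens33 label ens33:0
    }
}
```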
Using the notification scripts:
Sample notification script:
#!/bin/bash
#
contact='[email protected]'
notify() {
    local mailsubject="$(hostname) to be $1, vip floating"
    local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
    echo "$mailbody" | mail -s "$mailsubject" $contact
}
case $1 in
master)
    notify master
    ;;
backup)
    notify backup
    systemctl restart nginx    # via this notify.sh status hook: when this HA node is detected to have become BACKUP, attempt a simple nginx restart as a repair step
    ;;
fault)
    notify fault
    ;;
*)
    echo "Usage: $(basename $0) {master|backup|fault}"
    exit 1
    ;;
esac
How the script is invoked:
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
Highly available ipvs cluster example:
! Configuration File for keepalived
global_defs {
notification_email {
[email protected]
}
notification_email_from [email protected]
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node1
vrrp_mcast_group4 224.0.100.19
}
vrrp_instance VI_1 {
state MASTER
interface eno16777736
virtual_router_id 14
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 571f97b2
}
virtual_ipaddress {
10.1.0.93/16 dev eno16777736
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
* Virtual server configuration
virtual_server 172.18.0.100 80 {
delay_loop 3
lb_algo rr
lb_kind DR
protocol TCP
sorry_server 127.0.0.1 80
real_server 10.1.0.69 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 1
nb_get_retry 3
delay_before_retry 1
}
}
}
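The virtual_server block above corresponds to ipvs rules that keepalived programs by itself while the node is up; for illustration only, creating the same rules by hand with ipvsadm would look roughly like this:

```
# equivalent manual rules (illustration; keepalived manages these automatically)
ipvsadm -A -t 172.18.0.100:80 -s rr                      # virtual service, round-robin scheduler
ipvsadm -a -t 172.18.0.100:80 -r 10.1.0.69:80 -g -w 1    # real server, DR mode (-g), weight 1
```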
A health-check block can also be defined standalone, e.g. TCP_CHECK:
TCP_CHECK {
nb_get_retry 3
delay_before_retry 2
connect_timeout 3
}
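Plugged into a real_server definition, the standalone TCP_CHECK block replaces HTTP_GET when a bare TCP connect is a sufficient health test (the address and weight are reused from the example above):

```
real_server 10.1.0.69 80 {
    weight 1
    TCP_CHECK {
        connect_timeout 3
        nb_get_retry 3
        delay_before_retry 2
    }
}
```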
keepalived can call external helper scripts to monitor a resource, and dynamically adjust the node's priority based on the check result:
Two steps: (1) define a script; (2) call the script.
vrrp_script chk_down {
    script "killall -0 nginx && exit 0 || exit 1"
    interval 1
    weight -5    * if the health check fails, priority is reduced by 5 (demotes the node)
    fall 2
    rise 1
}
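Step (2), calling the script, is done with a track_script block inside the vrrp_instance, for example:

```
vrrp_instance VI_1 {
    # ...state/interface/priority as configured above...
    track_script {
        chk_down    # each failed check subtracts 5 from this node's priority (weight -5)
    }
}
```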
My own experiment and summary:
Building a highly available load-balancing cluster with Nginx + Keepalived
1. Environment plan:
Host                IP address          HTTP ports
nginx_master        172.18.252.221      16915, 16916
nginx_slave         172.18.252.222
tomcat_server_1     172.18.252.223
tomcat_server_2     172.18.252.224
tomcat_server_3     172.18.252.225
nginx_master VIP: 172.18.252.230
1. OS version: CentOS 6.5 x86_64
2. Kernel version: 2.6.32-504.el6.x86_64
3. nginx version: nginx-1.8.0-1.el6.ngx.x86_64
4. keepalived version: keepalived-1.2.19
Front end: two nginx + keepalived hosts. nginx reverse-proxies to the back-end tomcat cluster for load balancing; keepalived provides high availability for the cluster: when the primary nginx fails, the virtual IP automatically floats to the standby nginx server.
Each back-end tomcat host serves the application on two ports: 16915 and 16916.
2. Installation
Install nginx and keepalived on both front-end hosts.
1) Compile and install keepalived
# install build dependencies
yum install kernel-* gcc make openssl-*
# download keepalived-1.2.19.tar.gz
wget http://www.keepalived.org/software/keepalived-1.2.19.tar.gz
# unpack
tar xvzf keepalived-1.2.19.tar.gz
cd keepalived-1.2.19
# configure
./configure --sysconfdir=/etc --with-kernel-dir=/usr/src/kernels/2.6.32-504.el6.x86_64
# build and install
make -j 2 && make install
# check the keepalived version to verify the installation
keepalived -v
# enable start at boot
chkconfig keepalived on
2) Install nginx from RPM packages
Official nginx yum repository: /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx_repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
enabled=1
gpgcheck=0
Once the repo is set up, install directly:
yum -y install nginx
chkconfig nginx on
3. Configuration
1) The nginx configuration is identical on both front-end hosts
#vim /etc/nginx/conf.d/upstream.conf
upstream tomcatclu_16915 {
server 172.18.252.223:16915;
server 172.18.252.224:16915;
server 172.18.252.225:16915;
hash $remote_addr consistent;
}
upstream tomcatclu_16916 {
server 172.18.252.223:16916;
server 172.18.252.224:16916;
server 172.18.252.225:16916;
hash $remote_addr consistent;
}
#vim /etc/nginx/conf.d/server.conf
server {
listen 16915;
server_name www.magedu.com;
location / {
proxy_pass http://tomcatclu_16915;
}
location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
#allow the office network to access nginx_status
allow 192.168.252.0/24;
deny all;
}
}
server {
listen 16916;
server_name www.magedu.com;
location / {
proxy_pass http://tomcatclu_16916;
}
location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
#allow the office network to access nginx_status
allow 192.168.252.0/24;
deny all;
}
}
2) keepalived configuration on nginx_master
[root@nginx_master ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id nginx-ha1
}
vrrp_script check_nginx {
#script that checks nginx health (listed later in this post)
script "/data/script/check_nginx.sh"
#run every 2 seconds
interval 2
}
vrrp_instance VI_1 {
state MASTER
interface eth0
#virtual_router_id must be identical across one keepalived cluster; default 51
virtual_router_id 55
priority 100
advert_int 1
#non-preemptive: if a host in MASTER state already exists in the cluster, do not take over as MASTER even with a higher priority; this only needs to be set on the higher-priority host
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
#virtual IP
172.18.252.230/16
}
track_script {
check_nginx
}
track_interface {
eth0
eth1
}
}
3) keepalived configuration on nginx_slave
[root@nginx_slave ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id nginx-ha2
}
vrrp_script check_nginx {
script "/data/script/check_nginx.sh"
interval 2
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 55
#the backup gets a lower priority
priority 80
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.18.252.230/16
}
track_script {
check_nginx
}
track_interface {
eth0
eth1
}
}
4) Firewall settings
#allow multicast traffic through iptables:
iptables -I INPUT -d 224.0.0.18 -j ACCEPT
service iptables save
VRRP advertisements are sent via IP multicast; the multicast group 224.0.0.18 is the destination address of VRRP packets.
5) Deploy the nginx health-check script check_nginx.sh
The check script /data/script/check_nginx.sh:
#!/bin/bash
#check nginx server status
#
#nginx http ports
PORTS="16915 16916"
check_ports() {
    local mark
    for port in $PORTS; do
        nc -z 127.0.0.1 $port | grep -q succeeded
        [ "${PIPESTATUS[1]}" -eq 0 ] && mark=${mark}1
    done
    #empty mark: neither port is reachable
    #mark = 1:   one of the two ports is reachable
    #mark = 11:  both ports are reachable
    echo $mark
}
ret1=$(check_ports)
#if an nginx port is down, try restarting nginx once
if [ "$ret1" != "11" ]; then
    /sbin/service nginx stop
    /sbin/service nginx start
    sleep 1
    ret2=$(check_ports)
    #if a port is still down, nginx is unhealthy: stop keepalived so the VIP fails over
    [ "$ret2" != "11" ] && /etc/init.d/keepalived stop
fi
chmod +x /data/script/check_nginx.sh
Note: after nginx recovers, keepalived will not come back up on its own; write a script that brings keepalived up again, and run it from cron every minute.
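A minimal cron watchdog could look like the following sketch. The script name, the /data/script path, and the port list are assumptions carried over from check_nginx.sh; the service commands are CentOS 6 style, matching the rest of this setup:

```shell
#!/bin/bash
# keepalived_watchdog.sh -- hypothetical helper; run from cron every minute, e.g.:
# * * * * * /data/script/keepalived_watchdog.sh
PORTS="16915 16916"

# decide whether keepalived should be (re)started:
# $1 = nginx state ("up"/"down"), $2 = keepalived state ("up"/"down")
should_start_keepalived() {
    [ "$1" = "up" ] && [ "$2" = "down" ]
}

# nginx is "up" only if every configured port accepts a TCP connection
nginx_state() {
    local state=up
    for port in $PORTS; do
        nc -z 127.0.0.1 "$port" >/dev/null 2>&1 || state=down
    done
    echo $state
}

keepalived_state() {
    if /sbin/service keepalived status >/dev/null 2>&1; then
        echo up
    else
        echo down
    fi
}

# start keepalived only when nginx is healthy again but keepalived is stopped
if should_start_keepalived "$(nginx_state)" "$(keepalived_state)"; then
    /sbin/service keepalived start
fi
```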
6) Enable keepalived logging:
Edit /etc/sysconfig/keepalived:
KEEPALIVED_OPTIONS="-D -d -S 0"
Edit /etc/rsyslog.conf:
#append the following line at the end of the file
local0.* /var/log/keepalived.log
Restart rsyslog:
service rsyslog restart
With the above in place, keepalived writes its log to /var/log/keepalived.log
7) Start the services
#check the nginx configuration file first
nginx -t
#start the nginx service
service nginx start
#also start the keepalived service
service keepalived start
#after a moment, check whether the virtual IP sits on the nginx_master host
ip a l
4. Verification
Stop the keepalived service on the master node (or reboot it) while continuously pinging the virtual IP; after roughly one request-timeout interval the virtual IP floats over to the standby node.
Original post: http://blog.51cto.com/12947626/2124991