haproxy+pacemaker

[root@server1 ~]# yum install rpm-build    # install the rpm build tool (extra build dependencies may be needed; follow the errors)
[root@server1 ~]# rpmbuild -tb haproxy-1.6.11.tar.gz    # build the rpm straight from the tarball
[root@server1 ~]# cd rpmbuild/
BUILD/ BUILDROOT/ RPMS/ SOURCES/ SPECS/ SRPMS/
[root@server1 ~]# cd rpmbuild/RPMS/
[root@server1 RPMS]# ls
x86_64
[root@server1 RPMS]# cd x86_64/
[root@server1 x86_64]# rpm -qpl haproxy-1.6.11-1.x86_64.rpm    # list the files the package will install
/etc/haproxy
/etc/rc.d/init.d/haproxy
/usr/sbin/haproxy
/usr/share/doc/haproxy-1.6.11
/usr/share/doc/haproxy-1.6.11/CHANGELOG
/usr/share/doc/haproxy-1.6.11/README
/usr/share/doc/haproxy-1.6.11/architecture.txt
/usr/share/doc/haproxy-1.6.11/configuration.txt
/usr/share/doc/haproxy-1.6.11/intro.txt
/usr/share/doc/haproxy-1.6.11/management.txt
/usr/share/doc/haproxy-1.6.11/proxy-protocol.txt
/usr/share/man/man1/haproxy.1.gz
[root@server1 x86_64]# rpm -ivh haproxy-1.6.11-1.x86_64.rpm    # install
Preparing...                ########################################### [100%]
   1:haproxy                ########################################### [100%]
[root@server1 x86_64]# cd
[root@server1 ~]# tar zxf haproxy-1.6.11.tar.gz
[root@server1 ~]# cd haproxy-1.6.11
[root@server1 haproxy-1.6.11]# ls
CHANGELOG doc include Makefile src VERDATE
contrib ebtree LICENSE README SUBVERS VERSION
CONTRIBUTING examples MAINTAINERS ROADMAP tests
[root@server1 haproxy-1.6.11]# find -name *.spec    # rpmbuild -tb works because the source tarball ships this spec file
./examples/haproxy.spec
[root@server1 haproxy-1.6.11]# cd examples/
[root@server1 examples]# cp content-sw-sample.cfg /etc/haproxy/haproxy.cfg
[root@server1 examples]# cd /etc/haproxy/
[root@server1 haproxy]# ls
haproxy.cfg
[root@server1 haproxy]# grep 200 /etc/passwd    # confirm uid/gid 200 is still unused
[root@server1 haproxy]# groupadd -g 200 haproxy
[root@server1 haproxy]# useradd -u 200 -g 200 -M haproxy
[root@server1 haproxy]# vim /etc/security/limits.conf
Append at the end:
haproxy - nofile 10000
[root@server1 haproxy]# vim haproxy.cfg

default_backend static

# The static backend, for 'Host: img', /img and /css.

backend static
    balance roundrobin
    server statsrv1 172.25.135.2:80 check inter 1000
    server statsrv2 172.25.135.3:80 check inter 1000
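The stanza above is only a fragment of the copied content-sw-sample.cfg. For orientation, a minimal standalone haproxy.cfg in the same spirit might look like the sketch below — the `defaults` timeouts and the stats path are assumptions, not copied from the source:

```haproxy
global
    maxconn 10000
    daemon
    user    haproxy
    group   haproxy

defaults
    mode    http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend public
    bind            *:80
    stats           uri /admin/stats
    default_backend static

backend static
    balance roundrobin
    server  statsrv1 172.25.135.2:80 check inter 1000
    server  statsrv2 172.25.135.3:80 check inter 1000
```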
[root@server1 haproxy]# /etc/init.d/haproxy start
Test the stats page in a browser:
172.25.135.1/admin/stats
Install PHP on the backend servers to serve dynamic content:
[root@server2 ~]# yum install php -y
[root@server2 ~]# vim /var/www/html/index.php
<?php
phpinfo();
?>

Test: 172.25.135.1/index.php
[root@server1 haproxy]# vim haproxy.cfg


[root@server1 haproxy]# yum install httpd
[root@server1 haproxy]# vim /etc/httpd/conf/httpd.conf    # change Listen to 8080 so haproxy can keep port 80
[root@server1 haproxy]# /etc/init.d/httpd restart
Test: 172.25.135.1:8080
[root@server1 haproxy]# vim haproxy.cfg


Test: 172.25.135.1/index.php    # now redirects automatically to baidu
[root@server1 haproxy]# vim haproxy.cfg
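The haproxy.cfg edit that produces this redirect is not preserved above. One minimal sketch that yields the observed behavior, assuming the redirect keys off the .php path (the ACL name is invented for illustration):

```haproxy
frontend public
    bind *:80
    # requests for PHP pages are sent away instead of being served
    acl dynamic path_end .php
    redirect location http://www.baidu.com if dynamic
    default_backend static
```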

Install PHP on server2 and server3 and download the upload demo app:
[root@server2 ~]# mv upload/ /var/www/html/
[root@server2 html]# chmod 777 upload/
[root@server2 upload]# mv * ..
[root@server2 html]# vim upload_file.php    # raise the default allowed image size to something suitable
[root@server2 html]# /etc/init.d/httpd restart    # same steps on server3
Browse to: 172.25.135.1/index.php

After uploading an image, server2 has no file while server3 has the uploaded file, which demonstrates the read/write split.
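The haproxy.cfg change behind this split is likewise not shown in the source. A minimal sketch that sends write requests (uploads) to server3 and everything else to server2 — the backend names and the method-based ACL are assumptions:

```haproxy
frontend public
    bind *:80
    # treat POST/PUT as writes and route them to the dynamic backend
    acl write method POST PUT
    use_backend dynamic if write
    default_backend static

backend static
    server statsrv1 172.25.135.2:80 check inter 1000

backend dynamic
    server statsrv2 172.25.135.3:80 check inter 1000
```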
[root@server2 html]# cd upload
[root@server2 upload]# ls
[root@server3 html]# cd upload
[root@server3 upload]# ls
iso7.gif
Send the rpm and the configuration files from server1 to server4:

[root@server1 x86_64]# scp haproxy-1.6.11-1.x86_64.rpm root@server4:~/

[root@server1 haproxy]# scp haproxy.cfg root@server4:/etc/haproxy/
[root@server1 security]# pwd
/etc/security
[root@server1 security]# scp limits.conf root@server4:/etc/security/

[root@server4 ~]# rpm -ivh haproxy-1.6.11-1.x86_64.rpm    # install
Create the user, edit the files, and start the service exactly as on server1.
[root@server4 ~]# yum install pacemaker corosync -y    # install the cluster stack

[root@server1 ~]# yum install pacemaker corosync -y    # install the cluster stack
[root@server1 ~]# vim /etc/corosync/corosync.conf
totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 172.25.135.0
                mcastaddr: 226.94.1.135    # pick per site: clusters sharing a network must not share a multicast address
                mcastport: 5405
                ttl: 1
        }
}
Append at the end:
service {
        name: pacemaker
        ver: 0    # ver: 0 means corosync starts pacemaker itself
}
[root@server1 corosync]# scp corosync.conf server4:/etc/corosync/    # copy to server4
Start the service on server1 and server4:
[root@server1 ~]# /etc/init.d/corosync start    # run on both nodes
[root@server1 ~]# yum install -y crmsh-1.2.6-0.rc2.2.1.x86_64.rpm pssh-2.3.1-2.1.x86_64.rpm    # install the crm shell

[root@server4 ~]# yum install -y crmsh-1.2.6-0.rc2.2.1.x86_64.rpm pssh-2.3.1-2.1.x86_64.rpm    # install the crm shell

[root@server1 ~]# crm



crm(live)# configure
crm(live)configure# show
node server1
node server4
property $id="cib-bootstrap-options" \
dc-version="1.1.10-14.el6-368c726" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2"
crm(live)configure# property stonith-enabled=false    ## disable fencing for now
crm(live)configure# commit



[root@server1 ~]# crm_verify -LV    ## verifies cleanly once stonith is disabled
[root@server1 ~]# crm
crm(live)# configure
crm(live)configure# show
node server1
node server4
property $id="cib-bootstrap-options" \
dc-version="1.1.10-14.el6-368c726" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
stonith-enabled="false"
crm(live)configure# primitive vip ocf:heartbeat:IPaddr2 params ip=172.25.18.100 cidr_netmask=24 op monitor interval=1min
## define the VIP resource: IPaddr2 agent, address, netmask, and a 1-minute health check
crm(live)configure# commit
crm(live)configure# bye

[root@server1 ~]# vim /etc/haproxy/haproxy.cfg
frontend public
    bind *:80    ## bind on all addresses so haproxy also answers on the VIP
[root@server1 ~]# crm_mon    ## live cluster monitor
Last updated: Tue Apr 17 13:18:05 2018
Last change: Tue Apr 17 13:12:45 2018 via cibadmin on server4
Stack: classic openais (with plugin)
Current DC: server4 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
1 Resources configured

Online: [ server1 server4 ]

vip (ocf::heartbeat:IPaddr2): Started server1
[[email protected] ~]# crm
crm(live)# configure
crm(live)configure# show
node server1
node server4
primitive vip ocf:heartbeat:IPaddr2 \
params ip="172.25.18.100" cidr_netmask="24" \
op monitor interval="1min"
property $id="cib-bootstrap-options" \
dc-version="1.1.10-14.el6-368c726" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
stonith-enabled="false"
crm(live)configure# property no-quorum-policy=ignore    ## in a two-node cluster, ignore quorum loss so the VIP can still fail over
crm(live)configure# commit
crm(live)configure# bye
[root@server1 x86_64]# /etc/init.d/corosync stop
[root@server4 ~]# crm_mon    ## the vip moves from server1 to server4

[root@server1 ~]# cd rpmbuild/RPMS/x86_64/
[root@server1 x86_64]# scp haproxy-1.6.11-1.x86_64.rpm server4:~/
[root@server4 ~]# rpm -ivh haproxy-1.6.11-1.x86_64.rpm
[root@server4 ~]# /etc/init.d/haproxy start
[root@server1 x86_64]# scp /etc/haproxy/haproxy.cfg server4:/etc/haproxy/
[root@server1 x86_64]# /etc/init.d/haproxy stop    ## before the cluster manages haproxy, stop it and make sure it does not start at boot
Shutting down haproxy: [  OK  ]
[root@server1 x86_64]# chkconfig --list haproxy
haproxy        0:off  1:off  2:off  3:off  4:off  5:off  6:off

[root@server1 x86_64]# crm



crm(live)# configure
crm(live)configure# show
node server1
node server4
primitive vip ocf:heartbeat:IPaddr2 \
params ip="172.25.18.100" cidr_netmask="24" \
op monitor interval="1min"
property $id="cib-bootstrap-options" \
dc-version="1.1.10-14.el6-368c726" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore"
crm(live)configure# primitive haproxy lsb:haproxy op monitor interval=1min
crm(live)configure# commit
crm(live)configure# group lbgroup vip haproxy    ## group vip and haproxy so crm_mon shows them on the same server
crm(live)configure# commit
crm(live)configure# bye



[root@server1 x86_64]# crm node standby    ## take this node offline
[root@server1 x86_64]# crm node online    ## bring it back online

[root@server1 x86_64]# stonith_admin -I    ## list the available fence agents

[root@server1 x86_64]# stonith_admin -M -a fence_xvm    ## show the fence_xvm agent metadata

[root@foundation Desktop]# systemctl status fence_virtd.service    ## the fence daemon runs on the physical host
Copy /etc/cluster/fence_xvm.key to server1 and server4 (/etc/cluster must be created by hand on each node). The key on the nodes has to match the physical host's key exactly, otherwise fencing of the virtual machines will not start.
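The key itself is generated on the physical host. A sketch of the usual approach — on a real host the file is written to /etc/cluster/fence_xvm.key, while the sketch writes to the current directory:

```shell
# fence_xvm authenticates with a shared secret; a 128-byte random key is the
# common choice. On the physical host this file lives at /etc/cluster/fence_xvm.key.
dd if=/dev/urandom of=fence_xvm.key bs=128 count=1
```

The same file is then copied into /etc/cluster/ on server1 and server4 so the guests and the host share one key.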

[[email protected] cluster]# crm
crm(live)# configure
crm(live)configure# primitive vmfence stonith:fence_xvm params pcmk_host_map="server1:host1;server4:host4" op monitor interval=1min    ## define the fence resource; the map pairs cluster node names with VM domain names
crm(live)configure# commit
crm(live)configure# bye

[[email protected] cluster]# crm
crm(live)# configure
crm(live)configure# property stonith-enabled=true    ## re-enable fencing
crm(live)configure# commit
crm(live)configure# bye

[root@server1 ~]# crm
crm(live)# resource
crm(live)resource# cleanup vmfence      ## clear the failure state after fixing a misconfigured fence resource
Cleaning up vmfence on server1
Cleaning up vmfence on server4
Waiting for 1 replies from the CRMd. OK
crm(live)resource# show
 Resource Group: lbgroup
     vip    (ocf::heartbeat:IPaddr2):   Started
     haproxy    (lsb:haproxy):  Started
 vmfence    (stonith:fence_xvm):    Started

Verification:
[root@server1 ~]# echo c > /proc/sysrq-trigger    ## crash the kernel to confirm fencing works: the node should be power-fenced and rejoin

Original article: http://blog.51cto.com/13810716/2154829
