Highly available web service with corosync + pacemaker + drbd

Part 1: Lab environment

Node    OS            IP              DRBD_IP      DRBD disk    VIP
web1    CentOS 5.10   192.168.10.11   172.16.1.1   /dev/sdb     192.168.10.100 (floating)
web2    CentOS 5.10   192.168.10.12   172.16.1.2   /dev/sdb     192.168.10.100 (floating)

Notes:

1. The IP addresses of the two nodes have already been configured as shown in the table above.

2. Mutual ssh trust has been set up between the two nodes, and their clocks have been synchronized.
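For reference, a minimal sketch of that preparation (ssh-copy-id and ntpdate are standard tools; pool.ntp.org is only a placeholder NTP server, and the key exchange would be repeated in the other direction on web2):

[root@web1 ~]# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
[root@web1 ~]# ssh-copy-id root@web2
[root@web1 ~]# ntpdate pool.ntp.org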

Part 2: Install the required software (on both node 1 and node 2)

1. Install corosync and pacemaker

Download the matching rpm packages from:

http://clusterlabs.org/rpm/epel-5/x86_64/

The rpm packages downloaded this time are:

cluster-glue-1.0.6-1.6.el5.x86_64.rpm

cluster-glue-libs-1.0.6-1.6.el5.x86_64.rpm

corosync-1.2.7-1.1.el5.x86_64.rpm

corosynclib-1.2.7-1.1.el5.x86_64.rpm

heartbeat-3.0.3-2.3.el5.x86_64.rpm

heartbeat-libs-3.0.3-2.3.el5.x86_64.rpm

libesmtp-1.0.4-5.el5.x86_64.rpm                          // this package comes from the epel repository

pacemaker-1.0.12-1.el5.centos.x86_64.rpm

pacemaker-libs-1.0.12-1.el5.centos.x86_64.rpm

resource-agents-1.0.4-1.1.el5.x86_64.rpm

Install them (this assumes the downloaded rpm packages have already been copied to /root on both nodes):

[root@web1 ~]# for i in 1 2; do ssh web$i yum -y --nogpgcheck localinstall /root/*.rpm; done
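As an optional sanity check, you could confirm on both nodes that the key packages were registered with rpm:

[root@web1 ~]# for i in 1 2; do ssh web$i rpm -q corosync pacemaker cluster-glue resource-agents; done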

2. Install apache and keep it from starting at boot (the cluster, not init, must control it)

[root@web1 ~]# for i in 1 2; do ssh web$i yum -y install httpd; done

[root@web1 ~]# for i in 1 2; do ssh web$i chkconfig httpd off; done

3. Install drbd and keep it from starting at boot

[root@web1 ~]# for i in 1 2; do ssh web$i yum -y install drbd83 kmod-drbd83; done

[root@web1 ~]# for i in 1 2; do ssh web$i chkconfig drbd off; done

Part 3: Configure drbd

1. [root@web1 ~]# cp /usr/share/doc/drbd83-8.3.15/drbd.conf /etc/drbd.conf

2. [root@web1 ~]# cd /etc/drbd.d/

3. [root@web1 drbd.d]# cat global_common.conf      // only the changed parts are shown; untouched defaults are omitted

global {
    usage-count no;
}

common {
    protocol C;

    net {
        cram-hmac-alg sha1;
        shared-secret "wjcaiyf";
    }

    syncer {
        rate 56M;
    }
}
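Before continuing, it can be worth validating the configuration syntax; drbdadm dump parses the config files and prints the merged result, or an error if something is malformed:

[root@web1 drbd.d]# drbdadm dump all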

4. Create a new resource r0

[root@web1 drbd.d]# touch r0.res

[root@web1 drbd.d]# cat r0.res

resource r0 {
    device /dev/drbd0;
    disk /dev/sdb;
    meta-disk internal;

    on web1 {
        address 172.16.1.1:7789;
    }

    on web2 {
        address 172.16.1.2:7789;
    }
}

5. Copy global_common.conf and r0.res to the corresponding directory on web2

[root@web1 drbd.d]# scp global_common.conf r0.res web2:/etc/drbd.d/

6. Create the metadata for resource r0 (on both node 1 and node 2)

[root@web1 ~]# for i in 1 2; do ssh web$i drbdadm create-md r0; done
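Note: if /dev/sdb previously carried a filesystem, create-md may refuse to run because it finds existing data. A common workaround is to zero the start of the device first; this is destructive and assumes the disk holds nothing you want to keep:

[root@web1 ~]# dd if=/dev/zero of=/dev/sdb bs=1M count=128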

7. Start the drbd service on both nodes

[root@web1 ~]# /etc/init.d/drbd start

[root@web2 ~]# /etc/init.d/drbd start

8. Make web1 the primary node and start the initial data sync

[root@web1 ~]# drbdadm -- --overwrite-data-of-peer primary r0

9. Check /proc/drbd to confirm the synchronization status

[root@web1 ~]# cat /proc/drbd

version: 8.3.15 (api:88/proto:86-97)

GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by [email protected], 2013-03-27 16:01:26

 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:4 nr:8 dw:12 dr:17 al:0 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

The cs:Connected and ds:UpToDate/UpToDate fields above show that the synchronization has finished.
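While the initial sync is still in progress, /proc/drbd shows a progress bar instead; a simple way to follow it until it reaches the state above:

[root@web1 ~]# watch -n1 cat /proc/drbd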

10. Create an ext3 filesystem

[root@web1 ~]# mkfs.ext3 /dev/drbd0

11. Mount /dev/drbd0 on /var/www/html, put a test page in place, then unmount it again

[root@web1 ~]# mount /dev/drbd0 /var/www/html/

[root@web1 ~]# echo "This is a test Page" > /var/www/html/index.html

[root@web1 ~]# umount /var/www/html/

12. Return drbd to the Secondary/Secondary state and stop the drbd service

[root@web1 ~]# drbdadm secondary r0

[root@web1 ~]# for i in 1 2; do ssh web$i /etc/init.d/drbd stop; done

At this point the drbd configuration is complete.

Part 4: Configure corosync

1. [root@web1 ~]# cd /etc/corosync/

2. [root@web1 corosync]# cp corosync.conf.example corosync.conf

3. The finished configuration file looks like this:

[root@web1 corosync]# cat corosync.conf

# Please read the corosync.conf.5 manual page

compatibility: whitetank

totem {
    version: 2
    secauth: off
    threads: 0
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.10.0
        mcastaddr: 226.94.1.1
        mcastport: 5405
        ttl: 1
    }
}

logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    to_syslog: no
    logfile: /var/log/cluster/corosync.log
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}

amf {
    mode: disabled
}

#
# The section below was added
service {
    ver: 0
    name: pacemaker
}
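Note: secauth is off here for simplicity. Outside a lab you would normally set secauth: on and generate an authentication key with corosync-keygen (it writes /etc/corosync/authkey, which then has to be copied to web2 with its 0400 permissions preserved). This is a hardening suggestion, not a step the walkthrough above performs:

[root@web1 corosync]# corosync-keygen
[root@web1 corosync]# scp -p authkey web2:/etc/corosync/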

4. Copy the configuration file to web2

[root@web1 corosync]# scp corosync.conf web2:/etc/corosync/

5. Create the /var/log/cluster directory and start the corosync service

[root@web1 ~]# for i in 1 2; do ssh web$i mkdir /var/log/cluster; done

[root@web1 ~]# /etc/init.d/corosync start

Starting Corosync Cluster Engine (corosync):               [  OK  ]

[root@web1 ~]# ssh web2 /etc/init.d/corosync start

Starting Corosync Cluster Engine (corosync):               [  OK  ]

6. Enable corosync at boot

[root@web1 ~]# for i in 1 2; do ssh web$i chkconfig corosync on; done
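Before configuring resources, it is worth confirming in the log that corosync and the pacemaker plugin came up cleanly on both nodes; the usual checks for this corosync version look like this:

[root@web1 ~]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
[root@web1 ~]# grep TOTEM /var/log/cluster/corosync.log
[root@web1 ~]# grep pcmk_startup /var/log/cluster/corosync.log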

Part 5: Cluster service configuration

1. Check the current cluster status

[root@web1 ~]# crm status

Last updated: Tue Jun 23 15:28:58 2015

Last change: Tue Jun 23 15:23:58 2015 via crmd on web1

Stack: classic openais (with plugin)

Current DC: web1 - partition with quorum

Version: 1.1.10-14.el6-368c726

2 Nodes configured, 2 expected votes

0 Resources configured

Online: [ web1 web2 ]

The output above shows that node 1 and node 2 are both online, but no resources have been configured yet.

2. Set the cluster properties (disabling stonith and ignoring quorum loss is acceptable only in a two-node test setup like this one; a production cluster needs working fencing)

[root@web1 ~]# crm configure

crm(live)configure# property stonith-enabled=false

crm(live)configure# property no-quorum-policy=ignore

crm(live)configure# verify

crm(live)configure# commit

crm(live)configure# show

node web1

node web2

property cib-bootstrap-options: \

dc-version=1.1.10-14.el6-368c726 \

cluster-infrastructure="classic openais (with plugin)" \

expected-quorum-votes=2 \

stonith-enabled=false \

no-quorum-policy=ignore
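Incidentally, stonith-enabled=false is needed here because pacemaker refuses to start resources while fencing is required but unconfigured; running crm_verify against the live CIB before the property change would report exactly those errors. A quick check:

[root@web1 ~]# crm_verify -L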

3. Add a drbd resource named webdrbd (note the two monitor operations, one per role, with different intervals so pacemaker treats them as distinct checks)

crm(live)configure# primitive webdrbd ocf:linbit:drbd params \

> drbd_resource=r0 \

> op start timeout=240 \

> op stop timeout=100 \

> op monitor role=Master timeout=20 interval=20 \

> op monitor role=Slave timeout=20 interval=10

crm(live)configure# verify

4. Next, add a master/slave resource named ms_webdrbd on top of webdrbd (at most one Master cluster-wide, two clone instances with at most one per node, and notify=true, which the drbd agent requires)

crm(live)configure# ms ms_webdrbd webdrbd \

> meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

crm(live)configure# verify

5. Then add a location constraint so that ms_webdrbd prefers to run on web1 (a score of 50 is a preference, not a hard requirement)

crm(live)configure# location ms_webdrbd_prefers_web1 ms_webdrbd 50: web1

crm(live)configure# verify

Now commit the changes:

crm(live)configure# commit

Go back up one level and check the current cluster status:

crm(live)configure# cd

crm(live)# status

============

Last updated: Thu Jun 25 15:54:40 2015

Stack: openais

Current DC: web1 - partition with quorum

Version: 1.0.12-unknown

2 Nodes configured, 2 expected votes

1 Resources configured.

============

Online: [ web1 web2 ]

Master/Slave Set: ms_webdrbd

Masters: [ web1 ]

Slaves: [ web2 ]

The output above shows that ms_webdrbd is running, with web1 as Master and web2 as Slave, matching the preference for web1 configured earlier.

6. Add a filesystem resource named webstore, require it to run on the node where ms_webdrbd holds the Master role, and start it only after ms_webdrbd has finished promoting its Master

crm(live)configure# primitive webstore ocf:heartbeat:Filesystem params \

> device=/dev/drbd0 directory=/var/www/html fstype=ext3 \

> op start timeout=60 \

> op stop timeout=60

crm(live)configure# verify

crm(live)configure# colocation webstore_with_ms_webdrbd_Master inf: webstore ms_webdrbd:Master

crm(live)configure# verify

crm(live)configure# order ms_webdrbd_before_webstore mandatory: ms_webdrbd:promote webstore:start

crm(live)configure# verify

crm(live)configure# commit

crm(live)configure# cd

crm(live)# status

============

Last updated: Thu Jun 25 16:01:35 2015

Stack: openais

Current DC: web1 - partition with quorum

Version: 1.0.12-unknown

2 Nodes configured, 2 expected votes

2 Resources configured.

============

Online: [ web1 web2 ]

Master/Slave Set: ms_webdrbd

Masters: [ web1 ]

Slaves: [ web2 ]

webstore      (ocf::heartbeat:Filesystem):   Started  web1

7. Add the httpd service resource, require it to run together with webstore, and make httpd start only after webstore has started

crm(live)configure# primitive httpd lsb:httpd

crm(live)configure# colocation httpd_with_webstore inf: httpd webstore

crm(live)configure# order webstore_before_httpd mandatory: webstore:start httpd:start

crm(live)configure# verify

crm(live)configure# commit

crm(live)configure# cd

crm(live)# status

============

Last updated: Thu Jun 25 16:04:54 2015

Stack: openais

Current DC: web1 - partition with quorum

Version: 1.0.12-unknown

2 Nodes configured, 2 expected votes

3 Resources configured.

============

Online: [ web1 web2 ]

Master/Slave Set: ms_webdrbd

Masters: [ web1 ]

Slaves: [ web2 ]

webstore      (ocf::heartbeat:Filesystem):   Started web1

httpd (lsb:httpd):    Started web1

8. Add a virtual IP resource, require it to run together with the httpd service, and start the virtual IP only after httpd has started

crm(live)configure# primitive webip ocf:heartbeat:IPaddr params \

> ip=192.168.10.100

crm(live)configure# colocation webip_with_httpd inf: webip httpd

crm(live)configure# order httpd_before_webip mandatory: httpd webip

crm(live)configure# verify

crm(live)configure# commit

crm(live)configure# cd

crm(live)# status

============

Last updated: Thu Jun 25 16:21:09 2015

Stack: openais

Current DC: web1 - partition with quorum

Version: 1.0.12-unknown

2 Nodes configured, 2 expected votes

4 Resources configured.

============

Online: [ web1 web2 ]

Master/Slave Set: ms_webdrbd

Masters: [ web1 ]

Slaves: [ web2 ]

webstore      (ocf::heartbeat:Filesystem):   Started web1

httpd (lsb:httpd):    Started web1

webip (ocf::heartbeat:IPaddr):       Started web1
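With everything running on web1, a quick end-to-end check of the stack could look like this (a sketch; it assumes curl is installed on the machine that reaches the VIP):

[root@web1 ~]# curl http://192.168.10.100
This is a test Page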

Part 6: High availability testing

1. Put web1 into standby and check the cluster status

[root@web1 ~]# crm node standby

[root@web1 ~]# crm status

============

Last updated: Thu Jun 25 16:22:21 2015

Stack: openais

Current DC: web1 - partition with quorum

Version: 1.0.12-unknown

2 Nodes configured, 2 expected votes

4 Resources configured.

============

Node web1: standby

Online: [ web2 ]

Master/Slave Set: ms_webdrbd

Masters: [ web2 ]

Stopped: [ webdrbd:0 ]

webstore      (ocf::heartbeat:Filesystem):   Started web2

httpd (lsb:httpd):    Started web2

webip (ocf::heartbeat:IPaddr):       Started web2

The output above shows that all resources have failed over to web2.

2. Bring web1 back online (after a short while the resources move back to web1)

[root@web1 ~]# crm node online

[root@web1 ~]# crm status

============

Last updated: Thu Jun 25 16:24:19 2015

Stack: openais

Current DC: web1 - partition with quorum

Version: 1.0.12-unknown

2 Nodes configured, 2 expected votes

4 Resources configured.

============

Online: [ web1 web2 ]

Master/Slave Set: ms_webdrbd

Masters: [ web1 ]

Slaves: [ web2 ]

webstore      (ocf::heartbeat:Filesystem):   Started web1

httpd (lsb:httpd):    Started web1

webip (ocf::heartbeat:IPaddr):       Started web1

The output above shows that the resources have moved back to web1, matching the preference for running on web1 that we configured.
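This automatic failback follows from the location score of 50 combined with the default resource-stickiness of 0. If failback is undesirable (every move interrupts service briefly), a stickiness higher than the location score would keep resources where they are; a hedged example, not part of the setup above:

crm(live)configure# rsc_defaults resource-stickiness=100
crm(live)configure# commit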

Furthermore, http://192.168.10.100 was accessed repeatedly while the node went offline and came back online, and it returned "This is a test Page" every time.
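One simple way to run that continuous check from a third machine during the failover (a sketch; it assumes curl is available):

# while true; do curl -s --max-time 2 http://192.168.10.100 || echo "request failed"; sleep 1; done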

This completes the corosync + pacemaker + drbd high availability setup for the apache web service.
