High-Availability Cluster with Pacemaker

Lab environment: server3: 172.25.29.3
                 server4: 172.25.29.4
                 server6: 172.25.29.6

[root@server3 ~]# yum install -y pacemaker
[root@server3 ~]# cd /etc/corosync/
[root@server3 corosync]# cp corosync.conf.example corosync.conf
[root@server3 corosync]# vim corosync.conf

        ringnumber: 0
        bindnetaddr: 172.25.29.0
        mcastaddr: 226.94.1.1
        mcastport: 5405
        ttl: 1

amf {
        mode: disabled
}

service {
        name: pacemaker
        ver: 0
}
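With ver: 0 in the service block, corosync loads pacemaker as a plugin and starts it automatically, so only the corosync init script has to be started on each node.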
    
[root@server3 corosync]# scp corosync.conf 172.25.29.4:/etc/corosync/
[root@server3 corosync]# /etc/init.d/corosync start    (start the cluster stack)
[root@server3 corosync]# crm_verify -LV
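crm_verify -LV at this point usually complains that STONITH is enabled but no STONITH resources are defined; that is why stonith-enabled is turned off below and only re-enabled once the fence_xvm device exists. Cluster membership itself can be checked with something like the following (a sketch; output abridged):

[root@server3 corosync]# corosync-cfgtool -s    (ring status; should report "no faults")
[root@server3 corosync]# crm_mon -1             (one-shot status; both nodes should be Online)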

Download crmsh-1.2.6-0.rc2.2.1.x86_64.rpm and pssh-2.3.1-2.1.x86_64.rpm

[root@server3 ~]# yum install -y crmsh-1.2.6-0.rc2.2.1.x86_64.rpm pssh-2.3.1-2.1.x86_64.rpm
[root@server3 corosync]# crm_verify -LV
[root@server3 corosync]# crm

crm(live)# node
crm(live)node# show
server3.example.com: normal
server4.example.com: normal
crm(live)node# cd
crm(live)# configure
crm(live)configure# show

node server3.example.com
node server4.example.com
property $id="cib-bootstrap-options" \
    dc-version="1.1.10-14.el6-368c726" \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes="2"

crm(live)configure# property stonith-enabled=false
crm(live)configure# commit

crm(live)configure# primitive vip ocf:heartbeat:IPaddr2 params ip="172.25.29.100" cidr_netmask="32" op monitor interval="10s"
crm(live)configure# primitive mysite ocf:heartbeat:apache params configfile=/etc/httpd/conf/httpd.conf op monitor interval=1min
crm(live)configure# commit
crm(live)configure# edit     (opens the configuration in an editor for review)
crm(live)configure# show

node server3.example.com
node server4.example.com
primitive mysite ocf:heartbeat:apache \
    params configfile="/etc/httpd/conf/httpd.conf" \
    op monitor interval="1min"
primitive vip ocf:heartbeat:IPaddr2 \
    params ip="172.25.29.100" cidr_netmask="32" \
    op monitor interval="10s"
property $id="cib-bootstrap-options" \
    dc-version="1.1.10-14.el6-368c726" \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes="2" \
    stonith-enabled="false"
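If the commit went through, the address should be live on whichever node took the resource; a quick check (a sketch, assuming vip started on server3):

[root@server3 ~]# ip addr show eth0 | grep 172.25.29.100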

[root@server3 ~]# cd /etc/httpd/conf
[root@server3 conf]# vim httpd.conf

<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
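The ocf:heartbeat:apache agent probes this server-status handler during its monitor operation, which is why the handler must be enabled and reachable from 127.0.0.1. Once mysite is running on this node it can be checked by hand (a sketch):

[root@server3 conf]# curl -s http://127.0.0.1/server-status | head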

@@@@@@

[root@server4 ~]# yum install -y pacemaker
[root@server4 ~]# cd /etc/corosync/
[root@server4 corosync]# ls
[root@server4 corosync]# /etc/init.d/corosync start

[root@server4 ~]# cd /etc/httpd/conf
[root@server4 conf]# vim httpd.conf

<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>

[root@server3 corosync]# crm_mon

vip        (ocf::heartbeat:IPaddr2):    Started server3.example.com
mysite     (ocf::heartbeat:apache):     Started server4.example.com

(vip and mysite are not running on the same node)

@@@@@@@@@@@@@@@@@

[root@server3 corosync]# crm
crm(live)# configure
crm(live)configure# colocation apache-with-vip inf: mysite vip
crm(live)configure# commit
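A colocation constraint only ties placement together; it says nothing about start order. If Apache should only start after the VIP is configured, an order constraint can be added as well (a sketch reusing the resource names above):

crm(live)configure# order apache-after-vip inf: vip mysite
crm(live)configure# commit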

[root@server3 corosync]# crm_mon

vip        (ocf::heartbeat:IPaddr2):    Started server3.example.com
mysite     (ocf::heartbeat:apache):     Started server3.example.com

@@@@@@@@@@@@@@@@@@@

[root@server3 corosync]# /etc/init.d/corosync start
[root@server3 corosync]# crm
crm(live)# configure
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# property stonith-enabled=true
crm(live)configure# commit
crm(live)configure# bye
[root@server3 corosync]# /etc/init.d/corosync stop     (the services fail over to server4)
[root@server3 corosync]# /etc/init.d/corosync start
[root@server3 corosync]# crm
crm(live)# configure
crm(live)configure# show

node server3.example.com
node server4.example.com
primitive mysite ocf:heartbeat:apache \
    params configfile="/etc/httpd/conf/httpd.conf" \
    op monitor interval="1min"
primitive vip ocf:heartbeat:IPaddr2 \
    params ip="172.25.29.100" cidr_netmask="32" \
    op monitor interval="10s"
property $id="cib-bootstrap-options" \
    dc-version="1.1.10-14.el6-368c726" \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes="2" \
    stonith-enabled="true" \
    last-lrm-refresh="1474513359" \
    no-quorum-policy="ignore"
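In a two-node cluster, losing either node also loses quorum, so no-quorum-policy=ignore lets the surviving node keep the resources running. Re-enabling stonith-enabled=true only makes sense with a working fence device, which is what the fence_xvm setup below provides.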

[root@server3 corosync]# yum provides */fence_xvm
[root@server3 corosync]# stonith_admin -I
 fence_xvm
 fence_virt
 fence_pcmk
 fence_legacy
4 devices found
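fence_xvm talks to the fence_virtd daemon on the virtualization host and authenticates with a shared key; this walkthrough assumes both are already set up. Key distribution typically looks like this (a sketch; the host-side setup and paths are assumptions):

(on the virtualization host, after configuring fence_virtd)
scp /etc/cluster/fence_xvm.key root@172.25.29.3:/etc/cluster/
scp /etc/cluster/fence_xvm.key root@172.25.29.4:/etc/cluster/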

[root@server3 corosync]# crm
crm(live)# configure
crm(live)configure# cd
crm(live)# resource
crm(live)resource# show
 vip        (ocf::heartbeat:IPaddr2):    Started
 mysite     (ocf::heartbeat:apache):     Started
 vmfence    (stonith:fence_xvm):         Stopped    (vmfence is not running)
crm(live)resource# start vmfence
crm(live)resource# cleanup vmfence
Cleaning up vmfence on server3.example.com
Cleaning up vmfence on server4.example.com
Waiting for 1 replies from the CRMd. OK
crm(live)resource# show    (verify that vmfence is now started)
crm(live)resource# bye

[root@server3 ~]# stonith_admin -a fence_xvm -M
[root@server3 corosync]# crm
crm(live)# configure
crm(live)configure# primitive vmfence stonith:fence_xvm params pcmk_host_map="server3.example.com:vm3;server4.example.com:vm4" op monitor interval=1min

crm(live)configure# commit
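Before relying on it, the fence path can be exercised from a node; fence_xvm should be able to list the domains served by fence_virtd (a sketch):

[root@server3 ~]# fence_xvm -o list    (should list vm3 and vm4)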

[root@server4 corosync]# /etc/init.d/corosync start
[root@server4 corosync]# crm_mon

vip        (ocf::heartbeat:IPaddr2):    Started server4.example.com
mysite     (ocf::heartbeat:apache):     Started server4.example.com
vmfence    (stonith:fence_xvm):         Started server3.example.com

[root@server4 corosync]# /etc/init.d/httpd stop
[root@server4 corosync]# crm_mon    (stopping httpd by hand does not move the services)

vip        (ocf::heartbeat:IPaddr2):    Started server4.example.com
mysite     (ocf::heartbeat:apache):     Started server4.example.com
vmfence    (stonith:fence_xvm):         Started server3.example.com
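This is expected: when a monitor operation fails, pacemaker's default recovery is to restart the resource on the same node, so without a migration-threshold setting the Apache instance is simply started again in place rather than moved.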

[root@server4 corosync]# ip addr del 172.25.29.100/32 dev eth0
                              (delete the virtual IP and let the cluster move the services)

[root@server4 corosync]# ip addr show    (verify the address is gone)

[root@server4 corosync]# crm_mon         (the services have now moved)

vip        (ocf::heartbeat:IPaddr2):    Started server3.example.com
mysite     (ocf::heartbeat:apache):     Started server3.example.com
vmfence    (stonith:fence_xvm):         Started server4.example.com

[root@server3 corosync]# /etc/init.d/network stop
                       (watch the VM console: the node is fenced and the services fail over)

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
############ Cluster shared storage ############

[root@server6 ~]# yum install scsi-target-utils -y
[root@server6 ~]# chkconfig tgtd on
[root@server6 ~]# service tgtd start
[root@server6 ~]# dd if=/dev/zero of=/dev/vda bs=1024 count=1    (wipe any old partition table on the backing disk)
[root@server6 ~]# /etc/init.d/tgtd start
[root@server6 ~]# vim /etc/tgt/targets.conf

<target iqn.2016-09.com.example:server.disk>
    backing-store /dev/vda
    initiator-address 172.25.29.3
    initiator-address 172.25.29.4
</target>
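After editing targets.conf, tgtd has to re-read the configuration before the target is exported; restarting the service or updating in place both work (a sketch):

[root@server6 ~]# tgt-admin --update ALL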

[root@server6 ~]# tgt-admin -s    (show the exported target and its LUNs)

@@@@@@@@@@@@@@@@@@@@

[root@server3 ~]# yum install iscsi-initiator-utils -y
[root@server3 ~]# chkconfig iscsi on
[root@server3 ~]# service iscsi start
[root@server3 ~]# /etc/init.d/iscsi start
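The target still has to be discovered and logged in to before any disk appears; the original notes skip this step, but with iscsi-initiator-utils it is typically (a sketch, portal address taken from the lab layout):

[root@server3 ~]# iscsiadm -m discovery -t st -p 172.25.29.6
[root@server3 ~]# iscsiadm -m node -l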
[root@server3 ~]# fdisk -l    (the shared disk shows up, here as /dev/sdb)
[root@server3 ~]# fdisk -cu /dev/sdb    (create a 4G partition, /dev/sdb1)
[root@server3 ~]# mkfs.ext4 /dev/sdb1
    @@@@@@@@@@@@@@@@@ (both nodes now share the same partition)

[root@server4 ~]# yum install iscsi-initiator-utils -y
[root@server4 ~]# chkconfig iscsi on
[root@server4 ~]# service iscsi start
[root@server4 ~]# /etc/init.d/iscsi start
[root@server4 ~]# fdisk -l    (check the shared disk)
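server4 needs the same discovery and login as server3. If the /dev/sdb1 partition created on server3 is not visible here yet, re-reading the partition table usually helps (a sketch; partprobe comes from the parted package):

[root@server4 ~]# partprobe /dev/sdb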

[root@server4 ~]# cat /proc/partitions    (verify the shared partition is visible)
[root@server4 ~]# mount /dev/sdb1 /mnt/
[root@server4 ~]# cd /mnt/
[root@server4 mnt]# vim index.html    (create a test page for verification)

server3+server4

[root@server4 mnt]# cd
[root@server4 ~]# umount /mnt

@@@@@@@@@@@@@

[root@server3 corosync]# crm
crm(live)# configure
crm(live)configure# primitive webdata ocf:heartbeat:Filesystem params device=/dev/sdb1 directory=/var/www/html fstype=ext4 op monitor interval=1min
crm(live)configure# commit
crm(live)configure# show

(verify the new resource appears in the configuration)

[root@server3 corosync]# crm_mon

vip        (ocf::heartbeat:IPaddr2):      Started server3.example.com
mysite     (ocf::heartbeat:apache):       Started server3.example.com
vmfence    (stonith:fence_xvm):           Started server4.example.com
webdata    (ocf::heartbeat:Filesystem):   Started server4.example.com

[root@server3 corosync]# crm    (webdata is not on the same node as vip and mysite)
crm(live)# configure
crm(live)configure# group webgroup vip webdata mysite
crm(live)configure# commit
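A group implies both colocation and ordering of its members (vip, then webdata, then mysite), so the standalone colocation constraint defined earlier becomes redundant and could be dropped (a sketch using the constraint id from above):

crm(live)configure# delete apache-with-vip
crm(live)configure# commit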

[root@server3 corosync]# crm_mon

vmfence    (stonith:fence_xvm):           Started server4.example.com
 Resource Group: webgroup
     vip        (ocf::heartbeat:IPaddr2):      Started server3.example.com
     mysite     (ocf::heartbeat:apache):       Started server3.example.com
     webdata    (ocf::heartbeat:Filesystem):   Started server3.example.com

[root@server3 ~]# mount /dev/sdb1 /var/www/html    (only needed if the cluster has not already mounted it via webdata)

Visit 172.25.29.100: the page served is the shared test file from storage.
