A high-availability (HA) cluster is a server-cluster technology whose goal is to minimize service downtime. By keeping the user's business applications continuously available to the outside world, it reduces the impact of software, hardware, or human failures on the service to a minimum. HA applications keep diversifying, and their uses grow ever broader, which also brings complexity in configuration and operation, so choosing good HA software is critical.
Lab environment:
Physical host: 172.25.254.29
server4: 172.25.29.4
server5: 172.25.29.5
server6: 172.25.29.6
1. First configure the HA yum repository; it is identical on every lab server, and every host must resolve the others' names.
[root@server4 ~]# vim /etc/yum.repos.d/yum.repo
[base]
name=Instructor Server Repository
baseurl=http://172.25.29.250/rhel6.5
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
# HighAvailability rhel6.5
[HighAvailability]
name=Instructor HighAvailability Repository
baseurl=http://172.25.29.250/rhel6.5/HighAvailability
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
# LoadBalancer packages
[LoadBalancer]
name=Instructor LoadBalancer Repository
baseurl=http://172.25.29.250/rhel6.5/LoadBalancer
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
# ResilientStorage
[ResilientStorage]
name=Instructor ResilientStorage Repository
baseurl=http://172.25.29.250/rhel6.5/ResilientStorage
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
# ScalableFileSystem
[ScalableFileSystem]
name=Instructor ScalableFileSystem Repository
baseurl=http://172.25.29.250/rhel6.5/ScalableFileSystem
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
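The four add-on stanzas differ only in the section name and URL path, so the file can be generated with a loop instead of hand-copied. A minimal sketch, assuming the classroom mirror address 172.25.29.250; it writes to `./cluster.repo` for illustration, while the real target is `/etc/yum.repos.d/yum.repo`:

```shell
# Assumed mirror address; adjust for your classroom server.
BASE=http://172.25.29.250/rhel6.5
OUT=./cluster.repo    # demo path; real target is /etc/yum.repos.d/yum.repo

# Base repository stanza.
cat > "$OUT" <<EOF
[base]
name=Instructor Server Repository
baseurl=$BASE
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
EOF

# The four RHEL 6.5 add-on channels only differ in name and path.
for repo in HighAvailability LoadBalancer ResilientStorage ScalableFileSystem; do
cat >> "$OUT" <<EOF

[$repo]
name=Instructor $repo Repository
baseurl=$BASE/$repo
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
EOF
done
```

After copying the result into place, `yum clean all && yum repolist` should list all five repositories.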
[root@server4 ~]# yum clean all
[root@server4 ~]# yum repolist
[root@server4 ~]# vim /etc/hosts
172.25.29.4 server4.example.com
172.25.29.5 server5.example.com
172.25.29.6 server6.example.com
172.25.254.29
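Since every node needs the same three entries, they can be generated from one list. A small sketch; it writes to `./hosts.demo` for illustration, whereas on a real node the lines would be appended to `/etc/hosts`:

```shell
OUT=./hosts.demo      # demo path; on a real node append to /etc/hosts
: > "$OUT"            # start from an empty file so reruns don't duplicate

# One entry per cluster node, all on the 172.25.29.0/24 lab network.
for n in 4 5 6; do
    echo "172.25.29.$n server$n.example.com" >> "$OUT"
done
cat "$OUT"
```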
2. server4 and server5 act as the HA nodes; install luci on server6 to provide the web management interface.
[root@server4 ~]# yum install -y ricci
[root@server4 ~]# passwd ricci
Changing password for user ricci.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@server4 ~]# /etc/init.d/ricci start
[root@server4 ~]# chkconfig ricci on
[root@server5 ~]# yum install -y ricci
[root@server5 ~]# passwd ricci
Changing password for user ricci.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@server5 ~]# /etc/init.d/ricci start
[root@server5 ~]# chkconfig ricci on
[root@server6 ~]# yum install luci -y
[root@server6 ~]# /etc/init.d/luci start
Start luci...                                              [  OK  ]
Point your web browser to https://server6.example.com:8084 (or equivalent) to access luci
Open that URL in a browser, log in, and join server4 and server5 into one HA cluster group.
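Once both nodes have joined, `/etc/cluster/cluster.conf` on each node should look roughly like the sketch below. The cluster name `www` matches the `clustat` output later in this walkthrough; `config_version` and the exact attributes will differ on your system, and the two-node quorum settings are what luci generates for a two-member cluster:

```xml
<?xml version="1.0"?>
<cluster config_version="2" name="www">
  <clusternodes>
    <clusternode name="server4.example.com" nodeid="1"/>
    <clusternode name="server5.example.com" nodeid="2"/>
  </clusternodes>
  <!-- two_node mode: the cluster stays quorate with a single vote -->
  <cman expected_votes="1" two_node="1"/>
</cluster>
```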
3. Install fence on the physical host.
[root@foundation29 ~]# rpm -qa | grep fence
fence-virtd-serial-0.3.2-1.el7.x86_64
fence-virtd-0.3.2-1.el7.x86_64
fence-virtd-libvirt-0.3.2-1.el7.x86_64
fence-virtd-multicast-0.3.2-1.el7.x86_64
[root@foundation29 ~]# yum install -y fence-virtd*
[root@foundation29 ~]# fence_virtd -c
Module search path [/usr/lib64/fence-virt]:
Listener module [multicast]:
Multicast IP Address [225.0.0.12]:
Multicast IP Port [1229]:
Interface [br0]:
Key File [/etc/cluster/fence_xvm.key]:
Backend module [libvirt]:
Replace /etc/fence_virt.conf with the above [y/N]? y
[root@foundation29 ~]# systemctl restart fence_virtd
[root@foundation29 ~]# systemctl status fence_virtd
[root@foundation29 ~]# netstat -anulp | grep 1229
udp 0 0 0.0.0.0:1229 0.0.0.0:* 205
[root@foundation29 ~]# cd /etc/
[root@foundation29 etc]# mkdir cluster/
[root@foundation29 etc]# dd if=/dev/random of=/etc/cluster/fence_xvm.key bs=128 count=1
[root@foundation29 cluster]# ls
fence_xvm.key
[root@foundation29 cluster]# file fence_xvm.key
fence_xvm.key: data
[root@foundation29 cluster]# systemctl restart fence_virtd
[root@foundation29 cluster]# vim /etc/fence_virt.conf
[root@foundation29 cluster]# scp fence_xvm.key root@172.25.29.4:/etc/cluster/
root@172.25.29.4's password:
fence_xvm.key 100% 128 0.1KB/s 00:00
[root@foundation29 cluster]# scp fence_xvm.key root@172.25.29.5:/etc/cluster/
root@172.25.29.5's password:
fence_xvm.key 100% 128 0.1KB/s 00:00
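Fencing only works if every node holds a byte-identical 128-byte key. A sketch of the check, using local files to stand in for the copies on the remote nodes; on the real hosts you would compare the output of `md5sum /etc/cluster/fence_xvm.key` across machines. Note this sketch reads `/dev/urandom`, which, unlike the `/dev/random` used above, never blocks waiting for entropy:

```shell
# Generate a demo 128-byte key (real key lives in /etc/cluster/).
dd if=/dev/urandom of=./fence_xvm.key bs=128 count=1 2>/dev/null
cp ./fence_xvm.key ./fence_xvm.key.copy   # stands in for the scp'd copy

# The key must be exactly 128 bytes...
test "$(stat -c %s ./fence_xvm.key)" -eq 128 && echo "size ok"
# ...and identical everywhere it is deployed.
cmp -s ./fence_xvm.key ./fence_xvm.key.copy && echo "keys match"
```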
4. Configure the cluster so that it manages the servers centrally.
[root@server5 ~]# clustat    (check cluster status)
Cluster Status for www @ Sat Sep 17 18:56:42 2016
Member Status: Quorate
Member Name ID Status
------ ---- ---- ------
server4.example.com 1 Online
server5.example.com 2 Online, Local
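For a quick monitoring check, the online member names can be pulled out of `clustat`-style output with `awk`. A sketch; the sample text mirrors the member table shown above:

```shell
# Sample clustat member table (column 3 is the status).
clustat_out='server4.example.com 1 Online
server5.example.com 2 Online, Local'

# Keep the name (column 1) of every member whose status contains Online.
printf '%s\n' "$clustat_out" | awk '$3 ~ /Online/ {print $1}' > ./online_members.txt
cat ./online_members.txt
```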
[root@server4 ~]# fence_node server5.example.com
(server4 fences server5; server5 is powered off and reboots)
[root@server4 ~]# yum install httpd -y    (install httpd but do not start it; let the cluster group start it)
[root@server4 ~]# cd /var/www/html
[root@server4 html]# vim index.html    (add test content so failover is easy to verify)
[root@server4 cluster]# echo c > /proc/sysrq-trigger    (when server4 crashes, the group evicts it and server5 takes over; after server4 reboots it automatically rejoins the group and waits)
[root@server5 cluster]# /etc/init.d/httpd stop    (when httpd on server5 is stopped, the group automatically evicts server5 and server4 takes over; after server5 restarts it automatically rejoins the group)