Creating an RHCS Cluster Environment
1.1 Problem
Prepare four KVM virtual machines: three serve as cluster nodes, and the fourth runs luci and provides the iSCSI storage service. Implement the following:
- Use RHCS to create a cluster named tarena
- Every node in the cluster must mount the iSCSI shared storage
- Use any node in the cluster to partition and format the iSCSI device
- The virtual machine running luci needs an additional 20 GB disk
- The physical host's IP address is 192.168.4.1 and its hostname is desktop1.example.com
1.2 Solution
Use four virtual machines: one as the luci and iSCSI server, and three as node servers. The topology is shown in Figure 1.
Figure 1
The hostnames and corresponding IP addresses of all hosts are listed in Table 1.
Table 1: Hostnames and corresponding IP addresses
1.3 Steps
Follow the steps below to implement this case.
Step 1: Pre-installation preparation
1) Configure the yum repository on all hosts; note that every virtual machine must mount the installation DVD.
- [root@node1 ~]# mount /dev/cdrom /media
- [root@node1 ~]# rm -rf /etc/yum.repos.d/*
- [root@node1 ~]# vim /etc/yum.repos.d/dvd.repo
- [dvd]
- name=red hat
- baseurl=file:///media/
- enabled=1
- gpgcheck=0
- [HighAvailability]
- name=HighAvailability
- baseurl=file:///media/HighAvailability
- enabled=1
- gpgcheck=0
- [LoadBalancer]
- name=LoadBalancer
- baseurl=file:///media/LoadBalancer
- enabled=1
- gpgcheck=0
- [ResilientStorage]
- name=ResilientStorage
- baseurl=file:///media/ResilientStorage
- enabled=1
- gpgcheck=0
- [ScalableFileSystem]
- name=ScalableFileSystem
- baseurl=file:///media/ScalableFileSystem
- enabled=1
- gpgcheck=0
- [root@node1 ~]# yum clean all
Repeat the same mount, dvd.repo, and yum clean all steps on the remaining three hosts; the repository configuration is identical on every machine.
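Because the same dvd.repo must exist on every host, it can be generated with a short script instead of being edited by hand on each machine. A minimal sketch, using the repo names and paths from the configuration above (OUT is a hypothetical output path for illustration; on a real host it would be /etc/yum.repos.d/dvd.repo):

```shell
#!/bin/sh
# Write the base [dvd] repo plus the four add-on repos used above.
OUT=${OUT:-dvd.repo}

# Base repository at the DVD root
cat > "$OUT" <<'EOF'
[dvd]
name=red hat
baseurl=file:///media/
enabled=1
gpgcheck=0
EOF

# The add-on repositories live in subdirectories of the DVD,
# each named after its repo section.
for repo in HighAvailability LoadBalancer ResilientStorage ScalableFileSystem; do
    cat >> "$OUT" <<EOF
[$repo]
name=$repo
baseurl=file:///media/$repo
enabled=1
gpgcheck=0
EOF
done
```

Copying the generated file to each host and running yum clean all there gives every machine the same repository setup.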
2) Modify /etc/hosts and copy it to all hosts.
- [root@luci ~]# vim /etc/hosts
- 192.168.4.1  node1.example.com
- 192.168.4.2  node2.example.com
- 192.168.4.3  node3.example.com
- 192.168.4.4  luci.example.com
- [root@luci ~]# for i in {1..3};do scp /etc/hosts 192.168.4.$i:/etc/;done
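The four entries above follow a regular pattern (192.168.4.N maps to the Nth host), so the file can be generated rather than typed. A sketch writing to a scratch file (HOSTS_OUT is a hypothetical path used for illustration; the real target is /etc/hosts):

```shell
#!/bin/sh
# Build the cluster's hosts entries from the host table above.
HOSTS_OUT=${HOSTS_OUT:-hosts.cluster}
: > "$HOSTS_OUT"        # start from an empty file

i=1
for name in node1 node2 node3 luci; do
    printf '192.168.4.%s %s.example.com\n' "$i" "$name" >> "$HOSTS_OUT"
    i=$((i + 1))
done

# Pushing the file out would then reuse the scp loop from above:
#   for i in 1 2 3; do scp "$HOSTS_OUT" 192.168.4.$i:/etc/hosts; done
```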
3) Disable the NetworkManager and SELinux services on all hosts.
- [root@node1 ~]# service NetworkManager stop
- [root@node1 ~]# chkconfig NetworkManager off
- [root@node1 ~]# sed -i '/SELINUX=/s/enforcing/permissive/' /etc/sysconfig/selinux
- [root@node1 ~]# setenforce 0
- [root@node1 ~]# iptables -F; service iptables save
Run the same five commands on the remaining three hosts.
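The key line above is the sed edit, which switches SELinux to permissive in the config file without touching any other setting. A small demonstration on a scratch copy of the file (CFG is a hypothetical path; the real file is /etc/sysconfig/selinux):

```shell
#!/bin/sh
# Reproduce the SELinux config edit on a scratch file to show
# that only the SELINUX= line is changed.
CFG=${CFG:-selinux.conf}
cat > "$CFG" <<'EOF'
# This file controls the state of SELinux on the system.
SELINUX=enforcing
SELINUXTYPE=targeted
EOF

# The address /SELINUX=/ matches only the SELINUX= line, so
# SELINUXTYPE=targeted is left untouched.
sed -i '/SELINUX=/s/enforcing/permissive/' "$CFG"
```

Note that the sed edit only takes effect at the next boot; setenforce 0 switches the running system to permissive immediately, which is why both commands are used.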
Step 2: Deploy the iSCSI service
1) Deploy the iSCSI target on the luci host and export /dev/sdb via iSCSI.
Note: the target IQN is iqn.2015-06.com.example.luci:cluster.
- [root@luci ~]# yum -y install scsi-target-utils        //install the package
- .. ..
- [root@luci ~]# rpm -q scsi-target-utils
- scsi-target-utils-1.0.24-10.el6.x86_64
- [root@luci ~]# vim /etc/tgt/targets.conf
- <target iqn.2015-06.com.example.luci:cluster>
- # List of files to export as LUNs
- backing-store /dev/sdb                                 //define the backing device
- initiator-address 192.168.4.0/24                       //define the ACL
- </target>
- [root@luci ~]# service tgtd start                      //start the service
- Starting SCSI target daemon:                               [  OK  ]
- [root@luci ~]# chkconfig tgtd on
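The target stanza has only three variable parts (IQN, backing device, initiator ACL), so it can be generated from variables for reproducibility. A sketch, assuming CONF is a hypothetical output path (the real file is /etc/tgt/targets.conf) and using the values from the configuration above:

```shell
#!/bin/sh
# Generate the tgtd target stanza used above from variables.
CONF=${CONF:-targets.conf}
IQN="iqn.2015-06.com.example.luci:cluster"
DEV="/dev/sdb"
ACL="192.168.4.0/24"

cat > "$CONF" <<EOF
<target $IQN>
    backing-store $DEV
    initiator-address $ACL
</target>
EOF
```

After regenerating the file, tgtd must be restarted (or reconfigured) for the change to take effect.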
2) Mount the iSCSI share on all node servers.
- [root@node1 ~]# yum -y install iscsi-initiator-utils   //install the package
- [root@node1 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.4.4:3260
- [root@node1 ~]# iscsiadm -m node -T \
- >iqn.2015-06.com.example.luci:cluster \
- >-p 192.168.4.4:3260 -l                                //log in to the iSCSI share
Repeat the package installation, discovery, and login on node2 and node3.
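Since the identical discovery and login pair runs on each node, the per-node commands can be scripted. A dry-run sketch that only writes the command lines to a file (replace the generated `ssh` lines with real execution to apply them; CMDFILE is a hypothetical path for illustration):

```shell
#!/bin/sh
# Emit the iscsiadm command each node would run, one line per command.
CMDFILE=${CMDFILE:-iscsi_cmds.txt}
: > "$CMDFILE"

TARGET="iqn.2015-06.com.example.luci:cluster"
PORTAL="192.168.4.4:3260"

for node in node1 node2 node3; do
    printf 'ssh %s iscsiadm -m discovery -t sendtargets -p %s\n' \
        "$node" "$PORTAL" >> "$CMDFILE"
    printf 'ssh %s iscsiadm -m node -T %s -p %s -l\n' \
        "$node" "$TARGET" "$PORTAL" >> "$CMDFILE"
done
```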
Step 3: Install the cluster software
1) Install luci on the luci.example.com host and start the service.
- [root@luci ~]# yum -y install luci
- [root@luci ~]# service luci start;chkconfig luci on
2) Install ricci on all cluster nodes, set a password for the ricci user, and start the service.
- [root@node1 ~]# yum -y install ricci
- [root@node1 ~]# echo "11111" |passwd --stdin ricci
- [root@node1 ~]# service ricci start;chkconfig ricci on
Run the same three commands on node2 and node3.
Step 4: Configure the cluster
1) Access luci with a browser; any host with a browser will do.
- [root@desktop1 ~]# firefox https://luci.example.com:8084
2) Create the cluster.
After opening the luci page in the browser, go to the "Manage Clusters" page and click the "Create" button to create a new cluster, as shown in Figure 2.
Figure 2
In the dialog box that appears, enter the cluster name "tarena" and check "Download Packages", "Reboot Nodes Before Joining Cluster", and "Enable Shared Storage Support", as shown in Figure 3.
Figure 3
After all the nodes have rebooted, luci will display the page shown in Figure 4, indicating that every node has joined the tarena cluster.
Figure 4
Tip: if some nodes fail to join the cluster automatically after rebooting, copy the /etc/cluster/cluster.conf file from a healthy node to the failed nodes and make sure the cman and rgmanager services are running on those nodes.
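For reference when syncing /etc/cluster/cluster.conf, a minimal sketch of what luci typically generates for this cluster is shown below. This is an illustration, not the exact file: the config_version counter and any fencing or service sections will differ on a real system.

```xml
<?xml version="1.0"?>
<cluster config_version="1" name="tarena">
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1"/>
    <clusternode name="node2.example.com" nodeid="2"/>
    <clusternode name="node3.example.com" nodeid="3"/>
  </clusternodes>
  <cman/>
  <rm/>
</cluster>
```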