Installing and Configuring GPFS + RedHat 6.2 + V7000

1 V7000 Configuration

1.1 Pool layout

Three arrays are created on the V7000 and grouped into a single pool, mdiskgrp0, with a total usable capacity of 70.63 TB.
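The pool and its capacity can be verified from the V7000 CLI; a minimal sketch, assuming SSH access to the V7000 cluster IP as the superuser:

# list the pool, its total and free capacity
lsmdiskgrp mdiskgrp0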

1.2 LUN layout

Create three LUNs in the mdiskgrp0 pool, each 1 TB in size.
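If the volumes are created from the V7000 CLI rather than the GUI, the commands would look roughly like the sketch below; the volume names gpfs_lun1 to gpfs_lun3 and the I/O group are assumptions for illustration, not taken from the original setup:

# create three 1 TB volumes in pool mdiskgrp0 (volume names are hypothetical)
mkvdisk -mdiskgrp mdiskgrp0 -iogrp 0 -size 1 -unit tb -name gpfs_lun1
mkvdisk -mdiskgrp mdiskgrp0 -iogrp 0 -size 1 -unit tb -name gpfs_lun2
mkvdisk -mdiskgrp mdiskgrp0 -iogrp 0 -size 1 -unit tb -name gpfs_lun3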

1.3 Map the LUNs to the hosts

Create two host objects, each containing two WWPNs. On Linux the WWPNs can be read from sysfs as follows:

Host 1:

[[email protected] ~]# cat /sys/class/fc_host/host6/port_name

0x10000000c9cff748

[[email protected] ~]# cat /sys/class/fc_host/host7/port_name

0x10000000c9cff749

Host 2:

[[email protected] ~]# cat /sys/class/fc_host/host6/port_name

0x10000000c9cfedb4

[[email protected] ~]# cat /sys/class/fc_host/host7/port_name

0x10000000c9cfedb5

Map all three LUNs to both hosts, so that each host sees the same three disks.
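On the V7000 CLI, host creation and volume mapping would look roughly as follows; the host object names and the volume names carried over from the sketch above are assumptions:

# one host object per server, each with its two FC WWPNs (names are hypothetical)
mkhost -name linux170 -fcwwpn 10000000C9CFF748:10000000C9CFF749
mkhost -name linux171 -fcwwpn 10000000C9CFEDB4:10000000C9CFEDB5
# map every volume to both hosts; -force acknowledges that a volume is shared by more than one host
mkvdiskhostmap -host linux170 gpfs_lun1
mkvdiskhostmap -host linux170 gpfs_lun2
mkvdiskhostmap -host linux170 gpfs_lun3
mkvdiskhostmap -force -host linux171 gpfs_lun1
mkvdiskhostmap -force -host linux171 gpfs_lun2
mkvdiskhostmap -force -host linux171 gpfs_lun3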

2 SAN Configuration

2.1 SAN layout

Each host has two FC ports. Zoning is set up so that the two FC ports of each host reach two V7000 ports through different switches; these two V7000 ports belong to the V7000's A and B controllers respectively. As a result, Linux sees four paths to each LUN.

SAN topology diagram:

2.2 Configuration of SAN switch 10.61

2.2.1 Aliases

Create four aliases:

alias: YT_RHLinux170_host6  10:00:00:00:c9:cf:f7:48

alias: YT_RHLinux171_host6  10:00:00:00:c9:cf:ed:b4

alias: YT_V7000_Node1_port3 50:05:07:68:02:35:d6:07

alias: YT_V7000_Node2_port3 50:05:07:68:02:35:d6:08

2.2.2 Zones

Create two zones; each zone contains one host port and the two V7000 ports:

zone: YT_RHLinux170_V7000

YT_RHLinux170_host6; YT_V7000_Node1_port3;

YT_V7000_Node2_port3

zone: YT_RHLinux171_V7000

YT_RHLinux171_host6; YT_V7000_Node1_port3;

YT_V7000_Node2_port3
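On a Brocade switch these objects are typically created with the FOS zoning CLI; a minimal sketch for switch 10.61, where the zoning configuration name YT_CFG is an assumption (use the name reported by cfgshow). Switch 10.62 follows the same pattern with the host7/port-4 aliases.

alicreate "YT_RHLinux170_host6", "10:00:00:00:c9:cf:f7:48"
alicreate "YT_RHLinux171_host6", "10:00:00:00:c9:cf:ed:b4"
alicreate "YT_V7000_Node1_port3", "50:05:07:68:02:35:d6:07"
alicreate "YT_V7000_Node2_port3", "50:05:07:68:02:35:d6:08"
zonecreate "YT_RHLinux170_V7000", "YT_RHLinux170_host6; YT_V7000_Node1_port3; YT_V7000_Node2_port3"
zonecreate "YT_RHLinux171_V7000", "YT_RHLinux171_host6; YT_V7000_Node1_port3; YT_V7000_Node2_port3"
cfgadd "YT_CFG", "YT_RHLinux170_V7000; YT_RHLinux171_V7000"
cfgsave
cfgenable "YT_CFG"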

2.3 Configuration of SAN switch 10.62

2.3.1 Aliases

Create four aliases:

alias: YT_RHLinux170_host7 10:00:00:00:c9:cf:f7:49

alias: YT_RHLinux171_host7 10:00:00:00:c9:cf:ed:b5

alias: YT_V7000_Node1_port4 50:05:07:68:02:45:d6:07

alias: YT_V7000_Node2_port4 50:05:07:68:02:45:d6:08

2.3.2 Zones

Create two zones; each zone contains one host port and the two V7000 ports:

zone: YT_RHLinux170_V7000

YT_RHLinux170_host7; YT_V7000_Node1_port4;

YT_V7000_Node2_port4

zone: YT_RHLinux171_V7000

YT_RHLinux171_host7; YT_V7000_Node1_port4;

YT_V7000_Node2_port4


3 Multipath Software Configuration

3.1 Check whether the multipath software is installed

[[email protected] ~]# rpm -qa |grep multipath

device-mapper-multipath-libs-0.4.9-46.el6.x86_64

device-mapper-multipath-0.4.9-46.el6.x86_64
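If the two packages are missing, install them from the RHEL 6 repositories before continuing; a minimal sketch, assuming a working yum repository:

yum -y install device-mapper-multipath device-mapper-multipath-libs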

3.2 Edit /etc/multipath.conf

A freshly installed system does not yet have /etc/multipath.conf. Run

mpathconf --enable --with_multipathd y

to generate the file, then edit /etc/multipath.conf:


[[email protected] ~]# cat /etc/multipath.conf

#blacklist_exceptions {

# device {

# vendor "IBM"

# product "S/390.*"

# }

#}

## Use user friendly names, instead of using WWIDs as names.

defaults {

polling_interval 30

failback immediate

no_path_retry queue

rr_min_io 100

path_checker tur

user_friendly_names yes

}

##

## Here is an example of how to configure some standard options.

##

#

#defaults {

# udev_dir /dev

# polling_interval 10

# selector "round-robin 0"

# path_grouping_policy multibus

# getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"

# prio alua

# path_checker readsector0

# rr_min_io 100

# max_fds 8192

# rr_weight priorities

# failback immediate

# no_path_retry fail

# user_friendly_names yes

#}

##

## The wwid line in the following blacklist section is shown as an example

## of how to blacklist devices by wwid. The 2 devnode lines are the

## compiled in default blacklist. If you want to blacklist entire types

## of devices, such as all scsi devices, you should use a devnode line.

## However, if you want to blacklist specific devices, you should use

## a wwid line. Since there is no guarantee that a specific device will

## not change names on reboot (from /dev/sda to /dev/sdb for example)

## devnode lines are not recommended for blacklisting specific devices.

##

#blacklist {

# wwid 26353900f02796769

# devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"

# devnode "^hd[a-z]"

#}

#multipaths {

# multipath {

# wwid 3600508b4000156d700012000000b0000

# alias yellow

# path_grouping_policy multibus

# path_checker readsector0

# path_selector "round-robin 0"

# failback manual

# rr_weight priorities

# no_path_retry 5

# }

# multipath {

# wwid 1DEC_____321816758474

# alias red

# }

#}

devices {

# device {

# vendor "COMPAQ "

# product "HSV110 (C)COMPAQ"

# path_grouping_policy multibus

# getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"

# path_checker readsector0

# path_selector "round-robin 0"

# hardware_handler "0"

# failback 15

# rr_weight priorities

# no_path_retry queue

# }

# device {

# vendor "COMPAQ "

# product "MSA1000 "

# path_grouping_policy multibus

# }

# SVC

device {

vendor "IBM"

product "2145"

path_grouping_policy group_by_prio

prio "alua"

}

}

blacklist {

# devnode ".*"

}

[[email protected] ~]#

3.3 Enable the multipath daemon at boot

chkconfig multipathd on

chkconfig --level 345 multipathd on

3.4 Restart the service to apply the configuration

service multipathd reload

Then run multipath -v2 to rediscover the paths. If the disks are still not visible, reboot the system.
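Before falling back to a reboot, it is usually enough to rescan the FC SCSI hosts so the newly mapped LUNs show up; a minimal sketch using the standard sysfs rescan interface (host numbers vary per system):

# rescan all SCSI hosts, then rebuild and list the multipath maps
for h in /sys/class/scsi_host/host*; do echo "- - -" > $h/scan; done
multipath -v2
multipath -ll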

3.5 Check the multipath devices

Use multipath -ll to display the path status:

[[email protected] ~]# multipath -ll

mpathd (36005076802810b5fe000000000000002) dm-2 IBM,2145

size=1.0T features='1 queue_if_no_path' hwhandler='0' wp=rw

|-+- policy='round-robin 0' prio=50 status=active

| |- 6:0:0:2 sdc 8:32 active ready running

| `- 7:0:0:2 sdj 8:144 active ready running

`-+- policy='round-robin 0' prio=10 status=enabled

  |- 6:0:1:2 sdf 8:80 active ready running

  `- 7:0:1:2 sdm 8:192 active ready running

mpathc (36005076802810b5fe000000000000001) dm-1 IBM,2145

size=1.0T features='1 queue_if_no_path' hwhandler='0' wp=rw

|-+- policy='round-robin 0' prio=50 status=active

| |- 6:0:1:1 sde 8:64 active ready running

| `- 7:0:1:1 sdl 8:176 active ready running

`-+- policy='round-robin 0' prio=10 status=enabled

  |- 6:0:0:1 sdb 8:16 active ready running

  `- 7:0:0:1 sdi 8:128 active ready running

mpathb (36005076802810b5fe000000000000000) dm-0 IBM,2145

size=1.0T features='1 queue_if_no_path' hwhandler='0' wp=rw

|-+- policy='round-robin 0' prio=50 status=active

| |- 6:0:0:0 sda 8:0 active ready running

| `- 7:0:0:0 sdh 8:112 active ready running

`-+- policy='round-robin 0' prio=10 status=enabled

  |- 6:0:1:0 sdd 8:48 active ready running

  `- 7:0:1:0 sdk 8:160 active ready running

4 GPFS Configuration

4.1 GPFS planning

We will build a GPFS cluster containing two RedHat 6.2 Linux nodes, using 3 TB of shared storage on the V7000:

Node name          OS version   Disks            Cluster name   NSDs
IaYT2d3A01-gpfs    Redhat6.2    dm-0;dm-1;dm-2   3ADB_CLUSTER   nsd01;nsd02;nsd03
IaYT2d3A02-gpfs    Redhat6.2    dm-0;dm-1;dm-2   3ADB_CLUSTER   nsd01;nsd02;nsd03

For clarity, "node1" in the steps below always refers to IaYT2d3A01-gpfs, and "node2" to IaYT2d3A02-gpfs.

4.2 Install prerequisite packages

GPFS requires the ksh, compat-libstdc++, kernel-devel, gcc, gcc-c++ and cpp packages:

yum -y install ksh compat-libstdc++ kernel-devel gcc gcc-c++ cpp

4.3 Install the GPFS software

The installation package has been uploaded to /tmp/gpfs on both Linux servers; running the installer extracts the RPMs into /usr/lpp/mmfs/3.4:

# chmod a+x gpfs_install_3.4.0-0_x86_64

# /tmp/gpfs_install_3.4.0-0_x86_64 --text-only

Install the RPMs:

rpm -ivh /usr/lpp/mmfs/3.4/gpfs*.rpm

4.4 Install the GPFS PTF (fix pack)

Download the latest GPFS fix pack from IBM Fix Central

http://www-933.ibm.com/support/fixcentral/

and upload it to /tmp/gpfs/ptf, then install it:

# gzip -d GPFS-3.4.0.24-x86_64-Linux.tar.gz

# tar -xvf GPFS-3.4.0.24-x86_64-Linux.tar

# rpm -Uvh /tmp/gpfs/ptf/gpfs*.rpm


Verify the installation:

[[email protected] ptf]# rpm -qa|grep gpfs

gpfs.gpl-3.4.0-24.noarch

gpfs.docs-3.4.0-24.noarch

gpfs.base-3.4.0-24.x86_64

gpfs.msg.en_US-3.4.0-24.noarch

4.5 Edit /etc/hosts

Each system has a dedicated internal IP address for GPFS inter-node communication.

Add the IP addresses and host names to /etc/hosts on every system, for example:

[[email protected] ptf]# cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

10.143.5.170 IaYT2d3A01

10.143.13.172 IaYT2d3A01-gpfs

10.143.5.171 IaYT2d3A02

10.143.13.173 IaYT2d3A02-gpfs

4.6 Build the portability layer

GPFS interfaces with the kernel through its portability layer, which we compile using the build tooling shipped with GPFS.

Run the following on each node:

cd /usr/lpp/mmfs/src

make Autoconfig

make World

make InstallImages

4.7 Add the environment variable

On each node, append the following line to /root/.bash_profile (re-login or source the file afterwards):

PATH=$PATH:$HOME/bin:/usr/lpp/mmfs/bin

4.8 Set up SSH trust between the nodes

1) Run the following on every node:

#cd /root/.ssh

#ssh-keygen -t rsa

#ssh-keygen -t dsa

2) Run on node1:

# cat /root/.ssh/id_rsa.pub>>/root/.ssh/authorized_keys

# cat /root/.ssh/id_dsa.pub>>/root/.ssh/authorized_keys

#ssh node2 cat /root/.ssh/id_rsa.pub>>/root/.ssh/authorized_keys

#ssh node2 cat /root/.ssh/id_dsa.pub>>/root/.ssh/authorized_keys

#scp /root/.ssh/authorized_keys node2:/root/.ssh/authorized_keys

3) Test the trust relationship between the two nodes.

Run the following on both nodes:

ssh node1 date

ssh node2 date

4.9 Create the GPFS cluster

4.9.1 Create the node definition file

On node1 (IaYT2d3A01), run vi /tmp/gpfs/gpfsnodes and add the following two lines:

IaYT2d3A01-gpfs:quorum-manager

IaYT2d3A02-gpfs:quorum-manager

4.9.2 Create the cluster

1) On node1, run:

mmcrcluster -n /tmp/gpfs/gpfsnodes -p IaYT2d3A01-gpfs -s IaYT2d3A02-gpfs -C 3ADB_CLUSTER -r /usr/bin/ssh -R /usr/bin/scp

2) Then start GPFS:

mmstartup -a

4.9.3 Accept the licenses

On node1, run:

mmchlicense server --accept -N IaYT2d3A01-gpfs,IaYT2d3A02-gpfs

4.9.4 Check the cluster configuration

[[email protected] .ssh]# mmlscluster

GPFS cluster information

========================

GPFS cluster name: 3ADB_CLUSTER.IaYT2d3A01-gpfs

GPFS cluster id: 760841895814613401

GPFS UID domain: 3ADB_CLUSTER.IaYT2d3A01-gpfs

Remote shell command: /usr/bin/ssh

Remote file copy command: /usr/bin/scp

GPFS cluster configuration servers:

-----------------------------------

Primary server: IaYT2d3A01-gpfs

Secondary server: IaYT2d3A02-gpfs

Node Daemon node name IP address Admin node name Designation

-----------------------------------------------------------------------------------------------

1 IaYT2d3A01-gpfs 10.143.13.172 IaYT2d3A01-gpfs quorum-manager

2 IaYT2d3A02-gpfs 10.143.13.173 IaYT2d3A02-gpfs quorum-manager

4.10 Create the NSDs

On node1, create the file /tmp/gpfs/gpfsnsd containing the following disk descriptors (the fields are DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName, so each multipath device becomes a dataAndMetadata NSD in failure group 1):

dm-0:::dataAndMetadata:1:nsd01

dm-1:::dataAndMetadata:1:nsd02

dm-2:::dataAndMetadata:1:nsd03

Then run the following command from /tmp/gpfs (mmcrnsd rewrites the descriptor file so it can be passed directly to mmcrfs in the next step):

mmcrnsd -F gpfsnsd

4.11 Create the GPFS file system

On node1, run:

mmcrfs /gpfs gpfsdev -F gpfsnsd -B 512K -A yes -n 3
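With -A yes the file system mounts automatically whenever GPFS starts; to mount and verify it right away, something like the following should work (a sketch, not part of the original procedure):

mmmount gpfsdev -a
df -h /gpfs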

4.12 Configure GPFS parameters

1) Stop the GPFS service:

mmshutdown -a

2) On node1, run:

mmchconfig autoload=yes

mmchconfig pagepool=512M

mmchconfig prefetchThreads=72

mmchconfig worker1Threads=300

mmchconfig maxMBpS=1024

mmchconfig tiebreakerDisks="nsd01"

3) Start the GPFS service:

mmstartup -a

4.13 Check GPFS status

[[email protected] .ssh]# mmlscluster

GPFS cluster information

========================

GPFS cluster name: 3ADB_CLUSTER.IaYT2d3A01-gpfs

GPFS cluster id: 760841895814613401

GPFS UID domain: 3ADB_CLUSTER.IaYT2d3A01-gpfs

Remote shell command: /usr/bin/ssh

Remote file copy command: /usr/bin/scp

GPFS cluster configuration servers:

-----------------------------------

Primary server: IaYT2d3A01-gpfs

Secondary server: IaYT2d3A02-gpfs

Node Daemon node name IP address Admin node name Designation

-----------------------------------------------------------------------------------------------

1 IaYT2d3A01-gpfs 10.143.13.172 IaYT2d3A01-gpfs quorum-manager

2 IaYT2d3A02-gpfs 10.143.13.173 IaYT2d3A02-gpfs quorum-manager

[[email protected] .ssh]# df -m

Filesystem 1M-blocks Used Available Use% Mounted on

/dev/sdg3 837862 6611 788690 1% /

tmpfs 16038 1 16038 1% /dev/shm

/dev/sdg1 124 32 86 28% /boot

/dev/gpfsdev 3145728 9934 3135794 1% /gpfs

[[email protected] .ssh]# mmdf gpfs

mmdf: File system gpfs is not known to the GPFS cluster.

mmdf: Command failed. Examine previous error messages to determine cause.

[[email protected] .ssh]# mmdf gpfsdev

disk disk size failure holds holds free KB free KB

name in KB group metadata data in full blocks in fragments

--------------- ------------- -------- -------- ----- -------------------- -------------------

Disks in storage pool: system (Maximum disk size allowed is 8.0 TB)

nsd01 1073741824 1 Yes Yes 1070353920 (100%) 1328 ( 0%)

nsd02 1073741824 1 Yes Yes 1070350336 (100%) 1328 ( 0%)

nsd03 1073741824 1 Yes Yes 1070348800 (100%) 960 ( 0%)

------------- -------------------- -------------------

(pool total) 3221225472 3211053056 (100%) 3616 ( 0%)

============= ==================== ===================

(total) 3221225472 3211053056 (100%) 3616 ( 0%)

Inode Information

-----------------

Number of used inodes: 4009

Number of free inodes: 496727

Number of allocated inodes: 500736

Maximum number of inodes: 3146752

[[email protected] .ssh]# mmgetstate -a -L

Node number Node name Quorum Nodes up Total nodes GPFS state Remarks

------------------------------------------------------------------------------------

1 IaYT2d3A01-gpfs 1* 2 2 active quorum node

2 IaYT2d3A02-gpfs 1* 2 2 active quorum node

[[email protected] .ssh]# mmlsconfig

Configuration data for cluster 3ADB_CLUSTER.IaYT2d3A01-gpfs:

------------------------------------------------------------

myNodeConfigNumber 1

clusterName 3ADB_CLUSTER.IaYT2d3A01-gpfs

clusterId 760841895814613401

autoload yes

minReleaseLevel 3.4.0.7

dmapiFileHandleSize 32

pagepool 512M

prefetchThreads 72

worker1Threads 300

maxMBpS 1024

tiebreakerDisks nsd01

adminMode central

File systems in cluster 3ADB_CLUSTER.IaYT2d3A01-gpfs:

-----------------------------------------------------

/dev/gpfsdev

[[email protected] .ssh]# mmlsnsd

File system Disk name NSD servers

---------------------------------------------------------------------------

gpfsdev nsd01 (directly attached)

gpfsdev nsd02 (directly attached)

gpfsdev nsd03 (directly attached)

[[email protected] .ssh]# mmlsdisk gpfsdev -L

disk driver sector failure holds holds storage

name type size group metadata data status availability disk id pool remarks

------ ------ ------ ------- -------- ----- -------- ------------ ------- ----------------

nsd01 nsd 512 1 Yes Yes ready up 1 system desc

nsd02 nsd 512 1 Yes Yes ready up 2 system desc

nsd03 nsd 512 1 Yes Yes ready up 3 system desc

Number of quorum disks: 3

Read quorum value: 2

Write quorum value: 2

[[email protected] .ssh]# mmlsfs gpfsdev

flag value description

------------------- ------------------------ -----------------------------------

-f 16384 Minimum fragment size in bytes

-i 512 Inode size in bytes

-I 16384 Indirect block size in bytes

-m 1 Default number of metadata replicas

-M 2 Maximum number of metadata replicas

-r 1 Default number of data replicas

-R 2 Maximum number of data replicas

-j cluster Block allocation type

-D nfs4 File locking semantics in effect

-k all ACL semantics in effect

-n 3 Estimated number of nodes that will mount file system

-B 524288 Block size

-Q none Quotas enforced

none Default quotas enabled

--filesetdf No Fileset df enabled?

-V 12.10 (3.4.0.7) File system version

--create-time Thu Sep 26 13:54:52 2013 File system creation time

-u Yes Support for large LUNs?

-z No Is DMAPI enabled?

-L 4194304 Logfile size

-E Yes Exact mtime mount option

-S No Suppress atime mount option

-K whenpossible Strict replica allocation option

--fastea Yes Fast external attributes enabled?

--inode-limit 3146752 Maximum number of inodes

-P system Disk storage pools in file system

-d nsd01;nsd02;nsd03 Disks in file system

-A yes Automatic mount option

-o none Additional mount options

-T /gpfs Default mount point

--mount-priority 0 Mount priority

Source: https://www.cnblogs.com/shuijinzi/p/11951971.html
