Linux Operations, Phase 5 (Part 9): iSCSI & cLVM & gfs2

gfs2 (Global File System version 2): a cluster file system (CFS). It uses the HA stack's messaging (infrastructure) layer to announce to the other nodes which locks each node currently holds.

cLVM (Cluster Logical Volume Manager): carves the shared storage into logical volumes managed cluster-wide. It borrows the HA stack's heartbeat/messaging mechanism (its communication channel, including its split-brain handling), and every node must run the clvmd service (cman and rgmanager have to be started before it) so the nodes can talk to one another.

Prepare four nodes (node{1,2,3} consume the shared storage; node4 exports the shared storage and also serves as the jump host).

On node{1,2,3}, prepare the yum repository, synchronize the clocks, set the node hostnames, and fill in /etc/hosts; node4 must have passwordless SSH trust with node{1,2,3}.
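A minimal sketch of the /etc/hosts prerequisite. The 192.168.41.131-134 addresses are taken from the iSCSI configuration later in these notes; adjust them to your environment, and run the equivalent on every node.

```shell
# Generate /etc/hosts entries for the four nodes (addresses assumed from
# the iSCSI configuration below; adjust to your environment), then append
# the result to /etc/hosts on every node.
hosts_entries=$(for i in 1 2 3 4; do
    echo "192.168.41.13$i node$i.magedu.com node$i"
done)
echo "$hosts_entries"   # review, then:  echo "$hosts_entries" >> /etc/hosts
```

SSH trust from node4 is set up the usual way (ssh-keygen on node4, then copy the public key to node1/2/3 with ssh-copy-id).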

(1) Prepare the shared storage

node4-side:

[root@node4 ~]# vim /etc/tgt/targets.conf
default-driver iscsi
<target iqn.2015-07.com.magedu:teststore.disk1>
    <backing-store /dev/sdb>
        vendor_id magedu
        lun 1
    </backing-store>
    incominguser iscsi iscsi
    initiator-address 192.168.41.131
    initiator-address 192.168.41.132
    initiator-address 192.168.41.133
</target>

[root@node4 ~]# service tgtd restart

[root@node4 ~]# netstat -tnlp    (3260/tcp, tgtd)

[root@node4 ~]# tgtadm --lld iscsi --mode target --op show

……

LUN: 1

……

Account information:

iscsi

ACL information:

192.168.41.131

192.168.41.132

192.168.41.133

[root@node4 ~]# alias ha='for I in {1..3};do ssh node$I'

[root@node4 ~]# ha 'rm -rf /var/lib/iscsi/send_targets/*';done    (the trailing ;done closes the for loop opened inside the alias)
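The alias trick above works, but the caller has to remember the trailing ;done. A shell function is a sketch of a more robust alternative, assuming the node1..node3 hostnames from the setup above:

```shell
# Hypothetical replacement for the `ha` alias: run one command on node1..node3.
ha() {
    local cmd="$1"
    local i
    for i in 1 2 3; do
        ssh "node$i" "$cmd"
    done
}

# Usage:  ha 'service iscsi restart'
```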

node{1,2,3}-side:

[root@node1 ~]# vim /etc/iscsi/iscsid.conf    (make the same edit on node2 and node3)

node.session.auth.authmethod = CHAP

node.session.auth.username = iscsi

node.session.auth.password = iscsi

node4-side:

[root@node4 ~]# ha 'service iscsi restart';done

[root@node4 ~]# ha 'iscsiadm -m discovery -t st -p 192.168.41.134';done

[root@node4 ~]# ha 'iscsiadm -m node -T iqn.2015-07.com.magedu:teststore.disk1 -p 192.168.41.134 -l';done

[root@node1 ~]# fdisk -l    (on each of node1/2/3 the imported disk now appears)

Disk /dev/sdb: 10.7 GB, 10737418240 bytes

(2) Install cman, rgmanager, gfs2-utils, and lvm2-cluster

node4-side:

[root@node4 ~]# for I in {1..3};do scp /root/{cman*,rgmanager*,gfs2-utils*,lvm2-cluster*} node$I:/root/;ssh node$I 'yum -y --nogpgcheck localinstall /root/*.rpm';done

node1-side:

[root@node1 ~]# ccs_tool create tcluster

[root@node1 ~]# ccs_tool addfence meatware fence_manual

[root@node1 ~]# ccs_tool addnode -v 1 -n 1 -f meatware node1.magedu.com

[root@node1 ~]# ccs_tool addnode -v 1 -n 2 -f meatware node2.magedu.com

[root@node1 ~]# ccs_tool addnode -v 1 -n 3 -f meatware node3.magedu.com

[root@node1 ~]# service cman start    (on the very first start-up it is best to change the multicast address with the system-config-cluster tool, so this cluster does not share the default multicast address with another cluster; otherwise it may receive that cluster's sync traffic and fail to start. Alternatively, copy node1's /etc/cluster/cluster.conf to the other nodes before starting them.)
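If the default multicast address must be changed, a hedged sketch of what the override looks like in /etc/cluster/cluster.conf (the 239.192.x.x address is an illustrative choice; system-config-cluster writes an equivalent element):

```xml
<!-- Fragment only; it goes inside the existing <cluster> element. -->
<cman>
    <multicast addr="239.192.41.10"/>
</cman>
```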

[root@node1 ~]# clustat

node1.magedu.com                                                   1 Online, Local

node2.magedu.com                                                   2 Online

node3.magedu.com                                                   3 Online

node2-side:

[root@node2 ~]# service cman start

node3-side:

[root@node3 ~]# service cman start

(3) Configure cLVM:

node1-side:

[root@node1 ~]# rpm -ql lvm2-cluster

/etc/rc.d/init.d/clvmd

/usr/sbin/clvmd

/usr/sbin/lvmconf

[root@node1 ~]# vim /etc/lvm/lvm.conf    (this file must be changed on every node)

locking_type = 3    (change the value from 1 to 3: type 1 is the default local, file-based locking; type 3 uses the built-in clustered locking)
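The change can be scripted instead of edited by hand; the lvmconf tool shipped in lvm2-cluster (listed in the package contents above) does it with `lvmconf --enable-cluster`. A sed equivalent, demonstrated here on a scratch sample rather than the live /etc/lvm/lvm.conf:

```shell
# Demonstration on a sample file; on a real node target /etc/lvm/lvm.conf
# (or simply run: lvmconf --enable-cluster).
printf 'locking_type = 1\n' > /tmp/lvm.conf.sample
sed -i 's/^\( *locking_type *= *\)1/\13/' /tmp/lvm.conf.sample
grep locking_type /tmp/lvm.conf.sample   # -> locking_type = 3
```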

node4-side:

[root@node4 ~]# ha 'service clvmd start';done

node1-side:

[root@node1 ~]# pvcreate /dev/sdb

Writing physical volume data to disk "/dev/sdb"

Physical volume "/dev/sdb" successfully created

[root@node1 ~]# pvs    (the PV is visible on the other nodes too)

PV        VG    Fmt  Attr PSize  PFree
/dev/sdb        lvm2 a--  10.00G 10.00G

[root@node1 ~]# vgcreate clustervg /dev/sdb

Clustered volume group "clustervg" successfully created

[root@node1 ~]# vgs

VG        #PV #LV #SN Attr   VSize  VFree
clustervg   1   0   0 wz--nc 10.00G 10.00G

[root@node1 ~]# lvcreate -L 5G -n clusterlv clustervg

Logical volume "clusterlv" created

[root@node1 ~]# lvs

LV        VG        Attr   LSize Origin Snap% Move Log Copy% Convert
clusterlv clustervg -wi-a- 5.00G

(4) Configure gfs2:

node1-side:

[root@node1 ~]# rpm -ql gfs2-utils

/etc/rc.d/init.d/gfs2

/sbin/fsck.gfs2

/sbin/gfs2_convert

/sbin/gfs2_edit

/sbin/gfs2_fsck

/sbin/gfs2_grow

/sbin/gfs2_jadd

/sbin/gfs2_quota

/sbin/gfs2_tool

/sbin/mkfs.gfs2

/sbin/mount.gfs2

/sbin/umount.gfs2

[root@node1 ~]# mkfs.gfs2 -h

# mkfs.gfs2 OPTIONS DEVICE

options:

-b #    (block size; default 4096 bytes)

-D    (enable debugging output)

-j NUMBER    (the number of journals for mkfs.gfs2 to create: make one per node that will mount the filesystem; the default is 1)

-J #    (the size of each journal in megabytes; default 128 MB)

-p NAME    (the name of the locking protocol to use; there are two: lock_dlm is the usual choice, lock_nolock is for a single node, though with only one node a plain local filesystem would do and a cluster FS is unnecessary)

-t NAME    (the lock table name, in the format CLUSTERNAME:LOCKTABLENAME; CLUSTERNAME is the name of the cluster this node belongs to, and LOCKTABLENAME must be unique within that cluster. One cluster can host several cluster filesystems, and the lock table name identifies which node holds locks on which filesystem)

[root@node1 ~]# mkfs.gfs2 -j 3 -p lock_dlm -t tcluster:lktb1 /dev/clustervg/clusterlv    (formatting a cluster filesystem is quite slow)

This will destroy any data on /dev/clustervg/clusterlv.

Are you sure you want to proceed? [y/n] y

Device:                    /dev/clustervg/clusterlv

Blocksize:                 4096

Device Size                5.00 GB (1310720 blocks)

Filesystem Size:           5.00 GB (1310718 blocks)

Journals:                  3

Resource Groups:           20

Locking Protocol:          "lock_dlm"

Lock Table:                "tcluster:lktb1"

UUID:                     D8B10B8F-7EE2-A818-E392-0DF218411F2C
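As a sanity check, the sizes reported by mkfs.gfs2 above are self-consistent: 1310720 blocks of 4096 bytes is exactly 5 GiB (the filesystem proper is two blocks smaller):

```shell
# Verify the mkfs.gfs2 arithmetic: blocks * blocksize = 5 GiB.
blocks=1310720
blocksize=4096
echo $(( blocks * blocksize / 1024 / 1024 / 1024 ))   # -> 5
```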

[root@node1 ~]# mkdir /mydata

[root@node1 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata

node2-side:

[root@node2 ~]# mkdir /mydata

[root@node2 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata

[root@node2 ~]# ls /mydata

[root@node2 ~]# touch /mydata/b.txt

[root@node2 ~]# ls /mydata

b.txt

node3-side:

[root@node3 ~]# mkdir /mydata

[root@node3 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata

[root@node3 ~]# touch /mydata/c.txt

[root@node3 ~]# ls /mydata

b.txt c.txt

node1-side:

[root@node1 ~]# ls /mydata

b.txt c.txt

Note: every node's writes to the CFS are committed to disk immediately and announced to the other nodes, so a cluster filesystem carries a serious performance penalty.

(5) Debugging and tuning:

[root@node1 ~]# gfs2_tool -h    (interface to gfs2 ioctl/sysfs calls)

#gfs2_tool df|journals|gettune|freeze|unfreeze|getargs  MOUNT_POINT

#gfs2_tool list

[root@node1 ~]# gfs2_tool list    (list the currently mounted GFS2 filesystems)

253:2 tcluster:lktb1

[root@node1 ~]# gfs2_tool journals /mydata    (print out information about the journals in a mounted filesystem)

journal2 - 128MB

journal1 - 128MB

journal0 - 128MB

3 journal(s) found.

[root@node1 ~]# gfs2_tool df /mydata

/mydata:

SBlock proto = "lock_dlm"

SBlock table = "tcluster:lktb1"

SBondisk format = 1801

SBmultihost format = 1900

Block size = 4096

Journals = 3

Resource Groups = 20

Mounted lock proto = "lock_dlm"

Mounted lock table = "tcluster:lktb1"

Mounted host data = "jid=0:id=196609:first=1"

Journal number = 0

Lock module flags = 0

Local flocks = FALSE

Local caching = FALSE

Type           Total Blocks   Used Blocks    Free Blocks    use%

------------------------------------------------------------------------

data           1310564        99293          1211271        8%

inodes         1211294        23             1211271        0%

[root@node1 ~]# gfs2_tool freeze /mydata    (freeze (quiesce) a GFS2 cluster: any node's operations on the CFS will block until unfreeze)

[root@node1 ~]# gfs2_tool getargs /mydata

statfs_percent 0

data 2

suiddir 0

quota 0

posix_acl 0

upgrade 0

debug 0

localflocks 0

localcaching 0

ignore_local_fs 0

spectator 0

hostdata jid=0:id=196609:first=1

locktable

lockproto

[root@node1 ~]# gfs2_tool gettune /mydata    (print the current values of the tuning parameters of a running filesystem; to adjust one, use settune with the parameter and value right after the mount point, e.g. # gfs2_tool settune /mydata new_files_directio=1)

new_files_directio = 0

new_files_jdata = 0

quota_scale = 1.0000   (1, 1)

logd_secs = 1

recoverd_secs = 60

statfs_quantum = 30

stall_secs = 600

quota_cache_secs = 300

quota_simul_sync = 64

statfs_slow = 0

complain_secs = 10

max_readahead = 262144

quota_quantum = 60

quota_warn_period = 10

jindex_refresh_secs = 60

log_flush_secs = 60

incore_log_blocks = 1024

[root@node1 ~]# gfs2_jadd -j 1 /dev/clustervg/clusterlv    (add journals; 1 is the number of journals to add, an increment, not the total node count. If more nodes join the cluster, grow the journal count with gfs2_jadd)

[root@node1 ~]# lvextend -L 8G /dev/clustervg/clusterlv    (extend the size of the logical volume; think of it as extending the physical boundary)

Extending logical volume clusterlv to 8.00 GB

Logical volume clusterlv successfully resized

[root@node1 ~]# gfs2_grow /dev/clustervg/clusterlv    (expand the GFS2 filesystem; think of it as extending the logical boundary. This step must not be skipped)

FS: Mount Point: /mydata

FS: Device:      /dev/mapper/clustervg-clusterlv

FS: Size:        1310718 (0x13fffe)

FS: RG size:     65533 (0xfffd)

DEV: Size:       2097152 (0x200000)

The file system grew by 3072MB.

Error fallocating extra space : File too large

gfs2_grow complete.
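The numbers in the gfs2_grow output above are consistent with the lvextend: the device grew from the old 1310718-block (~5 GiB) filesystem to 2097152 blocks (8 GiB), a difference of about 3072 MB:

```shell
# Verify the growth reported by gfs2_grow (integer-truncated megabytes).
old_fs_blocks=1310718
dev_blocks=2097152
blocksize=4096
echo $(( (dev_blocks - old_fs_blocks) * blocksize / 1024 / 1024 ))   # -> 3072
```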

[root@node1 ~]# lvresize -L -3G /dev/clustervg/clusterlv    (reduce the size of the logical volume; note that gfs2 itself can only be grown, never shrunk, so shrinking an LV that carries a grown gfs2 filesystem risks data loss)

[root@node1 ~]# gfs2_grow /dev/clustervg/clusterlv

[root@node1 ~]# lvs

LV        VG        Attr   LSize Origin Snap% Move Log Copy% Convert
clusterlv clustervg -wi-ao 5.00G

The above are notes taken while studying the Magedu Linux operations course.

Posted: 2024-07-30 20:32:39
