1. Basic LVM creation and management
2. LVM snapshots
3. Combining LVM with RAID: LVM on top of RAID
Creating an LVM:
Description:
LVM (Logical Volume Manager) is a mechanism for managing disk partitions under Linux. It is a logical layer built on top of hard disks and partitions that makes partition management much more flexible.
With LVM, disk management becomes much simpler: several disks of different sizes and types can be combined into a single volume group, and logical volumes can then be created on that group at will. This avoids the headache of juggling many disks of different specifications, and growing or shrinking a logical volume is no longer limited by the size of any single disk. In addition, LVM's snapshot feature provides a convenient and reliable way to back up data.
The LVM creation process (see figure):
1. Create partitions on the physical devices. (Within each PV, space is later allocated in units called PEs, physical extents.)
2. Use fdisk to set the partition type to 8e (Linux LVM), then initialize each partition as a PV (Physical Volume).
3. Use vgcreate to add several PVs to a VG (Volume Group); the VG then behaves like one large disk.
4. Carve LVs (Logical Volumes) out of the VG; once formatted, a logical volume can be mounted and used.
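A condensed, end-to-end sketch of these four steps, assuming three partitions /dev/sd{b,c,d}1 whose type has already been set to 8e with fdisk (the same devices and mount point used in the detailed walkthrough below):
## 1-2. Initialize the 8e partitions as PVs
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1
## 3. Group the PVs into one volume group
vgcreate myvg /dev/sdb1 /dev/sdc1 /dev/sdd1
## 4. Carve out a logical volume, format it, and mount it
lvcreate -L 256M -n data myvg
mkfs -t ext3 /dev/myvg/data
mkdir -p /data
mount /dev/myvg/data /data/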
Command-line tools available at each stage (see the man pages for detailed options):
Stage | Display info | Create   | Remove   | Grow     | Shrink
PV    | pvdisplay    | pvcreate | pvremove | -----    | -----
VG    | vgdisplay    | vgcreate | vgremove | vgextend | vgreduce
LV    | lvdisplay    | lvcreate | lvremove | lvextend | lvreduce
Creation example:
1. Create the PVs
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1                1         609     4891761   8e  Linux LVM
/dev/sdc1                1         609     4891761   8e  Linux LVM
/dev/sdd1                1         609     4891761   8e  Linux LVM

[[email protected] ~]# pvcreate /dev/sd[bcd]1
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdc1" successfully created
  Physical volume "/dev/sdd1" successfully created
Check the PV information:
[[email protected] ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               vol0
  PV Size               40.00 GB / not usable 2.61 MB
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              1280
  Free PE               281
  Allocated PE          999
  PV UUID               GxfWc2-hzKw-tP1E-8cSU-kkqY-z15Z-11Gacd

  "/dev/sdb1" is a new physical volume of "4.67 GB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name
  PV Size               4.67 GB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               1rrc9i-05Om-Wzd6-dM9G-bo08-2oJj-WjRjLg

  "/dev/sdc1" is a new physical volume of "4.67 GB"
  --- NEW Physical volume ---
  PV Name               /dev/sdc1
  VG Name
  PV Size               4.67 GB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               RCGft6-l7tj-vuBX-bnds-xbLn-PE32-mCSeE8

  "/dev/sdd1" is a new physical volume of "4.67 GB"
  --- NEW Physical volume ---
  PV Name               /dev/sdd1
  VG Name
  PV Size               4.67 GB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               SLiAAp-43zX-6BC8-wVzP-6vQu-uyYF-ugdWbD
2. Create the VG
[[email protected] ~]# vgcreate myvg -s 16M /dev/sd[bcd]1
  Volume group "myvg" successfully created
## -s sets the PE (physical extent) size at creation time; the default is 4MB.

Check the VG status on the system:
[[email protected] ~]# vgdisplay
  --- Volume group ---
  VG Name               myvg
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               13.97 GB          ## total VG size
  PE Size               16.00 MB          ## PE size as specified with -s (the default is 4MB)
  Total PE              894               ## total number of PEs
  Alloc PE / Size       0 / 0             ## PEs already allocated
  Free PE / Size        894 / 13.97 GB    ## free PEs and the space they represent
  VG UUID               RJC6Z1-N2Jx-2Zjz-26m6-LLoB-PcWQ-FXx3lV
3. Carve LVs out of the VG:
[[email protected] ~]# lvcreate -L 256M -n data myvg
  Logical volume "data" created
## -L specifies the LV size
## -n specifies the LV name

[[email protected] ~]# lvcreate -l 20 -n test myvg
  Logical volume "test" created
## -l specifies the LV size as a number of PEs; here that is 20 * 16MB = 320MB

[[email protected] ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/myvg/data
  VG Name                myvg
  LV UUID                d0SYy1-DQ9T-Sj0R-uPeD-xn0z-raDU-g9lLeK
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                256.00 MB
  Current LE             16
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

  --- Logical volume ---
  LV Name                /dev/myvg/test
  VG Name                myvg
  LV UUID                os4UiH-5QAG-HqOJ-DoNT-mVeT-oYyy-s1xArV
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                320.00 MB
  Current LE             20
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3
4. Format the LV and mount it for use:
[[email protected] ~]# mkfs -t ext3 -b 2048 -L DATA /dev/myvg/data
[[email protected] ~]# mount /dev/myvg/data /data/
## Copy some files in, to verify the later online grow and shrink:
[[email protected] data]# cp /etc/* /data/
5. Grow the LV (2.6 kernel + ext3 filesystem):
The lvextend command grows the logical volume.
resize2fs grows the filesystem, online or offline.
(My system runs kernel 2.6.18 with ext3; according to the man pages, only a 2.6 kernel with ext3 currently supports growing the filesystem online.)
First, grow the logical volume:
[[email protected] data]# lvextend -L 500M /dev/myvg/data
  Rounding up size to full physical extent 512.00 MB
  Extending logical volume data to 512.00 MB
  Logical volume data successfully resized
## -L 500M  : grow to 500MB; the size is rounded up to a whole number of physical extents
## -L +500M : grow by 500MB on top of the current size
## -l [+]50 : same idea, but the size is given as a number of PEs
Then grow the filesystem itself:
[[email protected] data]# resize2fs -p /dev/myvg/data
  resize2fs 1.39 (29-May-2006)
  Filesystem at /dev/myvg/data is mounted on /data; on-line resizing required
  Performing an on-line resize of /dev/myvg/data to 262144 (2k) blocks.
  The filesystem on /dev/myvg/data is now 262144 blocks long.
## As the output shows, resize2fs detected the mounted filesystem and resized it online.

Check the status:
[[email protected] ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/myvg/data
  VG Name                myvg
  LV UUID                d0SYy1-DQ9T-Sj0R-uPeD-xn0z-raDU-g9lLeK
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                512.00 MB
  Current LE             32
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
## The files in the mount point should still be intact.
6. Shrink the LV:
Shrinking is a risky operation. It must be done offline, and the filesystem must be force-checked first to avoid corrupting data during the shrink.
[[email protected] ~]# umount /dev/myvg/data
[[email protected] ~]# e2fsck -f /dev/myvg/data
  e2fsck 1.39 (29-May-2006)
  Pass 1: Checking inodes, blocks, and sizes
  Pass 2: Checking directory structure
  Pass 3: Checking directory connectivity
  Pass 4: Checking reference counts
  Pass 5: Checking group summary information
  DATA: 13/286720 files (7.7% non-contiguous), 23141/573440 blocks
Shrink the filesystem first:
[[email protected] ~]# resize2fs /dev/myvg/data 256M
  resize2fs 1.39 (29-May-2006)
  Resizing the filesystem on /dev/myvg/data to 131072 (2k) blocks.
  The filesystem on /dev/myvg/data is now 131072 blocks long.
Then shrink the logical volume:
[[email protected] ~]# lvreduce -L 256M /dev/myvg/data
  WARNING: Reducing active logical volume to 256.00 MB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce data? [y/n]: y
  Reducing logical volume data to 256.00 MB
  Logical volume data successfully resized
Check the status and remount:
[[email protected] ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/myvg/data
  VG Name                myvg
  LV UUID                d0SYy1-DQ9T-Sj0R-uPeD-xn0z-raDU-g9lLeK
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                256.00 MB
  Current LE             16
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

Remount:
[[email protected] ~]# mount /dev/myvg/data /data/
[[email protected] data]# df /data/
Filesystem            1K-blocks      Used Available Use% Mounted on
/dev/mapper/myvg-data    253900      9508    236528   4% /data
7. Extend the VG by adding a PV:
[[email protected] data]# pvcreate /dev/sdc2
  Physical volume "/dev/sdc2" successfully created
[[email protected] data]# vgextend myvg /dev/sdc2
  Volume group "myvg" successfully extended
[[email protected] data]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdc2
  VG Name               myvg
  PV Size               4.67 GB / not usable 9.14 MB
  Allocatable           yes
  PE Size (KByte)       16384
  Total PE              298
  Free PE               298
  Allocated PE          0
  PV UUID               hrveTu-2JUH-aSgT-GKAJ-VVv2-Hit0-PyoOOr
8. Shrink the VG by taking a PV out of it:
Before removing a PV, migrate any data on it to the other PVs, then remove it from the VG.
Migrate the data off the PV:
[[email protected] data]# pvmove /dev/sdc2
  No data to move for myvg
## If sdc2 held data, the move would take a while and pvmove would print a warning and ask for confirmation before proceeding.
## Here, as shown, the partition holds no data.
Remove the PV from the VG:
[[email protected] data]# vgreduce myvg /dev/sdc2
  Removed "/dev/sdc2" from volume group "myvg"
## If a disk in the LVM starts behaving oddly and you suspect it is failing, this is the way to migrate
## the data off the suspect disk and then remove it.
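If the partition taken out of the VG is no longer needed for LVM at all, its LVM label can also be wiped; a minimal sketch, assuming /dev/sdc2 as above:
## Wipe the LVM metadata label from the partition (it no longer belongs to any VG)
pvremove /dev/sdc2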
LVM snapshots:
Description:
On a very busy server, backing up a large amount of data normally means stopping many services; otherwise the backed-up data can easily end up in an inconsistent state and be useless. This is where snapshots come in.
How it works:
A logical volume snapshot is essentially just another path for accessing the original data. The snapshot preserves the state of the data at the moment it was taken: after the snapshot is created, any modification to the original data first copies the old block into the snapshot area, so reading through the snapshot always shows the data exactly as it was at snapshot time. The snapshot has a limited size, so before creating one you should estimate how much data will change during its lifetime; if the amount of change exceeds the snapshot size, the snapshot overflows and becomes invalid.
Snapshots are not meant to be permanent; if the volume is deactivated or the system is rebooted they may be lost and have to be recreated.
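Because an overflowing snapshot becomes invalid, it is worth checking how full it is while it is in use. A minimal sketch, assuming the myvg/datasnap names used in the example below (lvs is a standard LVM2 reporting command; its Data% column shows snapshot usage):
## Show all LVs in myvg; for a snapshot LV the Data% column is the
## percentage of the snapshot space already consumed by copied blocks
lvs myvg
## Or inspect the snapshot directly; lvdisplay reports "Allocated to snapshot"
lvdisplay /dev/myvg/datasnap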
Create a snapshot:
[[email protected] ~]# lvcreate -L 500M -p r -s -n datasnap /dev/myvg/data
  Rounding up size to full physical extent 512.00 MB
  Logical volume "datasnap" created
## -L / -l  set the snapshot size
## -p : permission; sets the access mode of the snapshot, rw by default; r means read-only
## -s  tells lvcreate to create a snapshot
## -n  names the snapshot

Mount the snapshot somewhere:
[[email protected] ~]# mount /dev/myvg/datasnap /backup/
  mount: block device /dev/myvg/datasnap is write-protected, mounting read-only

Then back up the files from the snapshot, and remove the snapshot promptly once the backup is done:
[[email protected] ~]# ls /backup/
  inittab lost+found
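A minimal sketch of the backup-and-cleanup step itself; the archive path /root/data-backup.tar.gz is only an example:
## Archive the snapshot contents (a frozen, consistent view of /data)
tar czf /root/data-backup.tar.gz -C /backup .
## Unmount and remove the snapshot as soon as the backup is done,
## so it stops accumulating copied-on-write blocks
umount /backup
lvremove -f /dev/myvg/datasnap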
Building LVM on top of RAID:
Description:
With LVM on top of RAID, the RAID layer underneath provides data redundancy or improved read/write performance, while the LVM layer on top provides flexible disk management.
As shown in the figure:
Procedure:
1. Create partitions of type Linux raid autodetect:
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1                1          62      497983+  fd  Linux raid autodetect
/dev/sdc1                1          62      497983+  fd  Linux raid autodetect
/dev/sdd1                1          62      497983+  fd  Linux raid autodetect
/dev/sde1                1          62      497983+  fd  Linux raid autodetect
2. Create the RAID-5 array:
[[email protected] ~]# mdadm -C /dev/md0 -a yes -l 5 -n 3 -x 1 /dev/sd{b,c,d,e}1
  mdadm: array /dev/md0 started.
# The RAID-5 array has one spare disk and three active disks.
# Check the status: it matches what was requested, and the initial build is in progress:
[[email protected] ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[4] sde1[3](S) sdc1[1] sdb1[0]
      995712 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [=======>.............]  recovery = 36.9% (184580/497856) finish=0.2min speed=18458K/sec
unused devices: <none>
# Show the detailed RAID information:
[[email protected] ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Wed Apr  6 04:27:46 2011
     Raid Level : raid5
     Array Size : 995712 (972.54 MiB 1019.61 MB)
  Used Dev Size : 497856 (486.27 MiB 509.80 MB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Wed Apr  6 04:36:08 2011
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 64K
           UUID : 5663fc8e:68f539ee:3a4040d6:ccdac92a
         Events : 0.4

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        -      spare         /dev/sde1
3. Create the RAID configuration file:
[[email protected] ~]# mdadm -Ds > /etc/mdadm.conf
[[email protected] ~]# cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=3 metadata=0.90 spares=1 UUID=5663fc8e:68f539ee:3a4040d6:ccdac92a
4. Build the LVM on top of the newly created RAID device:
# Turn md0 into a PV (physical volume):
[[email protected] ~]# pvcreate /dev/md0
  Physical volume "/dev/md0" successfully created
# Check the physical volume:
[[email protected] ~]# pvdisplay
  "/dev/md0" is a new physical volume of "972.38 MB"
  --- NEW Physical volume ---
  PV Name               /dev/md0
  VG Name                             # empty: this PV does not belong to any VG yet
  PV Size               972.38 MB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               PUb3uj-ObES-TXsM-2oMS-exps-LPXP-jD218u

# Create the VG:
[[email protected] ~]# vgcreate myvg -s 32M /dev/md0
  Volume group "myvg" successfully created
# Check the VG status:
[[email protected] ~]# vgdisplay
  --- Volume group ---
  VG Name               myvg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               960.00 MB
  PE Size               32.00 MB
  Total PE              30
  Alloc PE / Size       0 / 0
  Free PE / Size        30 / 960.00 MB
  VG UUID               9NKEWK-7jrv-zC2x-59xg-10Ai-qA1L-cfHXDj

# Create an LV:
[[email protected] ~]# lvcreate -L 500M -n mydata myvg
  Rounding up size to full physical extent 512.00 MB
  Logical volume "mydata" created
# Check the LV status:
[[email protected] ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/myvg/mydata
  VG Name                myvg
  LV UUID                KQQUJq-FU2C-E7lI-QJUp-xeVd-3OpA-TMgI1D
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                512.00 MB
  Current LE             16
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     512
  Block device           253:2
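The walkthrough stops at creating the LV; a minimal sketch of the remaining format-and-mount step, assuming ext3 and a /mydata mount point chosen here only for illustration:
## Format the new logical volume and mount it; from here on it is managed
## exactly like the plain LVM example above (grow, shrink, snapshot)
mkfs -t ext3 -L MYDATA /dev/myvg/mydata
mkdir -p /mydata
mount /dev/myvg/mydata /mydata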