POC environment:
Add three disks to the lab machine.
I. Creating the RAID5 array:
[root@localhost ~]# uname -a
Linux localhost.localdomain 2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29 11:47:41 EST 2013 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)
1. Add the disks and partition them; one partition per disk is enough, and the partition type must be fd (Linux raid autodetect).
[root@localhost ~]# fdisk /dev/sdb
(create one primary partition spanning the disk, then change its type)
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Command (m for help): p
Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            1        6527    52428096   fd  Linux raid autodetect
[root@localhost ~]# fdisk /dev/sdc
[root@localhost ~]# fdisk /dev/sdd
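The three interactive fdisk sessions above can also be scripted with sfdisk. A sketch, under the assumption that /dev/sdb, /dev/sdc and /dev/sdd are the blank POC disks and each should carry one whole-disk partition of type fd; since the commands destroy partition tables, the script only prints them unless RUN=1 is set:

```shell
#!/bin/sh
# Sketch: one whole-disk partition of type fd (Linux raid autodetect)
# per member disk, built non-interactively with sfdisk.
# ',,fd' means: default start, default size (whole disk), type fd.
part_cmd() {
    printf "echo ',,fd' | sfdisk %s" "$1"
}
for disk in /dev/sdb /dev/sdc /dev/sdd; do
    if [ "${RUN:-0}" = 1 ]; then
        echo ',,fd' | sfdisk "$disk"   # destructive: rewrites the table
    else
        part_cmd "$disk"; echo         # dry run: just show the command
    fi
done
```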
2. Create the RAID5 array:
[root@localhost ~]# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
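RAID5 keeps one member's worth of space for parity, so usable capacity is (n−1) × member size. A quick arithmetic check with the 52428096-block partitions created above:

```shell
#!/bin/sh
# Usable RAID5 capacity in KiB: (members - 1) * per-member KiB.
raid5_capacity_kib() {
    members=$1
    member_kib=$2
    echo $(( (members - 1) * member_kib ))
}
raid5_capacity_kib 3 52428096
# prints 104856192; /proc/mdstat later reports the slightly smaller
# 104790016 because the version 1.2 metadata takes space on each member
```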
3. Format the RAID5 volume:
[root@localhost ~]# mkfs.ext4 /dev/md0
4. View the array information and write it to the config file:
[root@localhost ~]# mdadm --detail --scan
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=7c870ec1:e16dd689:d786b14a:2f48e7b4
[root@localhost ~]# mdadm --detail --scan >> /etc/mdadm.conf
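Appending with `>>` works the first time, but running it again duplicates the ARRAY lines. A hedged sketch of an idempotent variant (the file contents and paths here are illustrative, not the live system's):

```shell
#!/bin/sh
# Sketch: refresh the ARRAY lines in an mdadm.conf-style file without
# duplicating them. $2 stands in for the output of `mdadm --detail --scan`.
update_conf() {
    conf=$1
    scan=$2
    # keep every non-ARRAY line, then append the fresh scan results
    { grep -v '^ARRAY ' "$conf" 2>/dev/null; printf '%s\n' "$scan"; } > "$conf.new"
    mv "$conf.new" "$conf"
}
# demonstration on a temp file: running it twice leaves a single ARRAY line
tmp=$(mktemp)
echo 'ARRAY /dev/md0 metadata=1.2 UUID=old' > "$tmp"
update_conf "$tmp" 'ARRAY /dev/md0 metadata=1.2 UUID=new'
update_conf "$tmp" 'ARRAY /dev/md0 metadata=1.2 UUID=new'
cat "$tmp"
rm -f "$tmp"
```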
II. Deploying LVM on top of the RAID5 array:
1. Turn the RAID device into a PV:
[root@localhost data]# pvcreate /dev/md0
2. Add the PV to a VG named vg1:
[root@localhost data]# vgcreate vg1 /dev/md0
3. Carve a 20 GB LV named lv1 out of the VG:
[root@localhost data]# lvcreate -L 20G -n lv1 vg1
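The size can equivalently be given in extents with `-l` instead of `-L 20G`. A quick check, assuming LVM's default 4 MiB physical-extent size (this VG was created with defaults):

```shell
#!/bin/sh
# GiB -> number of LVM extents, assuming a 4 MiB physical-extent size
# unless a different PE size (in MiB) is passed as the second argument.
gib_to_extents() {
    gib=$1
    pe_mib=${2:-4}
    echo $(( gib * 1024 / pe_mib ))
}
gib_to_extents 20   # prints 5120, i.e. `lvcreate -l 5120 -n lv1 vg1`
```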
4. Format the lv1 volume:
[root@localhost data]# mkfs.ext4 /dev/vg1/lv1
[root@localhost ~]# mkdir /data
[root@localhost ~]# mount /dev/vg1/lv1 /data
[root@localhost ~]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root
16102344 966144 14318232 7% /
tmpfs 247208 0 247208 0% /dev/shm
/dev/sda1 495844 37615 432629 8% /boot
/dev/md0 103144736 192116 97713120 1% /data
5. Make the mount automatic at boot by editing /etc/fstab:
[root@localhost ~]# vi /etc/fstab
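The edit itself is not shown in the original; a typical /etc/fstab line for this layout would be (assuming ext4 defaults and no fsck ordering for the data volume):

```
/dev/vg1/lv1    /data    ext4    defaults    0 0
```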
6. Check the mount:
[root@localhost ~]# mount
/dev/vg1/lv1 on /data type ext4 (rw)
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
104790016 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
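In the mdstat output above, `[3/3] [UUU]` means all three members are active; a failed or missing member shows up as `_`. A small sketch that classifies such a status field:

```shell
#!/bin/sh
# Sketch: classify the member-status field from /proc/mdstat.
# '[UUU]' = every member up; any '_' marks a failed/missing member.
mdstat_state() {
    case $1 in
        *_*) echo degraded ;;
        *)   echo healthy ;;
    esac
}
mdstat_state '[UUU]'    # healthy
mdstat_state '[_UU]'    # degraded
```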
Create three test files, qq1, qq2 and qq3, under /data.
III. Simulating a disk failure:
1. Mark /dev/sdb1 as failed within the RAID array:
[root@localhost ~]# mdadm /dev/md0 --fail /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0
Remove the failed disk:
[root@localhost ~]# mdadm /dev/md0 --remove /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md0
2. The files under /data still exist, but the filesystem has been remounted read-only, so no new files can be created:
[root@localhost data]# ll
total 16
drwx------ 2 root root 16384 Aug 15 03:33 lost+found
-rw-r--r-- 1 root root 0 Aug 15 03:39 qq1
-rw-r--r-- 1 root root 0 Aug 15 03:39 qq2
-rw-r--r-- 1 root root 0 Aug 15 03:39 qq3
[root@localhost data]# touch qq4
touch: cannot touch `qq4': Read-only file system
Reboot:
[root@localhost ~]# reboot
IV. Recovery
1. After the system has booted, give the replacement disk /dev/sdb the same partition layout as /dev/sdc (there is no need to put a filesystem on sdb1; the array rebuild will repopulate it):
[root@localhost ~]# sfdisk -d /dev/sdc | sfdisk /dev/sdb
Disk /dev/sdb: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x2482a65f
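To confirm the clone matched, the two `sfdisk -d` dumps can be compared after masking the device names, which otherwise always differ. A sketch, demonstrated here on abbreviated sample dump lines rather than real sfdisk output:

```shell
#!/bin/sh
# Sketch: compare two `sfdisk -d` dumps while ignoring device names,
# so the /dev/sdb and /dev/sdc layouts can be checked for equality.
same_layout() {
    a=$(printf '%s\n' "$1" | sed 's,/dev/sd[a-z],DISK,g')
    b=$(printf '%s\n' "$2" | sed 's,/dev/sd[a-z],DISK,g')
    [ "$a" = "$b" ] && echo match || echo differ
}
# abbreviated sample dump lines, not real sfdisk output
same_layout '/dev/sdb1 : start=63, size=104856192, Id=fd' \
            '/dev/sdc1 : start=63, size=104856192, Id=fd'   # match
```

On the live system the inputs would be `"$(sfdisk -d /dev/sdb)"` and `"$(sfdisk -d /dev/sdc)"`.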
2. Add the /dev/sdb partition back into the RAID5 array:
[root@localhost ~]# mdadm --manage /dev/md0 --add /dev/sdb1
3. Check the result (the array is RAID5 and degraded until the rebuild of sdb1 finishes):
[root@localhost data]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb1[4] sdc1[1] sdd1[3]
104790016 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
[=>...................] recovery = 7.0% (3667712/52395008) finish=4.0min speed=200174K/sec
unused devices: <none>
The rebuild can be followed in the system log:
md: bind<sdb1>
md: recovery of RAID array md0
md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
md: using 128k window, over a total of 52395008k.
md: md0: recovery done.
4. Check that no files were lost.
Remount /dev/vg1/lv1:
[root@localhost data]# mount /dev/vg1/lv1 /data
[root@localhost data]# ll
total 16
drwx------ 2 root root 16384 Aug 15 03:33 lost+found
-rw-r--r-- 1 root root 0 Aug 15 03:39 qq1
-rw-r--r-- 1 root root 0 Aug 15 03:39 qq2
-rw-r--r-- 1 root root 0 Aug 15 03:39 qq3
RHEL 6.4: deploying RAID5 + LVM (source: bubuko.com)