RAID5 + LVM experiment

Objectives:

1. Combine the three disks sdb, sdc, and sdd into a RAID5 array

2. Create LVM on top of the array

3. Simulate a failure: mark sdc as faulty, remove it from the array, then re-add the disk and rebuild the RAID5 array

4. Extend the LVM capacity

Procedure

1. Partition the three disks
[root@RHEL5-1 ~]# fdisk /dev/sdb  //partition /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n  //add a new partition
Command action
   e   extended
   p   primary partition (1-4)
p  //create a primary partition
Partition number (1-4): 1  //partition number 1
First cylinder (1-130, default 1):  //set the partition size; accepting the defaults uses the whole disk
Using default value 1 
Last cylinder or +size or +sizeM or +sizeK (1-130, default 130): 
Using default value 130

Command (m for help): t   //change the partition's system ID
Selected partition 1
Hex code (type L to list codes): fd  //set the hex code to fd, i.e. Linux raid autodetect
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w  //write the table and exit
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks. 
[root@RHEL5-1 ~]# fdisk /dev/sdc  //partition /dev/sdc

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-130, default 1): 
Using default value 1 
Last cylinder or +size or +sizeM or +sizeK (1-130, default 130): 
Using default value 130

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd 
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w 
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks. 
[root@RHEL5-1 ~]# fdisk /dev/sdd  //partition /dev/sdd
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n 
Command action 
   e   extended 
   p   primary partition (1-4) 
p 
Partition number (1-4): 1 
First cylinder (1-130, default 1):
Using default value 1 
Last cylinder or +size or +sizeM or +sizeK (1-130, default 130): 
Using default value 130

Command (m for help): t 
Selected partition 1
Hex code (type L to list codes): fd 
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w 
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks. 
[root@RHEL5-1 ~]# fdisk -l  //list the partition tables of all disks

Disk /dev/sda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14         268     2048287+  83  Linux
/dev/sda3             269         395     1020127+  83  Linux
/dev/sda4             396        1044     5213092+   5  Extended
/dev/sda5             396         522     1020096   82  Linux swap / Solaris
/dev/sda6             523        1044     4192933+  83  Linux

Disk /dev/sdb: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         130     1044193+  fd  Linux raid autodetect

Disk /dev/sdc: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         130     1044193+  fd  Linux raid autodetect

Disk /dev/sdd: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1         130     1044193+  fd  Linux raid autodetect
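
Since the three disks are identical, the interactive fdisk dialogue only has to be done by hand once; the resulting layout can then be copied to the remaining disks with sfdisk. A minimal sketch, not part of the original session (it assumes the disks really have the same geometry):

sfdisk -d /dev/sdb > sdb.layout   # dump sdb's partition table to a file
sfdisk /dev/sdc < sdb.layout      # replay the same layout on sdc
sfdisk /dev/sdd < sdb.layout      # and on sdd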

Create the RAID5 array
[root@RHEL5-1 ~]# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
  //create the RAID5 array: /dev/md0 is the array device, --level=5 selects RAID5, --raid-devices=3 means three member disks
mdadm: array /dev/md0 started. 
[root@RHEL5-1 ~]# cat /proc/mdstat  //check the resync progress
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
      2088192 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [========>............]  recovery = 40.0% (419060/1044096) finish=1.4min speed=7423K/sec
unused devices: <none> 
[root@RHEL5-1 ~]# cat /proc/mdstat   //output like this means the resync has finished
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[2] sdc1[1] sdb1[0]
      2088192 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none> 
[root@RHEL5-1 ~]# tail /var/log/messages  //check the system log
Jun  1 11:34:10 RHEL5-1 kernel: md: syncing RAID array md0
Jun  1 11:34:13 RHEL5-1 kernel: md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
Jun  1 11:34:15 RHEL5-1 kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reconstruction.
Jun  1 11:34:19 RHEL5-1 kernel: md: using 128k window, over a total of 1044096 blocks.
Jun  1 11:36:01 RHEL5-1 kernel: md: md0: sync done. 
Jun  1 11:36:01 RHEL5-1 kernel: RAID5 conf printout: 
Jun  1 11:36:01 RHEL5-1 kernel:  --- rd:3 wd:3 fd:0 
Jun  1 11:36:01 RHEL5-1 kernel:  disk 0, o:1, dev:sdb1 
Jun  1 11:36:01 RHEL5-1 kernel:  disk 1, o:1, dev:sdc1 
Jun  1 11:36:01 RHEL5-1 kernel:  disk 2, o:1, dev:sdd1
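
Besides /proc/mdstat, the state of the array and of its members can also be queried with mdadm itself; for example (not shown in the original transcript):

mdadm --detail /dev/md0      # array level, size, and the state of each member
mdadm --examine /dev/sdb1    # the RAID superblock stored on one member partition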

Create the RAID5 configuration file
[root@RHEL5-1 ~]# echo device /dev/sdb1 /dev/sdc1 /dev/sdd1 > /etc/mdadm.conf
[root@RHEL5-1 ~]# mdadm --detail --scan >> /etc/mdadm.conf
[root@RHEL5-1 ~]# cat /etc/mdadm.conf
device /dev/sdb1 /dev/sdc1 /dev/sdd1
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=36f261b7:4899a54c:9edf36d1:9eb86529
[root@RHEL5-1 ~]# mdadm -S /dev/md0  //stop the array
mdadm: stopped /dev/md0
[root@RHEL5-1 ~]# mdadm -As /dev/md0  //assemble and start the array from the config file
mdadm: /dev/md0 has been started with 3 drives.
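
A common alternative for the first line of /etc/mdadm.conf is to let mdadm scan every partition instead of naming the members explicitly; a hedged sketch of that variant:

echo "DEVICE partitions" > /etc/mdadm.conf    # scan every partition listed in /proc/partitions
mdadm --detail --scan >> /etc/mdadm.conf      # append the ARRAY line as before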

2. Create LVM
[root@RHEL5-1 ~]# pvcreate /dev/md0  //create the physical volume (PV)
  Physical volume "/dev/md0" successfully created
[root@RHEL5-1 ~]# vgcreate lvm1 /dev/md0  //create the volume group (VG)
  Volume group "lvm1" successfully created
[root@RHEL5-1 ~]# vgdisplay  //display the VG
  --- Volume group ---
  VG Name               lvm1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.99 GB
  PE Size               4.00 MB
  Total PE              509
  Alloc PE / Size       0 / 0
  Free  PE / Size       509 / 1.99 GB
  VG UUID               h7d74U-S38z-rQrw-ecGG-ePlg-48b5-87sbC1
[root@RHEL5-1 ~]# lvcreate -L 500m -n web1 lvm1  //create a 500 MB logical volume named web1
  Logical volume "web1" created
[root@RHEL5-1 ~]# lvcreate -L 500m -n web2 lvm1  //create a 500 MB logical volume named web2
  Logical volume "web2" created
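
The two new logical volumes can be listed before filesystems are created on them; a quick check that was not part of the original run:

lvs lvm1                    # compact overview: web1 and web2, 500 MB each
lvdisplay /dev/lvm1/web1    # full details of a single volume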
[root@RHEL5-1 ~]# mke2fs -j /dev/lvm1/web1  //create an ext3 filesystem on web1
mke2fs 1.39 (29-May-2006) 
Filesystem label= 
OS type: Linux 
Block size=1024 (log=0) 
Fragment size=1024 (log=0) 
128016 inodes, 512000 blocks 
25600 blocks (5.00%) reserved for the super user 
First data block=1 
Maximum filesystem blocks=67633152 
63 block groups 
8192 blocks per group, 8192 fragments per group 
2032 inodes per group 
Superblock backups stored on blocks: 
        8193, 24577, 40961, 57345, 73729,204801, 221185, 401409

Writing inode tables: done
Creating journal (8192 blocks): done 
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 26 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override. 
[root@RHEL5-1 ~]# mke2fs -j /dev/lvm1/web2  //create an ext3 filesystem on web2
mke2fs 1.39 (29-May-2006)
Filesystem label= 
OS type: Linux 
Block size=1024 (log=0) 
Fragment size=1024 (log=0) 
128016 inodes, 512000 blocks 
25600 blocks (5.00%) reserved for the super user 
First data block=1 
Maximum filesystem blocks=67633152 
63 block groups 
8192 blocks per group, 8192 fragments per group 
2032 inodes per group 
Superblock backups stored on blocks: 
        8193, 24577, 40961, 57345, 73729,204801, 221185, 401409

Writing inode tables: done
Creating journal (8192 blocks): done 
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 39 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override. 
[root@RHEL5-1 ~]# mkdir /web1
[root@RHEL5-1 ~]# mkdir /web2
[root@RHEL5-1 ~]# mount /dev/lvm1/web1 /web1    //mount the volumes
[root@RHEL5-1 ~]# mount /dev/lvm1/web2 /web2
[root@RHEL5-1 ~]# vi /etc/fstab  //edit /etc/fstab so the volumes are mounted automatically at boot
LABEL=/                 /                       ext3    defaults        1 1
LABEL=/var              /var                    ext3    defaults        1 2
LABEL=/tmp              /tmp                    ext3    defaults        1 2
LABEL=/boot             /boot                   ext3    defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
LABEL=SWAP-sda5         swap                    swap    defaults        0 0
/dev/lvm1/web1          /web1                   ext3    defaults        0 0
/dev/lvm1/web2          /web2                   ext3    defaults        0 0
[root@RHEL5-1 ~]# reboot
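
Rebooting is not strictly required to test the new fstab lines; they can be exercised in place first. A quick check, not part of the original session (the volumes were already mounted by hand above):

umount /web1 /web2    # drop the manual mounts
mount -a              # mount everything in /etc/fstab that is not yet mounted
df -h /web1 /web2     # confirm both logical volumes came back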

Check the capacity of /web1
[root@RHEL5-1 ~]# df -h /web1
Filesystem           Size  Used Avail Use% Mounted on 
/dev/mapper/lvm1-web1 
                     485M   11M  449M   3% /web1 
[root@RHEL5-1 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               lvm1
  PV Size               1.99 GB / not usable 3.25 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              509
  Free PE               259
  Allocated PE          250
  PV UUID               HSyAfx-Qxdv-b6id-01sZ-eRVC-HWAj-By3ctA
3. Simulate a failure
[root@RHEL5-1 ~]# mdadm /dev/md0 -f /dev/sdc1  //mark /dev/sdc1 as faulty
mdadm: set /dev/sdc1 faulty in /dev/md0
[root@RHEL5-1 ~]# more /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[2] sdc1[3](F) sdb1[0]     //(F) marks the faulty disk
      2088192 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
unused devices: <none> 
[root@RHEL5-1 ~]# mdadm /dev/md0 -r /dev/sdc1   //remove the faulty disk from the array
mdadm: hot removed /dev/sdc1
[root@RHEL5-1 ~]# more /proc/mdstat    //check the array status
Personalities : [raid6] [raid5] [raid4] 
md0 : active raid5 sdd1[2] sdb1[0] 
      2088192 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
unused devices: <none> 
[root@RHEL5-1 ~]# pvdisplay /dev/md0   //check the PV: its capacity has not shrunk
  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               lvm1
  PV Size               1.99 GB / not usable 3.25 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              509
  Free PE               259
  Allocated PE          250
  PV UUID               HSyAfx-Qxdv-b6id-01sZ-eRVC-HWAj-By3ctA
[root@RHEL5-1 ~]# fdisk /dev/sdc   //re-partition sdc so it can be added back into the array

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
Partition 1 is already defined.  Delete it before re-adding it.

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd

Command (m for help): w 
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks. 
[root@RHEL5-1 ~]# mdadm /dev/md0 -a /dev/sdc1   //add the disk back into the array
mdadm: re-added /dev/sdc1
[root@RHEL5-1 ~]# more /proc/mdstat    //the rebuild has started
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc1[1] sdd1[2] sdb1[0]
      2088192 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
      [==>..................]  recovery = 12.2% (128796/1044096) finish=2.1min speed=7155K/sec
unused devices: <none> 
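
The rebuild can be followed continuously instead of re-running more by hand; for example (not in the original session):

watch -n 5 cat /proc/mdstat                   # refresh the status every 5 seconds
mdadm --detail /dev/md0 | grep -i rebuild     # shows the "Rebuild Status" percentage while syncing
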
[root@RHEL5-1 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               lvm1
  PV Size               1.99 GB / not usable 3.25 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              509
  Free PE               259
  Allocated PE          250
  PV UUID               HSyAfx-Qxdv-b6id-01sZ-eRVC-HWAj-By3ctA
[root@RHEL5-1 ~]# vgdisplay lvm1
  --- Volume group ---
  VG Name               lvm1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.99 GB
  PE Size               4.00 MB
  Total PE              509
  Alloc PE / Size       250 / 1000.00 MB
  Free  PE / Size       259 / 1.01 GB
  VG UUID               h7d74U-S38z-rQrw-ecGG-ePlg-48b5-87sbC1
[root@RHEL5-1 ~]# df -h /web1
Filesystem           Size  Used Avail Use% Mounted on 
/dev/mapper/lvm1-web1 
                     485M   11M  449M   3% /web1

4. Extend the LVM capacity
[root@RHEL5-1 ~]# lvextend -L +50M /dev/lvm1/web1   //grow web1 by 50 MB
  Rounding up size to full physical extent 52.00 MB 
  Extending logical volume web1 to 552.00 MB 
  Logical volume web1 successfully resized 
[root@RHEL5-1 ~]# resize2fs /dev/lvm1/web1   //grow the ext3 filesystem to match the new LV size
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/lvm1/web1 is mounted on /web1; on-line resizing required
Performing an on-line resize of /dev/lvm1/web1 to 565248 (1k) blocks. 
The filesystem on /dev/lvm1/web1 is now 565248 blocks long.

[root@RHEL5-1 ~]# df -h /web1   //verify the new size
Filesystem           Size  Used Avail Use% Mounted on 
/dev/mapper/lvm1-web1 
                     535M   11M  498M   3% /web1
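
lvextend can only hand out extents that are still free inside the volume group. If lvm1 itself ever fills up, the VG can be grown by adding another physical volume; a hedged sketch using a hypothetical second array /dev/md1 (not part of this experiment):

pvcreate /dev/md1         # initialize the new device as a PV (hypothetical device name)
vgextend lvm1 /dev/md1    # add it to the existing volume group
vgdisplay lvm1            # "Free  PE / Size" should now be larger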
