Implementing software RAID on CentOS

mdadm is a modal tool: the mode determines what it does.
Syntax: mdadm [mode] <raiddevice> [options] <component-devices>
Supported RAID levels: LINEAR, RAID0, RAID1, RAID4, RAID5, RAID6, RAID10
Main modes:
Create: -C (-D displays detailed information about an array)
Assemble: -A
Monitor: -F
Manage: -f, -r, -a
<raiddevice>: /dev/md[0..9]
<component-devices>: any block devices
Options for -C (create mode):
-n num use num block devices to build the array
-l num specify the RAID level to create
-a {yes|no} automatically create the device file for the RAID device
-c specify the chunk size
-x num specify the number of spare devices
Example: create a RAID5 array with 10G of usable space
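Before partitioning, it helps to see where the 10G figure comes from: RAID5 keeps one member's worth of space for parity, so three active 5G members yield 10G of usable space. A quick sketch of the arithmetic:

```shell
# RAID5 usable capacity = (active members - 1) * member size
members=3        # active members (the -n 3 used below)
member_gb=5      # each partition is 5G
usable_gb=$(( (members - 1) * member_gb ))
echo "${usable_gb}G usable"    # prints "10G usable"
```

The spare (-x 1) contributes nothing to capacity; it only stands by for rebuilds.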

[root@ads3 ~]# fdisk /dev/sda ## create four partitions on /dev/sda for the software RAID

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): m
Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)

Command (m for help): l

0 Empty 24 NEC DOS 81 Minix / old Lin bf Solaris
1 FAT12 39 Plan 9 82 Linux swap / So c1 DRDOS/sec (FAT-
2 XENIX root 3c PartitionMagic 83 Linux c4 DRDOS/sec (FAT-
3 XENIX usr 40 Venix 80286 84 OS/2 hidden C: c6 DRDOS/sec (FAT-
4 FAT16 <32M 41 PPC PReP Boot 85 Linux extended c7 Syrinx
5 Extended 42 SFS 86 NTFS volume set da Non-FS data
6 FAT16 4d QNX4.x 87 NTFS volume set db CP/M / CTOS / .
7 HPFS/NTFS 4e QNX4.x 2nd part 88 Linux plaintext de Dell Utility
8 AIX 4f QNX4.x 3rd part 8e Linux LVM df BootIt
9 AIX bootable 50 OnTrack DM 93 Amoeba e1 DOS access
a OS/2 Boot Manag 51 OnTrack DM6 Aux 94 Amoeba BBT e3 DOS R/O
b W95 FAT32 52 CP/M 9f BSD/OS e4 SpeedStor
c W95 FAT32 (LBA) 53 OnTrack DM6 Aux a0 IBM Thinkpad hi eb BeOS fs
e W95 FAT16 (LBA) 54 OnTrackDM6 a5 FreeBSD ee GPT
f W95 Ext'd (LBA) 55 EZ-Drive a6 OpenBSD ef EFI (FAT-12/16/
10 OPUS 56 Golden Bow a7 NeXTSTEP f0 Linux/PA-RISC b
11 Hidden FAT12 5c Priam Edisk a8 Darwin UFS f1 SpeedStor
12 Compaq diagnost 61 SpeedStor a9 NetBSD f4 SpeedStor
14 Hidden FAT16 <3 63 GNU HURD or Sys ab Darwin boot f2 DOS secondary
16 Hidden FAT16 64 Novell Netware af HFS / HFS+ fb VMware VMFS
17 Hidden HPFS/NTF 65 Novell Netware b7 BSDI fs fc VMware VMKCORE
18 AST SmartSleep 70 DiskSecure Mult b8 BSDI swap fd Linux raid auto
1b Hidden W95 FAT3 75 PC/IX bb Boot Wizard hid fe LANstep
1c Hidden W95 FAT3 80 Old Minix be Solaris boot ff BBT
1e Hidden W95 FAT1
Command (m for help): t
Partition number (1-9): 7
Hex code (type L to list codes): fd
Changed system type of partition 7 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-9): 8
Hex code (type L to list codes): fd
Changed system type of partition 8 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-9): 9
Hex code (type L to list codes): fd
Changed system type of partition 9 to fd (Linux raid autodetect)

Command (m for help): n
First cylinder (12161-65271, default 12161):
Using default value 12161
Last cylinder, +cylinders or +size{K,M,G} (12161-65271, default 65271): +5G

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root@ads3 ~]# partx -a /dev/sda ## make the kernel re-read the partition table
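partx -a is one of several ways to get the kernel to pick up the new partitions; which one works depends on the distro release and on whether the disk is in use. A hedged sketch of the common alternatives (all require root):

```shell
partx -a /dev/sda      # add any new partition entries for /dev/sda
partprobe /dev/sda     # ask the kernel to re-read the whole table (from parted)
kpartx -a /dev/sda     # create device-mapper entries for the partitions
cat /proc/partitions   # confirm the kernel now sees sda7-sda10
```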
[root@ads3 ~]# cat /proc/mdstat ## check whether any software RAID already exists

Personalities :
unused devices: <none>
[root@ads3 ~]# mdadm -C /dev/md0 -a yes -n 3 -x 1 -l 5 /dev/sda{7,8,9,10} ## create the software RAID
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@ads3 ~]# cat /proc/mdstat ## check the software RAID status
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda9[4] sda10[3] sda8[1] sda7[0]
10482688 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[==>..................] recovery = 12.9% (678720/5241344) finish=0.7min speed=96960K/sec

unused devices: <none>
[root@ads3 ~]# mkfs -t ext4 /dev/md0 ## create an ext4 filesystem on the array
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
655360 inodes, 2620672 blocks
131033 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
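The Stride=128 and Stripe width=256 that mke2fs reports above follow directly from the array geometry: stride = chunk size / filesystem block size, and stripe width = stride × number of data disks (a 3-member RAID5 has 2 data disks per stripe). The arithmetic, as a sketch:

```shell
chunk_kb=512    # mdadm's default chunk size, shown in /proc/mdstat
block_kb=4      # ext4 block size chosen by mke2fs (4096 bytes)
data_disks=2    # 3-member RAID5 = 2 data + 1 parity per stripe
stride=$(( chunk_kb / block_kb ))
stripe_width=$(( stride * data_disks ))
echo "stride=$stride stripe_width=$stripe_width"   # stride=128 stripe_width=256
```

mke2fs detected these automatically here; on filesystems where it does not, they can be passed explicitly with -E stride=128,stripe-width=256.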
[root@ads3 ~]# mkdir /mydata ## create the mount point
[root@ads3 ~]# mount /dev/md0 /mydata ## mount the array
[root@ads3 ~]# mount ## check the mounted filesystems

/dev/sda2 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)
/dev/sda3 on /data type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/md0 on /mydata type ext4 (rw)
[root@ads3 ~]# df -lh ## check the size of /dev/md0
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 48G 1.7G 44G 4% /
tmpfs 931M 0 931M 0% /dev/shm
/dev/sda1 1.9G 76M 1.8G 5% /boot
/dev/sda3 20G 44M 19G 1% /data
/dev/md0 9.8G 23M 9.2G 1% /mydata
[root@ads3 ~]# blkid /dev/md0 ## check the UUID and filesystem type of /dev/md0
/dev/md0: UUID="b161511b-3570-4874-8c23-b1d2a8d0893b" TYPE="ext4"
dumpe2fs -h /dev/md0
-h: display only the superblock information, not the block group descriptor details.
[root@ads3 ~]# dumpe2fs -h /dev/md0 ## view the filesystem superblock
dumpe2fs 1.41.12 (17-May-2010)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: b161511b-3570-4874-8c23-b1d2a8d0893b
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
....
To mount /dev/md0 automatically at boot, add the following line to /etc/fstab:
UUID="b161511b-3570-4874-8c23-b1d2a8d0893b" /mydata ext4 defaults 0 0
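The fstab entry only covers the mount; for the array itself to be assembled reliably under the same name at boot, it is common practice to record it in /etc/mdadm.conf as well. A sketch (the exact ARRAY line it appends will differ per system):

```shell
# --detail --scan prints one "ARRAY /dev/md0 metadata=1.2 UUID=..." line per array
mdadm -Ds >> /etc/mdadm.conf
cat /etc/mdadm.conf    # verify the ARRAY line was recorded
```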
[root@ads3 ~]# mdadm -D /dev/md0 ## display detailed RAID information
/dev/md0:
Version : 1.2
Creation Time : Sat Feb 24 21:53:39 2018
Raid Level : raid5
Array Size : 10482688 (10.00 GiB 10.73 GB)
Used Dev Size : 5241344 (5.00 GiB 5.37 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent

Update Time : Sat Feb 24 21:56:21 2018
      State : clean 

Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1

     Layout : left-symmetric
 Chunk Size : 512K

       Name : ads3:0  (local to host ads3)
       UUID : a93c9421:a485dac0:04e4a6d4:e17d7116
     Events : 18

Number   Major   Minor   RaidDevice State
   0       8        7        0      active sync   /dev/sda7
   1       8        8        1      active sync   /dev/sda8
   4       8        9        2      active sync   /dev/sda9

   3       8       10        -      spare   /dev/sda10

[root@ads3 ~]# mdadm /dev/md0 -f /dev/sda7 ## mark /dev/sda7 as failed
mdadm: set /dev/sda7 faulty in /dev/md0
[root@ads3 ~]# whatch -n1 "cat /proc/mdstat" ## watch the resync status
-bash: whatch: command not found
[root@ads3 ~]# watch -n1 "cat /proc/mdstat"

Every 1.0s: cat /proc/mdstat Sat Feb 24 22:08:30 2018

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda9[4] sda10[3] sda8[1] sda7[0](F)
10482688 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
[root@ads3 ~]# mdadm -D /dev/md0 ## display detailed RAID information
/dev/md0:
Version : 1.2
Creation Time : Sat Feb 24 21:53:39 2018
Raid Level : raid5
Array Size : 10482688 (10.00 GiB 10.73 GB)
Used Dev Size : 5241344 (5.00 GiB 5.37 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent

Update Time : Sat Feb 24 22:07:11 2018
      State : clean 

Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0

     Layout : left-symmetric
 Chunk Size : 512K

       Name : ads3:0  (local to host ads3)
       UUID : a93c9421:a485dac0:04e4a6d4:e17d7116
     Events : 37

Number   Major   Minor   RaidDevice State
   3       8       10        0      active sync   /dev/sda10
   1       8        8        1      active sync   /dev/sda8
   4       8        9        2      active sync   /dev/sda9

   0       8        7        -      faulty   /dev/sda7

[root@ads3 ~]# cp /etc/fstab /mydata ## copy a file into /mydata to verify the array still works
[root@ads3 ~]# cat /mydata/fstab ## show the contents of the copied file

# /etc/fstab
# Created by anaconda on Fri Feb 23 10:54:44 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#

UUID=af25ecdc-82ea-43ec-89ca-9ae5b12109b9 / ext4 defaults 1 1
UUID=9e3fa0e8-6130-49a1-a2a3-fafa6ac0cfeb /boot ext4 defaults 1 2
UUID=0b8c477f-3d8e-4a24-88e8-a4b0524e1101 /data ext4 defaults 1 2
UUID=af38e7c0-6524-4d7b-ab20-f6faed3be199 swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
[root@ads3 ~]# mdadm /dev/md0 -f /dev/sda8 ## mark /dev/sda8 as failed
mdadm: set /dev/sda8 faulty in /dev/md0
[root@ads3 ~]# mdadm -D /dev/md0 ## display detailed RAID information
/dev/md0:
Version : 1.2
Creation Time : Sat Feb 24 21:53:39 2018
Raid Level : raid5
Array Size : 10482688 (10.00 GiB 10.73 GB)
Used Dev Size : 5241344 (5.00 GiB 5.37 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent

Update Time : Sat Feb 24 22:16:11 2018
      State : clean, degraded 

Active Devices : 2
Working Devices : 2
Failed Devices : 2
Spare Devices : 0

     Layout : left-symmetric
 Chunk Size : 512K

       Name : ads3:0  (local to host ads3)
       UUID : a93c9421:a485dac0:04e4a6d4:e17d7116
     Events : 41

Number   Major   Minor   RaidDevice State
   3       8       10        0      active sync   /dev/sda10
   2       0        0        1      removed
   4       8        9        2      active sync   /dev/sda9

   0       8        7        -      faulty   /dev/sda7
   1       8        8        -      faulty   /dev/sda8

[root@ads3 ~]# mdadm /dev/md0 -r /dev/sda7 ## remove /dev/sda7 from the array
mdadm: hot removed /dev/sda7 from /dev/md0
[root@ads3 ~]# mdadm /dev/md0 -r /dev/sda8 ## remove /dev/sda8 from the array
mdadm: hot removed /dev/sda8 from /dev/md0
[root@ads3 ~]# mdadm -D /dev/md0 ## display detailed RAID information
/dev/md0:
Version : 1.2
Creation Time : Sat Feb 24 21:53:39 2018
Raid Level : raid5
Array Size : 10482688 (10.00 GiB 10.73 GB)
Used Dev Size : 5241344 (5.00 GiB 5.37 GB)
Raid Devices : 3
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Sat Feb 24 22:18:09 2018
      State : clean, degraded 

Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

     Layout : left-symmetric
 Chunk Size : 512K

       Name : ads3:0  (local to host ads3)
       UUID : a93c9421:a485dac0:04e4a6d4:e17d7116
     Events : 43

Number   Major   Minor   RaidDevice State
   3       8       10        0      active sync   /dev/sda10
   2       0        0        1      removed
   4       8        9        2      active sync   /dev/sda9

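The recovery shown in the next watch output implies the two removed partitions were first added back to the array; the transcript omits that step, which would have looked roughly like this (device numbers 5 and 6 match the later mdadm -D output):

```shell
mdadm /dev/md0 -a /dev/sda7   # re-add: becomes device 5, rebuilt into slot 1
mdadm /dev/md0 -a /dev/sda8   # re-add: becomes device 6, kept as a spare
```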
[root@ads3 ~]# watch -n1 "cat /proc/mdstat" ## watch the resync progress, refreshed every second
Every 1.0s: cat /proc/mdstat Sat Feb 24 22:21:02 2018

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda8[6](S) sda7[5] sda9[4] sda10[3]
10482688 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]
[===========>.........] recovery = 59.2% (3107360/5241344) finish=0.3min speed=93006K/sec

unused devices: <none>

[root@ads3 ~]# mdadm -D /dev/md0 ## display detailed RAID information
/dev/md0:
Version : 1.2
Creation Time : Sat Feb 24 21:53:39 2018
Raid Level : raid5
Array Size : 10482688 (10.00 GiB 10.73 GB)
Used Dev Size : 5241344 (5.00 GiB 5.37 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent

Update Time : Sat Feb 24 22:21:25 2018
      State : clean 

Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1

     Layout : left-symmetric
 Chunk Size : 512K

       Name : ads3:0  (local to host ads3)
       UUID : a93c9421:a485dac0:04e4a6d4:e17d7116
     Events : 64

Number   Major   Minor   RaidDevice State
   3       8       10        0      active sync   /dev/sda10
   5       8        7        1      active sync   /dev/sda7
   4       8        9        2      active sync   /dev/sda9

   6       8        8        -      spare   /dev/sda8

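To tear the array down later (not shown in the transcript), the usual sequence is to unmount it, stop the array, and wipe the RAID metadata from the members. A hedged sketch:

```shell
umount /mydata
mdadm -S /dev/md0                             # stop (deactivate) the array
mdadm --zero-superblock /dev/sda{7,8,9,10}    # erase RAID metadata from each member
# also remove the /mydata line from /etc/fstab before the next reboot
```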
Original post: http://blog.51cto.com/30bear/2072752
