Repairing a Damaged RAID10 Disk Array


A RAID10 array is a combination of RAID1 and RAID0. In theory, data is not lost as long as all of the disks within any one RAID1 mirror pair do not fail at the same time; in other words, each RAID1 pair can tolerate the failure of at most one disk.

The repair described here essentially amounts to replacing the failed disk in the array with a new one; while the array is in this degraded state, it can still be used normally.
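For context, the test array used below can be assumed to have been created and mounted roughly as follows; these creation commands are not shown in the original and are a reconstruction, but the device names /dev/sdb through /dev/sde and the mount point /RAID10 match the output later in the article. With the near=2 layout shown in the details below, devices 0/1 (/dev/sdb, /dev/sdc) form one mirror pair and devices 2/3 (/dev/sdd, /dev/sde) form the other, which is why one disk per pair may fail without data loss.

[root@PC1linuxprobe dev]# mdadm -Cv /dev/md0 -a yes -n 4 -l 10 /dev/sdb /dev/sdc /dev/sdd /dev/sde   ## create a 4-disk RAID10 array (assumed setup)
[root@PC1linuxprobe dev]# mkfs.ext4 /dev/md0                                          ## format the array with ext4
[root@PC1linuxprobe dev]# mkdir /RAID10
[root@PC1linuxprobe dev]# mount /dev/md0 /RAID10                                      ## mount it at /RAID10
[root@PC1linuxprobe dev]# echo "/dev/md0 /RAID10 ext4 defaults 0 0" >> /etc/fstab     ## persist the mount across reboots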

1. View the details of the test RAID10 array

[root@PC1linuxprobe dev]# mdadm -D /dev/md0    ## view the test RAID10 array details; all four disks are in the active state
/dev/md0:
        Version : 1.2
  Creation Time : Sun Nov 8 11:17:16 2020
     Raid Level : raid10
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sun Nov 8 11:20:45 2020
          State : active
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : PC1linuxprobe:0  (local to host PC1linuxprobe)
           UUID : 36866db8:2839f737:6831b810:d838b066
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
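Besides mdadm -D, the kernel also exposes a compact status summary that is convenient for a quick health check; this command is not captured in the original session:

[root@PC1linuxprobe dev]# cat /proc/mdstat    ## per-array summary: RAID level, member disks, [UUUU] health flags, rebuild progress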

 

2. Simulate the failure of one disk, /dev/sdc

[root@PC1linuxprobe dev]# mdadm /dev/md0 -f /dev/sdc
mdadm: set /dev/sdc faulty in /dev/md0
[root@PC1linuxprobe dev]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Nov 8 11:17:16 2020
     Raid Level : raid10
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sun Nov 8 11:26:22 2020
          State : active, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : PC1linuxprobe:0  (local to host PC1linuxprobe)
           UUID : 36866db8:2839f737:6831b810:d838b066
         Events : 20

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       0        0        1      removed
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde

       1       8       32        -      faulty   /dev/sdc

A single failed disk does not affect the use of the RAID10 array; files can still be created and deleted in the /RAID10 directory during this time.
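As an illustration (not part of the original transcript), ordinary file operations on the degraded array still succeed:

[root@PC1linuxprobe dev]# echo "degraded write test" > /RAID10/testfile   ## writes still complete with one disk failed
[root@PC1linuxprobe dev]# rm -f /RAID10/testfile                          ## and deletions work as usual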

 

3. Reboot the system (virtual machine)
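Judging from the output before and after the reboot, the reboot is what clears the faulty /dev/sdc entry from the member list (Total Devices drops from 4 to 3). On a system that cannot be rebooted, the same effect can be achieved by removing the faulty member explicitly, a step not shown in the original:

[root@PC1linuxprobe dev]# mdadm /dev/md0 -r /dev/sdc    ## remove the faulty member (long form: --remove); the array keeps running degraded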

 

4. View the RAID10 array details after the reboot

[root@PC1linuxprobe dev]# mdadm -D /dev/md0    ## first check the current RAID10 array details
/dev/md0:
        Version : 1.2
  Creation Time : Sun Nov 8 11:17:16 2020
     Raid Level : raid10
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Sun Nov 8 19:30:18 2020
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : PC1linuxprobe:0  (local to host PC1linuxprobe)
           UUID : 36866db8:2839f737:6831b810:d838b066
         Events : 24

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       0        0        1      removed
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde

 

5. Unmount the array

[root@PC1linuxprobe dev]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   18G  2.9G   15G  17% /
devtmpfs               985M     0  985M   0% /dev
tmpfs                  994M  140K  994M   1% /dev/shm
tmpfs                  994M  8.8M  986M   1% /run
tmpfs                  994M     0  994M   0% /sys/fs/cgroup
/dev/md0                40G   49M   38G   1% /RAID10
/dev/sda1              497M  119M  379M  24% /boot
/dev/sr0               3.5G  3.5G     0 100% /run/media/root/RHEL-7.0 Server.x86_64
[root@PC1linuxprobe dev]# umount /RAID10
[root@PC1linuxprobe dev]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   18G  2.9G   15G  17% /
devtmpfs               985M     0  985M   0% /dev
tmpfs                  994M  140K  994M   1% /dev/shm
tmpfs                  994M  8.8M  986M   1% /run
tmpfs                  994M     0  994M   0% /sys/fs/cgroup
/dev/sda1              497M  119M  379M  24% /boot
/dev/sr0               3.5G  3.5G     0 100% /run/media/root/RHEL-7.0 Server.x86_64
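If umount reports that the target is busy, the processes holding /RAID10 open can be identified first; this troubleshooting aside is not part of the original session:

[root@PC1linuxprobe dev]# lsof /RAID10        ## list processes with files open under the mount point
[root@PC1linuxprobe dev]# fuser -vm /RAID10   ## alternative: show processes using the filesystem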

 

6. Add a new disk (to replace the failed one)

[root@PC1linuxprobe dev]# mdadm /dev/md0 -a /dev/sdc    ## add the new disk
mdadm: added /dev/sdc
[root@PC1linuxprobe dev]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Nov 8 11:17:16 2020
     Raid Level : raid10
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sun Nov 8 11:37:41 2020
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : near=2
     Chunk Size : 512K

 Rebuild Status : 16% complete

           Name : PC1linuxprobe:0  (local to host PC1linuxprobe)
           UUID : 36866db8:2839f737:6831b810:d838b066
         Events : 32

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       4       8       32        1      spare rebuilding   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
[root@PC1linuxprobe dev]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Nov 8 11:17:16 2020
     Raid Level : raid10
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sun Nov 8 11:38:44 2020
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : near=2
     Chunk Size : 512K

 Rebuild Status : 74% complete

           Name : PC1linuxprobe:0  (local to host PC1linuxprobe)
           UUID : 36866db8:2839f737:6831b810:d838b066
         Events : 59

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       4       8       32        1      spare rebuilding   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
[root@PC1linuxprobe dev]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Nov 8 11:17:16 2020
     Raid Level : raid10
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sun Nov 8 11:39:11 2020
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : PC1linuxprobe:0  (local to host PC1linuxprobe)
           UUID : 36866db8:2839f737:6831b810:d838b066
         Events : 69

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       4       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
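Instead of re-running mdadm -D by hand, the rebuild can also be followed continuously; this is an optional convenience, not something done in the original transcript:

[root@PC1linuxprobe dev]# watch -n 5 cat /proc/mdstat    ## refresh the resync/rebuild progress every 5 seconds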

 

7. Remount

[root@PC1linuxprobe dev]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   18G  2.9G   15G  17% /
devtmpfs               985M     0  985M   0% /dev
tmpfs                  994M  140K  994M   1% /dev/shm
tmpfs                  994M  8.8M  986M   1% /run
tmpfs                  994M     0  994M   0% /sys/fs/cgroup
/dev/sda1              497M  119M  379M  24% /boot
/dev/sr0               3.5G  3.5G     0 100% /run/media/root/RHEL-7.0 Server.x86_64
[root@PC1linuxprobe dev]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Nov 5 15:23:01 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root                       /         xfs    defaults   1 1
UUID=0ba20ae9-dd51-459f-ac48-7f7e81385eb8   /boot     xfs    defaults   1 2
/dev/mapper/rhel-swap                       swap      swap   defaults   0 0
/dev/md0                                    /RAID10   ext4   defaults   0 0
[root@PC1linuxprobe dev]# mount -a
[root@PC1linuxprobe dev]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   18G  2.9G   15G  17% /
devtmpfs               985M     0  985M   0% /dev
tmpfs                  994M  140K  994M   1% /dev/shm
tmpfs                  994M  8.8M  986M   1% /run
tmpfs                  994M     0  994M   0% /sys/fs/cgroup
/dev/sda1              497M  119M  379M  24% /boot
/dev/sr0               3.5G  3.5G     0 100% /run/media/root/RHEL-7.0 Server.x86_64
/dev/md0                40G   49M   38G   1% /RAID10
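Once the array is clean and mounted again, it can be worth recording the array definition so it assembles consistently at boot. This extra step is not in the original article, which relies on the /etc/fstab entry alone; /etc/mdadm.conf is the usual location on RHEL 7:

[root@PC1linuxprobe dev]# mdadm -D -s >> /etc/mdadm.conf   ## append an ARRAY line (same as mdadm --detail --scan)
[root@PC1linuxprobe dev]# cat /proc/mdstat                 ## final check that md0 is active with all four members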

 

8. Summary: repairing a damaged RAID10 array

1. Reboot the system, then unmount the array
2. Add the new disk: mdadm /dev/md0 -a /dev/newdisk
3. Wait for the rebuild to finish; check progress with mdadm -D /dev/md0
4. Remount with mount -a (provided the entry for automatic mounting has already been written to /etc/fstab)
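Put together, the repair reduces to a short command sequence; the device names below are the ones used in this exercise, so substitute the actual failed and replacement disks on a real system:

[root@PC1linuxprobe dev]# umount /RAID10                 ## 1. unmount (after the reboot, or after mdadm -r on a live system)
[root@PC1linuxprobe dev]# mdadm /dev/md0 -a /dev/sdc     ## 2. add the replacement disk
[root@PC1linuxprobe dev]# mdadm -D /dev/md0              ## 3. repeat until Rebuild Status disappears and State is clean
[root@PC1linuxprobe dev]# mount -a                       ## 4. remount everything listed in /etc/fstab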

