How to replace a failed or failing drive in a Linux RAID1 array
A drive has failed in my RAID 1 configuration, and I need to replace it with a new drive.

Use mdadm to fail the drive's partition(s) and remove them from the RAID arrays.

Physically add the new drive to the system and remove the old drive.

Create the same partition table on the new drive that existed on the old drive.

Add the drive partition(s) back into the RAID array.

In this example I have two drives, /dev/sda and /dev/sdb. Each drive has several partitions, and each partition is paired with its counterpart on the other drive in a RAID 1 array denoted by md#. We will assume that /dev/sdb has failed and that hard drive needs to be replaced.

Note that in Linux software RAID you create RAID Arrays by mirroring partitions rather than entire disks, which is why one physical drive can belong to several arrays.
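
For context, an array like md0 in this layout would originally have been created from one partition on each disk with a command along the following lines. This is shown only to illustrate the partition-level mirroring; it is not part of the replacement procedure and must not be run against an existing array:

Quote:# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1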

Fail and Remove the failed partitions and disk:

Identify which RAID Arrays have failed:

To identify whether a RAID Array has failed, look at the string containing [UU]. Each "U" represents a healthy partition in the RAID Array. If you see [UU] then the RAID Array is healthy. If a "U" has been replaced by an underscore, for example [U_] or [_U], then the RAID Array is degraded or faulty.

Quote:$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
104320 blocks [2/2] [UU]

md2 : active raid1 sda3[0] sdb3[1]
2048192 blocks [2/2] [UU]

md3 : active raid1 sda5[0]
2048192 blocks [2/1] [U_]

md4 : active raid1 sda6[0] sdb6[1]
2048192 blocks [2/2] [UU]

md5 : active raid1 sda7[0] sdb7[1]
960269184 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
10241344 blocks [2/2] [UU]

From the above output we can see that RAID Array "md3" is missing a "U" and is degraded or faulty.
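
If you want to confirm exactly which member of the degraded array is missing or marked faulty, mdadm can print a detailed report for it. This is just a supplementary check; the array name below is the one from this example:

Quote:# mdadm --detail /dev/md3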

Removing the failed partition(s) and disk:

Before we can physically remove the hard drive from the system, we must first "fail" the disk's partition(s) in every RAID Array they belong to. Even though only partition /dev/sdb5 of RAID Array md3 has failed, we must manually fail all the other /dev/sdb# partitions that still belong to RAID Arrays before the hard drive can be removed.

To fail the partition we issue the following command:
Quote:# mdadm --manage /dev/md0 --fail /dev/sdb1

Repeat this command for each partition, changing /dev/md# and /dev/sdb# to match the output from "cat /proc/mdstat":
Quote:# mdadm --manage /dev/md1 --fail /dev/sdb2
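
If there are many partitions to fail, a small shell loop can save some typing. This is only a convenience sketch; the md#:sdb# pairs below are the ones from this example (md3 is omitted because /dev/sdb5 no longer appears in it) and must be adjusted to match your own "cat /proc/mdstat" output:

Quote:# for pair in md0:sdb1 md1:sdb2 md2:sdb3 md4:sdb6 md5:sdb7; do
>   mdadm --manage /dev/${pair%%:*} --fail /dev/${pair##*:}
> done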

Removing:

Now that all the partitions are failed, we can remove them from the RAID Arrays.

Quote:# mdadm --manage /dev/md0 --remove /dev/sdb1

Repeat this command for each partition, changing /dev/md# and /dev/sdb# to match the output from "cat /proc/mdstat":

Quote:# mdadm --manage /dev/md1 --remove /dev/sdb2
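
As a side note, mdadm accepts --fail and --remove in a single invocation, so the two steps can be combined per partition if you prefer:

Quote:# mdadm --manage /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1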

Power off the system and physically replace the hard drive:

Quote:# shutdown -h now

Adding the new disk to the RAID Array:

Now that the new hard drive has been physically installed we can add it to the RAID Array.

In order to use the new drive we must create the exact same partition table structure that was on the old drive.

We can use the existing drive and mirror its partition table structure to the new drive. There is an easy command to do this:

Quote:# sfdisk -d /dev/sda | sfdisk /dev/sdb

* Note that sometimes when drives are removed and replaced, device names may change. Before copying the partition table, make sure the drive you just installed really is /dev/sdb and has no partitions on it yet, by issuing the command "fdisk -l /dev/sdb".
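
Before overwriting anything, it also does no harm to keep a copy of the healthy drive's layout in a file. This is just a precaution and not part of the original procedure; the file name is arbitrary:

Quote:# sfdisk -d /dev/sda > /root/sda-partition-table.txt

If something goes wrong, that saved layout can be written back with "sfdisk /dev/sdb < /root/sda-partition-table.txt".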

Add the partitions back into the RAID Arrays:

Now that the partitions are configured on the newly installed hard drive, we can add the partitions to the RAID Array.

Quote:# mdadm --manage /dev/md0 --add /dev/sdb1
mdadm: added /dev/sdb1

Repeat this command for each partition, changing /dev/md# and /dev/sdb# to match the output from "cat /proc/mdstat":

Quote:# mdadm --manage /dev/md1 --add /dev/sdb2
mdadm: added /dev/sdb2

Now we can check that the partitions are being synchronized by issuing:

Quote:# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
104320 blocks [2/2] [UU]

md2 : active raid1 sdb3[2] sda3[0]
2048192 blocks [2/1] [U_]
resync=DELAYED

md3 : active raid1 sdb5[2] sda5[0]
2048192 blocks [2/1] [U_]
resync=DELAYED

md4 : active raid1 sdb6[2] sda6[0]
2048192 blocks [2/1] [U_]
resync=DELAYED

md5 : active raid1 sdb7[2] sda7[0]
960269184 blocks [2/1] [U_]
[>....................] recovery = 1.8% (17917184/960269184) finish=193.6min speed=81086K/sec

md1 : active raid1 sda2[0] sdb2[1]
10241344 blocks [2/2] [UU]

Once all partitions have synchronized, your RAID Arrays will be back to normal again.
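
The resync of a large partition can take hours. If the "watch" utility is installed you can leave a live view of /proc/mdstat running, and if the rebuild is being throttled, the kernel's RAID speed limit can be raised temporarily (the 200000 KB/s value below is only an illustrative figure; the setting reverts on reboot):

Quote:# watch -n 5 cat /proc/mdstat
# sysctl -w dev.raid.speed_limit_min=200000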

Install GRUB on the new hard drive's MBR:

We need to install GRUB on the MBR of the newly installed hard drive, so that if the other drive fails, the new drive will still be able to boot the OS.

Enter the GRUB command line:

Quote:# grub

Locate the GRUB setup files:

Quote:grub> find /grub/stage1

On a RAID 1 with two drives present you should expect to get:

Quote:(hd0,0)
(hd1,0)

Install grub on the MBR:

Quote:grub> device (hd0) /dev/sdb (or /dev/hdb for IDE drives)
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

We mapped the second drive, /dev/sdb, to device (hd0) because installing GRUB this way writes a bootable MBR onto the second drive; when the first drive is missing, the system can then boot from the second drive.

This ensures that if the first drive in the RAID Array fails, or has already failed, you can still boot the operating system from the second drive.
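
If you want an informal sanity check that boot code actually landed on the new drive, the first sector can be inspected for the GRUB signature string. This assumes the dd and strings utilities are present and is not a substitute for a real boot test:

Quote:# dd if=/dev/sdb bs=512 count=1 2>/dev/null | strings | grep -i grub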