Encountered a drive failure in a server the other day and set about swapping it. The server was set up with software RAID, and going by all the other servers I work on, cat /proc/mdstat should show something along the lines of:
Personalities : [raid1]
md0 : active raid1 sdb1 sda1
104320 blocks [2/2] [UU]
md2 : active raid1 sdb5 sda5
226532416 blocks [2/2] [UU]
md1 : active raid1 sdb2 sda2
10241344 blocks [2/2] [UU]
unused devices: <none>
Unfortunately the server kept throwing the following error on the screen:
iscsi 0:0:1:0: rejecting I/O to dead device
which would not allow me to log in to fail the device and remove it from the mdadm array.
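For reference, had the box still been responsive, failing and removing the dying disk would normally look something like this (a sketch assuming sda is the failed drive and the partition layout from the mdstat output above; adjust device names to match your arrays):

```shell
# Mark the failed drive's partitions as faulty in each array,
# then remove them so the disk can be swapped.
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
mdadm /dev/md1 --fail /dev/sda2 --remove /dev/sda2
mdadm /dev/md2 --fail /dev/sda5 --remove /dev/sda5

# After swapping in the new disk and partitioning it to match,
# add the new partitions back so the mirrors rebuild.
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sda2
mdadm /dev/md2 --add /dev/sda5
```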
Shutting it down and swapping the drive should work, so I proceeded to do so. Outcome: kernel panic.
I had no idea what caused it; tried single user mode – no luck. I figured it might be trying to look for the partitions on the new hdd to add them to the mdadm configuration and boot from them, however I was sceptical about that.
Next step: boot into recovery and try to access mdadm, so off I go – place the recovery cd in the drive and boot.
Ok, I am still pretty new to this, but my hopes were high. Booted from cd:
A set of standard screens to set up keyboard layout and locale, then it goes looking for the root file system — not found.
press any key to go to bash
I’m here, what now? Tried mounting sda2 (which I knew was root from the standard setup) – no luck. Ha, knew it wouldn’t be this easy; it is a Linux software RAID after all. I have to mount the md1 device then – now the question is: how?
After a consultation with Google 😀 I had the steps.
First I edited/created /etc/mdadm.conf
and added the devices that were used to create the md devices:
DEVICE /dev/sd[ab]1
DEVICE /dev/sd[ab]2
DEVICE /dev/sd[ab]5
This adds entries for any RAID arrays it finds on the devices specified.
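The exact command isn't shown above; the entry-adding step is presumably mdadm's scan mode, something like:

```shell
# Scan the devices listed in /etc/mdadm.conf for RAID superblocks
# and append the discovered ARRAY definitions to the config file.
mdadm --examine --scan >> /etc/mdadm.conf
```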
This will assemble the raid devices to be used.
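Again, the invocation itself isn't shown; assembling from the config and mounting the root array would be roughly (assuming md1 holds /, per the setup above, and /mnt as the mount point):

```shell
# Assemble all arrays described in /etc/mdadm.conf
mdadm --assemble --scan

# Mount the root filesystem (md1 held / on this box)
mount /dev/md1 /mnt
```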
In my case the / partition was on md1 as suspected; however, it was set up as a RAID 0 and it ended up being rebuilt from scratch.
Some good info on the way though 😀