
Linux SoftRAID: Activate spare partition

Posted: Fri Nov 29, 2013 5:11 am
by peter_b
[PROBLEM]
I created a RAID1 without having the second disk connected. It worked.
Then I connected the 2nd disk, partitioned it to match the first one, and then used mdadm to add it to the RAID array:

Code:

$ mdadm --add /dev/md0 /dev/sdb1
Worked.

Strangely though, I checked /proc/mdstat - and saw the 2nd disk marked as spare. No resync happening. The spare partition didn't get set active automatically :(
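
For reference, the current array status can be checked at any time via /proc/mdstat; a spare device is marked there with an "(S)" suffix:

Code:

$ cat /proc/mdstat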

[SOLUTION]
Thanks to a post on superuser.com by Avery Payne, which pointed in the right direction:
--raid-devices=2
Setting this parameter to "2" for a 2-disk RAID1 causes the kernel to realize it must activate the spare.
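
For reference, the full invocation would be along these lines (assuming the array is /dev/md0 as above):

Code:

$ mdadm --grow /dev/md0 --raid-devices=2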

That did the trick.


NOTE: Here's a link to speed up softraid synchronization.
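
For reference, one commonly used knob for speeding up an md resync is the kernel's resync speed limit, settable via sysctl (the values below are only illustrative examples, not recommendations):

Code:

$ sudo sysctl -w dev.raid.speed_limit_min=50000    # KB/s, example value
$ sudo sysctl -w dev.raid.speed_limit_max=200000   # KB/s, example value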

Re: Linux SoftRAID: Activate spare partition (CASE 2)

Posted: Fri Nov 29, 2013 6:39 pm
by peter_b
In order to speed up the RAID synchronization, I moved the whole array from the Raspberry Pi to another computer with faster USB (they were external drives).

I accidentally mixed up the source and target drives while connecting, and after some unplug/replug chaos, I decided to start clean and only attach the source drive.

This time there was just 1 disk in the array, and it was not active because it was marked as spare (S):
md127 : inactive sdc1[0](S)
524156928 blocks super 1.2

unused devices: <none>
In order to activate that array with only 1 disk (at the moment), you only need to "run" it:

Code:

$ mdadm -R /dev/md127
Then /proc/mdstat shows it's active:
Personalities : [raid1]
md127 : active raid1 sdc1[0]
524156736 blocks super 1.2 [2/1] [U_]

unused devices: <none>

Linux SoftRAID: Wipe superblock

Posted: Fri Nov 29, 2013 6:58 pm
by peter_b
Since the target disk once belonged to a RAID1, Linux will automatically find the superblock and start the RAID array, even though the disk was never fully synchronized with the source (master).

In case you want to make sure you don't accidentally cause the RAID1 to rebuild in the wrong direction (copying the empty disk onto the good one) after having moved both disks to another computer, you can wipe the superblock.

First, stop the automatically started array:

Code:

$ mdadm -S /dev/md0
Then: zero the superblock.
Writing zeroes over the superblock of the target disk (aka "zeroing") destroys its memory of once having belonged to a RAID array, so it can be cleanly re-added.

Assuming your target partition to be re-added to the array is "/dev/sda1":

Code:

$ mdadm --zero-superblock /dev/sda1
If everything went ok, it will output nothing.
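
To double-check, you can run "mdadm --examine" on the partition; it should now report that no md superblock was detected. After that, the partition can be re-added to the array as in the first post:

Code:

$ mdadm --examine /dev/sda1
$ mdadm --add /dev/md0 /dev/sda1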

Re: Linux SoftRAID: Activate spare partition - revisited

Posted: Fri Apr 18, 2014 4:02 am
by peter_b
I just had a very similar case on a non-Raspberry Pi machine, running Debian 7 (Wheezy) with a RAID6 on 14+1 disks:
I stopped and renamed it, but it seems I scared it, so it puked the following lines into /var/log/kern.log:
Apr 18 01:37:36 bb4 kernel: [28057.493940] md3: detected capacity change from 36005480497152 to 0
Apr 18 01:37:36 bb4 kernel: [28057.493947] md: md3 stopped.
Apr 18 01:37:36 bb4 kernel: [28057.493955] md: unbind<sdae1>
Apr 18 01:37:36 bb4 kernel: [28057.576654] md: export_rdev(sdae1)
Apr 18 01:37:36 bb4 kernel: [28057.576703] md: unbind<sdas1>
Apr 18 01:37:36 bb4 kernel: [28057.640601] md: export_rdev(sdas1)
Apr 18 01:37:36 bb4 kernel: [28057.640647] md: unbind<sdar1>
Apr 18 01:37:37 bb4 kernel: [28057.684594] md: export_rdev(sdar1)
Apr 18 01:37:37 bb4 kernel: [28057.684639] md: unbind<sdaq1>
Apr 18 01:37:37 bb4 kernel: [28057.684703] md: export_rdev(sdaq1)
Apr 18 01:37:37 bb4 kernel: [28057.684746] md: unbind<sdap1>
Apr 18 01:37:37 bb4 kernel: [28057.692577] md: export_rdev(sdap1)
Apr 18 01:37:37 bb4 kernel: [28057.692625] md: unbind<sdao1>
Apr 18 01:37:37 bb4 kernel: [28057.692688] md: export_rdev(sdao1)
Apr 18 01:37:37 bb4 kernel: [28057.692732] md: unbind<sdan1>
Apr 18 01:37:37 bb4 kernel: [28057.696589] md: export_rdev(sdan1)
Apr 18 01:37:37 bb4 kernel: [28057.696632] md: unbind<sdam1>
Apr 18 01:37:37 bb4 kernel: [28057.700634] md: export_rdev(sdam1)
Apr 18 01:37:37 bb4 kernel: [28057.700674] md: unbind<sdal1>
Apr 18 01:37:37 bb4 kernel: [28057.700738] md: export_rdev(sdal1)
Apr 18 01:37:37 bb4 kernel: [28057.700782] md: unbind<sdak1>
Apr 18 01:37:37 bb4 kernel: [28057.708561] md: export_rdev(sdak1)
Apr 18 01:37:37 bb4 kernel: [28057.708605] md: unbind<sdaj1>
Apr 18 01:37:37 bb4 kernel: [28057.708668] md: export_rdev(sdaj1)
Apr 18 01:37:37 bb4 kernel: [28057.708711] md: unbind<sdai1>
Apr 18 01:37:37 bb4 kernel: [28057.716554] md: export_rdev(sdai1)
Apr 18 01:37:37 bb4 kernel: [28057.716598] md: unbind<sdah1>
Apr 18 01:37:37 bb4 kernel: [28057.716661] md: export_rdev(sdah1)
Apr 18 01:37:37 bb4 kernel: [28057.716704] md: unbind<sdag1>
Apr 18 01:37:37 bb4 kernel: [28057.720565] md: export_rdev(sdag1)
Apr 18 01:37:37 bb4 kernel: [28057.720604] md: unbind<sdaf1>
Apr 18 01:37:37 bb4 kernel: [28057.724523] md: export_rdev(sdaf1)
Apr 18 01:37:37 bb4 kernel: [28057.740483] md: md3 stopped.
Apr 18 01:37:37 bb4 kernel: [28057.758880] md: bind<sdae1>
Apr 18 01:37:37 bb4 kernel: [28057.764999] md/raid:md3: device sdae1 operational as raid disk 0
Apr 18 01:37:37 bb4 kernel: [28057.766104] md/raid:md3: allocated 14800kB
Apr 18 01:37:37 bb4 kernel: [28057.766170] md/raid:md3: not enough operational devices (13/14 failed)
Apr 18 01:37:37 bb4 kernel: [28057.766201] RAID conf printout:
Apr 18 01:37:37 bb4 kernel: [28057.766204] --- level:6 rd:14 wd:1
Apr 18 01:37:37 bb4 kernel: [28057.766208] disk 0, o:1, dev:sdae1
Apr 18 01:37:37 bb4 kernel: [28057.766849] md/raid:md3: failed to run raid set.
Apr 18 01:37:37 bb4 kernel: [28057.766859] md: pers->run() failed ...
Ouch.

I read my previous posting, but adding "--raid-devices=xx" didn't work :(
So I went back to Avery's posting and then tried the following:

Code:

$ sudo mdadm --assemble --force --no-degraded /dev/md3 /dev/disk/by-path/pci-0000\:06\:04.0-scsi-*-part1
It worked! md3 was still in "auto-read-only" mode, but that's easy to fix:

Code:

$ sudo mdadm --readwrite /dev/md3
Done.
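
As a final sanity check, the array state can be inspected (using /dev/md3 from above):

Code:

$ sudo mdadm --detail /dev/md3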