don quixada (l33t)
Joined: 15 May 2003; Posts: 810
Posted: Sun Sep 01, 2019 1:05 am    Post subject: mdadm RAID1 and changing hard drives to larger ones
|
|
Hi folks, I'm running into some trouble trying to upgrade the hard drives in my RAID 1 array. I'm following this guide but I'm stuck on the steps.
I think the trouble is that the drive I installed is an old NAS drive (a WD Red): I want to replace the current 2T drives in this machine with these 4T NAS drives. But from the moment I put the first drive in, it was "busy". I couldn't even partition it with sfdisk without using '--force', because sfdisk said the device was busy and I needed to unmount the partitions first (even though nothing was mounted). So I'm not sure where to start. I re-partitioned the NAS drive despite the warning message, but it didn't seem to help, because when I ran the mdadm --add command this happened:
Code: | # mdadm --manage /dev/md2 --add /dev/sdd2
mdadm: Cannot open /dev/sdd2: Device or resource busy
|
Here is the output from /proc/mdstat:
Code: | # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md122 : inactive sdd1[0](S)
530108 blocks super 1.0
md123 : inactive sdd3[0](S)
3897063620 blocks super 1.0
md124 : inactive sdd5[0](S)
8353780 blocks super 1.0
md125 : inactive sdd4[0](S)
530128 blocks super 1.0
md4 : active raid1 sdb4[1] sdc4[0]
378534776 blocks super 1.2 [2/2] [UU]
md126 : inactive sdd2[0](S)
530124 blocks super 1.0
md127 : active (auto-read-only) raid1 sdb1[1] sdc1[0]
1048564 blocks super 1.2 [2/2] [UU]
md2 : active raid1 sdb2[2] sdc2[0]
524286840 blocks super 1.2 [2/1] [U_]
[>....................] recovery = 0.6% (3236800/524286840) finish=50.9min speed=170361K/sec
md3 : active raid1 sdb3[1] sdc3[0]
1048574840 blocks super 1.2 [2/2] [UU]
unused devices: <none> |
*Note: I had to re-add the old sdb2 partition after failing it, which is why md2 is rebuilding.
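For reference, the fail/re-add was just the usual pair of commands:
Code: | # mark the old partition as failed, then add it back to trigger a resync
mdadm --manage /dev/md2 --fail /dev/sdb2
mdadm --manage /dev/md2 --add /dev/sdb2 |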
Any help would be appreciated. Thanks!
dq
don quixada (l33t)
Joined: 15 May 2003; Posts: 810
Posted: Wed Sep 04, 2019 2:44 am
|
|
OK so, I'm not sure why, but while I was waiting for a reply to my original message, the problem seems to have gone away.
I was checking to see the status of the sync:
Code: | # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md127 : active (auto-read-only) raid1 sdc1[0] sdb1[1]
1048564 blocks super 1.2 [2/2] [UU]
md3 : active raid1 sdb3[1] sdc3[0]
1048574840 blocks super 1.2 [2/2] [UU]
md4 : active raid1 sdb4[1] sdc4[0]
378534776 blocks super 1.2 [2/2] [UU]
md2 : active raid1 sdd2[3](S) sdb2[2] sdc2[0]
524286840 blocks super 1.2 [2/2] [UU] |
And I saw that the output was quite different, so I tried re-adding the partition from the NAS drive and it worked! I guess it was no longer busy somehow...
Code: | # mdadm --manage /dev/md2 --add /dev/sdd2
mdadm: added /dev/sdd2
# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md127 : active (auto-read-only) raid1 sdc1[0] sdb1[1]
1048564 blocks super 1.2 [2/2] [UU]
md3 : active raid1 sdb3[1] sdc3[0]
1048574840 blocks super 1.2 [2/2] [UU]
md4 : active raid1 sdb4[1] sdc4[0]
378534776 blocks super 1.2 [2/2] [UU]
md2 : active raid1 sdd2[3](S) sdb2[2] sdc2[0]
524286840 blocks super 1.2 [2/2] [UU]
unused devices: <none> |
Then after failing the original partition it started syncing to the new one:
Code: | # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md127 : active (auto-read-only) raid1 sdc1[0] sdb1[1]
1048564 blocks super 1.2 [2/2] [UU]
md3 : active raid1 sdb3[1] sdc3[0]
1048574840 blocks super 1.2 [2/2] [UU]
md4 : active raid1 sdb4[1] sdc4[0]
378534776 blocks super 1.2 [2/2] [UU]
md2 : active raid1 sdd2[3] sdb2[2] sdc2[0](F)
524286840 blocks super 1.2 [2/1] [_U]
[>....................] recovery = 0.0% (407744/524286840) finish=64.2min speed=135914K/sec
unused devices: <none>
# mdadm --manage /dev/md2 --remove /dev/sdc2
mdadm: hot removed /dev/sdc2 from /dev/md2 |
I'll do this with the other partitions, but does anyone have any idea why it didn't work in the first place?
dq
molletts (Tux's lil' helper)
Joined: 16 Feb 2013; Posts: 131
Posted: Wed Sep 04, 2019 10:34 am
|
|
don quixada wrote: | I'll do this with the other partitions, but does anyone have any idea why it didn't work in the first place? |
Only guessing, but from your original mdstat it looks like the drive already had partitions on it that mdraid thought were members of some other arrays (presumably from its previous life in the NAS), so it had set up inactive arrays for them, which were waiting for their other members to show up.
Why it blocked at first and then spontaneously resolved itself, though, I have no idea.
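If it happens again with the next drive, stopping those stray inactive arrays and wiping the old superblocks should free the partitions up. A rough sketch, using the mdXXX and sdd names from your mdstat above (double-check them before zeroing anything!):
Code: | # stop the auto-assembled leftover arrays that are holding the sdd partitions
mdadm --stop /dev/md122 /dev/md123 /dev/md124 /dev/md125 /dev/md126

# wipe the old RAID superblocks so they can't be re-assembled
mdadm --zero-superblock /dev/sdd1 /dev/sdd2 /dev/sdd3 /dev/sdd4 /dev/sdd5 |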
don quixada (l33t)
Joined: 15 May 2003; Posts: 810
Posted: Wed Sep 04, 2019 4:57 pm
|
|
Maybe there was some timeout or something. I'll try to blow away the old partitions before installing the next drive into this machine, to see if it makes any difference. Probably something like this, off the top of my head (sdX being wherever the old drive shows up; I'll check the man pages before actually running anything):
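Code: | # clear the RAID metadata from each old partition
mdadm --zero-superblock /dev/sdX1 /dev/sdX2 /dev/sdX3 /dev/sdX4 /dev/sdX5

# then wipe the partition table and any remaining signatures
wipefs -a /dev/sdX |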
dq
don quixada (l33t)
Joined: 15 May 2003; Posts: 810
Posted: Thu Sep 05, 2019 3:14 am
|
|
Update: yes, I wiped the old partitions off the NAS drive before installing it, and it worked like a charm! Funny glitch. Not sure how to prove it for a bug report, though...
dq
molletts (Tux's lil' helper)
Joined: 16 Feb 2013; Posts: 131
Posted: Thu Sep 05, 2019 10:28 am
|
|
don quixada wrote: | Update: yes, I wiped the old partitions off the NAS drive before installing it, and it worked like a charm! Funny glitch. Not sure how to prove it for a bug report, though... dq |
I suspect a bug report would probably be met with, "It's working as designed." If you were adding drives containing an already-existing array, you'd probably expect it to pick up the array components automatically and assemble the array. I guess it also provides a little protection against accidentally blowing away the contents of a drive if you get array members mixed up between systems.
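As an aside, if you'd rather mdraid never auto-assembled anything it doesn't recognise, I believe you can whitelist your own arrays in /etc/mdadm.conf and disable the rest (check man mdadm.conf; the UUIDs below are placeholders, the real ARRAY lines come from mdadm --detail --scan):
Code: | # /etc/mdadm.conf - assemble only the arrays listed here
ARRAY /dev/md2 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
ARRAY /dev/md3 UUID=aaaaaaaa:bbbbbbbb:cccccccc:eeeeeeee
AUTO -all |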
NeddySeagoon (Administrator)
Joined: 05 Jul 2003; Posts: 54795; Location: 56N 3W
Posted: Thu Sep 05, 2019 10:35 am
|
|
don quixada,
Add the new partition to the existing raid, then use mdadm's --replace option.
This will build the contents of the partition being replaced onto the new partition, using all available drives in the existing raid set, then drop the old raid element as failed.
Rinse and repeat for all old partitions/drives.
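Roughly like this, using md3 as an example. This is a sketch from memory, so verify against man mdadm before running anything; the grow/resize steps assume every member of the array is on the bigger drives by then, and resize2fs assumes an ext filesystem:
Code: | # add the new partition as a spare, then migrate onto it in one step
mdadm /dev/md3 --add /dev/sdd3
mdadm /dev/md3 --replace /dev/sdb3 --with /dev/sdd3

# once all members of the array are on the bigger drives, grow it
mdadm --grow /dev/md3 --size=max
resize2fs /dev/md3 |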
I'll leave you to read up on the finer points of the syntax.
_________________
Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.