fincoop (Tux's lil' helper)
Joined: 02 Feb 2004    Posts: 145
Posted: Thu Nov 07, 2024 2:18 pm    Post subject: Growing RAID10 to larger partitions with MDADM
|
|
Hello all,
I started down the path of consolidating 3 partitions into 1 on my 4-disk RAID10: I needed more space on the third partition and there was free space in the other two. So I sequentially failed each disk out of the array, deleted partition 2, grew partition 3 into the freed space (it is now partition 2), added the disk back into the RAID10, and let it resync. All four members now sit on larger partitions, but I can't get the array itself to expand to take advantage of them. The --grow --size command doesn't work, or maybe it doesn't apply to RAID10. Can anyone offer a solution? I have a backup of the data, but it's a large volume and a restore would take days, which I'd like to avoid if possible.
Code:
>>> cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md124 : active raid10 sda2[4] sdd2[7] sdc2[6] sdb2[5]
12222011392 blocks super 1.2 128K chunks 2 far-copies [4/4] [UUUU]
>>> mdadm -D /dev/md124
/dev/md124:
           Version : 1.2
     Creation Time : Sat Sep  9 08:44:18 2023
        Raid Level : raid10
        Array Size : 12222011392 (11.38 TiB 12.52 TB)
     Used Dev Size : 6111005696 (5.69 TiB 6.26 TB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Nov  7 09:16:11 2024
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : far=2
        Chunk Size : 128K

Consistency Policy : resync

              Name : server:volume
              UUID : 2540f960:17291f2e:2c5f59be:821dcffa
            Events : 51671

    Number   Major   Minor   RaidDevice State
       4       8        2        0      active sync   /dev/sda2
       5       8       18        1      active sync   /dev/sdb2
       6       8       34        2      active sync   /dev/sdc2
       7       8       50        3      active sync   /dev/sdd2
>>> mdadm --grow /dev/md124 --size=max
mdadm: Cannot set device size for /dev/md124: Invalid argument
>>> gdisk -l /dev/sda
GPT fdisk (gdisk) version 1.0.10
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 15628053168 sectors, 7.3 TiB
Model: ST8000VN0022-2EL
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): 15124D11-1AA2-4D52-B261-54221B5278FD
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 15628053134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2669 sectors (1.3 MiB)
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048      1258293247   600.0 GiB   FD00  scratch
   2      1258293248     15628052479   6.7 TiB     FD00  volume
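One thing I noticed since posting: the array uses the far=2 layout, and from what I can tell the kernel's raid10 driver refuses device-size changes for far layouts (the placement of the far copies depends on the component size), which would explain the "Invalid argument" above; near and offset layouts can reportedly be grown in place. A quick way to confirm what the kernel sees (a sketch; the sysfs paths assume the md124 name from above):

Code:
# far layouts reportedly cannot be resized in place; near/offset can.
cat /sys/block/md124/md/layout            # raw layout word
mdadm --detail /dev/md124 | grep Layout   # human-readable: far=2
cat /sys/block/md124/md/component_size    # KiB per member currently in use
blockdev --getsize64 /dev/sda2            # bytes per member now available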
TIA
fincoop (Tux's lil' helper)
Joined: 02 Feb 2004    Posts: 145
Posted: Fri Nov 08, 2024 3:01 am
|
|
I couldn't find a solution, so I ended up splitting the RAID10: I failed one half of each mirror pair out of the old array, created a new array from the freed pair (which initialized at the larger size), and then rsynced from the old, degraded array to the new one. It's no faster than a full restore, but at least the files stay online during the process. I read that md will only let you fail a drive when the remaining members still hold a complete copy of the data, so a refused --fail identifies a drive that has to stay. I also tried a technique posted online that identifies the mirror set by comparing md5sums of the members, but that wasn't working for me (more on that after the commands below).
Code:
# Fail and remove one half of each mirror pair; md refuses the --fail if
# the survivors would not hold a complete copy of the data.
mdadm /dev/oldarray --fail /dev/sda2
mdadm /dev/oldarray --fail /dev/sdc2
mdadm /dev/oldarray --remove /dev/sda2
mdadm /dev/oldarray --remove /dev/sdc2
# Create the new array degraded; the o2 (offset) layout, unlike far,
# can reportedly be resized later.
mdadm --create /dev/newarray --chunk=128 --level=10 --raid-devices=4 --layout=o2 --run /dev/sda2 missing /dev/sdc2 missing
# stride = 128K chunk / 4K block = 32; stripe-width = 32 * 2 data disks = 64
mkfs.ext4 -E stride=32,stripe_width=64 /dev/newarray
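For the record, the rest of the plan once the rsync finishes (a sketch; it assumes sdb2 and sdd2 are the two members left in the old array, per the -D output above):

Code:
# Retire the old array and hand its disks to the new one; md then
# rebuilds the missing mirror halves onto them.
umount /dev/oldarray
mdadm --stop /dev/oldarray
mdadm /dev/newarray --add /dev/sdb2 /dev/sdd2
# Record the new array so it assembles at boot.
mdadm --detail --scan >> /etc/mdadm.conf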
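As for the md5sum technique: the howtos I found compare the first stretch of data on each member, which works for the near layout, but with far=2 the second copy sits in the far half of each device, shifted by one member, so the leading blocks never match between partners. I suspect that's why it failed for me. Probing with --fail worked instead (a sketch of the idea; a refusal marks a drive that has to stay):

Code:
# md only lets a member fail if a complete copy of the data survives.
mdadm /dev/oldarray --fail /dev/sda2
mdadm /dev/oldarray --fail /dev/sdb2   # refused: would lose the last copy
mdadm /dev/oldarray --fail /dev/sdc2   # allowed: {sdb2,sdd2} still cover all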