ExecutorElassus Veteran
Joined: 11 Mar 2004 Posts: 1451 Location: Berlin, Germany
Posted: Sat Feb 13, 2021 5:02 pm Post subject: extending LVs on a RAID array [SOLVED]
I have three HDDs in my machine as a RAID5 array (i.e., the usable size is two drives' worth, with the equivalent of one drive used for parity). I have just replaced the last of the old drives with a larger one: previously the array was 3x 1TB, and it is now 3x 2TB. I have synced all the partitions.
/proc/mdstat shows the following:
Code: | # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
md127 : active raid5 sdc4[4] sda4[5] sdb4[3]
1931840512 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
md126 : active raid1 sdc3[0] sda3[2] sdb3[1]
9765504 blocks [3/3] [UUU]
md1 : active raid1 sdc1[0] sda1[2] sdb1[1]
97536 blocks [3/3] [UUU]
unused devices: <none>
|
However, I do not know how to extend the existing volumes to use the extra space. 'pvdisplay' shows the following:
Code: | # pvdisplay
--- Physical volume ---
PV Name /dev/md127
VG Name vg
PV Size <1,80 TiB / not usable 3,00 MiB
Allocatable yes
PE Size 4,00 MiB
Total PE 471640
Free PE 4440
Allocated PE 467200
PV UUID P1IbQY-JpO7-uBWA-5Jyr-hnRj-jB9S-LbIdsZ
|
and
Code: | # pvscan
PV /dev/md127 VG vg lvm2 [<1,80 TiB / 17,34 GiB free]
Total: 1 [<1,80 TiB] / in use: 1 [<1,80 TiB] / in no VG: 0 [0 ]
|
How do I extend /dev/md127 to use the full 4TB of space it should have available? The partitions on which /dev/md127 resides are identical across all three disks, as follows:
Code: | # fdisk /dev/sda
Welcome to fdisk (util-linux 2.36).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): p
Disk /dev/sda: 1,82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD20EZAZ-00G
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x99776392
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 206847 204800 100M fd Linux raid autodetect
/dev/sda2 206848 2303999 2097152 1G 82 Linux swap / Solaris
/dev/sda3 2304000 23275519 20971520 10G fd Linux raid autodetect
/dev/sda4 23275520 3907029167 3883753648 1,8T fd Linux raid autodetect
|
Here is 'mdadm --detail /dev/md127':
Code: | # mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Wed Apr 11 02:10:50 2012
Raid Level : raid5
Array Size : 1931840512 (1842.35 GiB 1978.20 GB)
Used Dev Size : 965920256 (921.17 GiB 989.10 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Sat Feb 13 18:16:24 2021
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Name : domo-kun:carrier (local to host domo-kun)
UUID : d42e5336:b75b0144:a502f2a0:178afc11
Events : 1968875
Number Major Minor RaidDevice State
4 8 36 0 active sync /dev/sdc4
3 8 20 1 active sync /dev/sdb4
5 8 4 2 active sync /dev/sda4
|
Do I need to do something to extend /dev/md127 with the system not booted (i.e., from a liveCD), or can I do this on a running system?
Cheers,
EE
Last edited by ExecutorElassus on Sun Feb 14, 2021 8:09 am; edited 1 time in total
alamahant Advocate
Joined: 23 Mar 2019 Posts: 3918
Posted: Sat Feb 13, 2021 5:39 pm Post subject:
The way to extend a VG is:
Code: |
### after having created a new PV
vgextend <vg-name> </new/pv/name>
|
You don't need to switch off your machine for that. However, this doesn't extend the size of the LVs. To do that, after extending the VG, you have to run:
Code: |
lvresize --resizefs -l +100%FREE /dev/<vg-name>/<lv-name>   ### assign all available space to the LV and its filesystem, OR
lvresize --resizefs -L <desired-size>G /dev/<vg-name>/<lv-name>   ### assign a specific size
|
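Applied to the volume group in this thread (VG vg; the LV name home is only a placeholder), that might look something like:
Code: |
# grow the hypothetical LV "home" by all free extents in VG "vg",
# resizing its filesystem in the same step
lvresize --resizefs -l +100%FREE /dev/vg/home
|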
I am not familiar with RAID, but this is the way we do it with plain LVM.
You can most probably do this online, i.e. without unmounting the partition.
If not, you will get a complaint, and in that case maybe use a live CD.
ExecutorElassus Veteran
Joined: 11 Mar 2004 Posts: 1451 Location: Berlin, Germany
Posted: Sat Feb 13, 2021 5:47 pm Post subject:
I'm not sure what you mean by creating a new PV. /dev/md127 resides on a RAID array across three partitions, with the capacity of two used for storage. I replaced those 1TB partitions with 2TB partitions (i.e., added 1TB to each), but the PV still shows as using only half of them. How do I expand /dev/md127?
Cheers,
EE
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54578 Location: 56N 3W
Posted: Sat Feb 13, 2021 5:55 pm Post subject:
ExecutorElassus,
See Code: | man pvresize
...
EXAMPLES
Expand a PV after enlarging the partition.
pvresize /dev/sda1
|
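For the array in this thread, that would presumably be:
Code: |
# grow the PV so it covers the whole (enlarged) md device
pvresize /dev/md127
|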
-- edit --
Hmm, maybe you need mdadm --grow first.
_________________
Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
ExecutorElassus Veteran
Joined: 11 Mar 2004 Posts: 1451 Location: Berlin, Germany
Posted: Sat Feb 13, 2021 6:05 pm Post subject:
Hi Neddy!
pvresize didn't extend the PV, presumably because /dev/md127 itself is still the old size, so maybe I do need 'mdadm --grow'. I'll look into that. Can you use that command on an active array?
Cheers,
EE
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54578 Location: 56N 3W
Posted: Sat Feb 13, 2021 6:29 pm Post subject:
ExecutorElassus,
The man page will say more.
It probably only changes the raid metadata, so I would expect so.
You may need a reboot for the kernel to see the bigger raid set, though.
ExecutorElassus Veteran
Joined: 11 Mar 2004 Posts: 1451 Location: Berlin, Germany
Posted: Sat Feb 13, 2021 6:52 pm Post subject:
The manpage says it can be used on an active array. I gave the command, and now it's resyncing to a new size of 3.8TB.
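For the record, the command was presumably along these lines (--size=max tells mdadm to grow the array to the full size of its member partitions):
Code: |
# use all available space on each member device
mdadm --grow /dev/md127 --size=max
|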
So that worked, hurrah!
Once that's done I assume I can use lvextend to grow any of the specific logical volumes that reside on it.
Many thanks for the help! I'll report back if I run into issues.
EE
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54578 Location: 56N 3W
Posted: Sat Feb 13, 2021 7:02 pm Post subject:
ExecutorElassus,
You have three layers of metadata.
The partition table, which is already correct.
The raid metadata, which is now fixed; mdadm is syncing the redundancy in the new space.
The physical volume metadata, which is yet to be fixed.
After a pvresize, you will have free LVM extents, which you can allocate in the normal ways,
either to existing logical volumes or to new logical volumes.
ExecutorElassus Veteran
Joined: 11 Mar 2004 Posts: 1451 Location: Berlin, Germany
Posted: Sun Feb 14, 2021 8:08 am Post subject:
Hi Neddy,
The RAID finished resizing last night. This morning I ran pvresize to pick up the new storage space, then lvextend on the LVs to extend them, and am now running resize2fs to extend the filesystems to use the new space. I now have much larger partitions (most notably /usr, which finally allows me to update to the new profile that requires a move to the new /usr scheme, for which I previously had insufficient space).
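In command form, that sequence was presumably something like the following (the LV name usr is assumed from the mention of /usr above, and the size is just an example; the other LVs would be grown the same way):
Code: |
# make the PV cover the grown md device
pvresize /dev/md127
# give some of the new extents to the /usr LV
lvextend -L +20G /dev/vg/usr
# grow the ext4 filesystem to fill the LV (safe on a mounted fs)
resize2fs /dev/vg/usr
|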
Thanks for the help!
EE
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54578 Location: 56N 3W
Posted: Sun Feb 14, 2021 1:32 pm Post subject:
ExecutorElassus,
The raid resize is the raid metadata update.
The bit that takes the time is syncing the new empty space. You can use the raid while that is in progress.
It will appear slow, due to all the head movements on rotating rust, but it is still safe.
Moriah Advocate
Joined: 27 Mar 2004 Posts: 2381 Location: Kentucky
Posted: Thu Jun 27, 2024 12:37 am Post subject:
It's 3 years since this thread was active, but I am doing the same thing: I have a RAID-1 3-way mirror that used to have three 12 TB drives and now has three 16 TB drives. I grew the raid, which looks like it worked, but since the array runs LUKS on the whole device, then LVM, then a big XFS filesystem, I am not able to get past the grow step:
Code: |
baruch ~ # fdisk -l
Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000DM008-2FR1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xcb2f3de4
Device Boot Start End Sectors Size Id Type
/dev/sda1 63 112454 112392 54.9M 83 Linux
/dev/sda2 112455 2930272064 2930159610 1.4T 83 Linux
Partition 1 does not start on physical sector boundary.
Partition 2 does not start on physical sector boundary.
Disk /dev/mapper/gentoo-rootfs: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Alignment offset: 512 bytes
Disk /dev/md0: 14.55 TiB, 16000899612672 bytes, 31251757056 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/mapper/cryptoraid: 10.91 TiB, 12000135479296 bytes, 23437764608 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdb: 14.55 TiB, 16000900661248 bytes, 31251759104 sectors
Disk model: ST16000NT001-3LV
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdc: 14.55 TiB, 16000900661248 bytes, 31251759104 sectors
Disk model: ST16000NT001-3LV
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdd: 14.55 TiB, 16000900661248 bytes, 31251759104 sectors
Disk model: ST16000NE000-2RW
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/mapper/cryptoraid_vg_3TB-cryptoraid_bu: 10.91 TiB, 12000134430720 bytes, 23437762560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
baruch ~ # pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name gentoo
PV Size 1.36 TiB / not usable <2.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 357685
Free PE 332085
Allocated PE 25600
PV UUID nGzdke-oQKB-bzNw-s1xW-Xnd7-s2EJ-QgMfAC
--- Physical volume ---
PV Name /dev/mapper/cryptoraid
VG Name cryptoraid_vg_3TB
PV Size 10.91 TiB / not usable 1.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 2861055
Free PE 0
Allocated PE 2861055
PV UUID eKJdAi-5qdM-H6bp-U5Rk-fcMy-Z0gq-XOjX4x
baruch ~ # vgdisplay
--- Volume group ---
VG Name gentoo
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 11082
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.36 TiB
PE Size 4.00 MiB
Total PE 357685
Alloc PE / Size 25600 / 100.00 GiB
Free PE / Size 332085 / <1.27 TiB
VG UUID oRSx5O-ny9w-R9UQ-iZEC-QTEX-wQ2l-Ci0IkY
--- Volume group ---
VG Name cryptoraid_vg_3TB
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 17
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 10.91 TiB
PE Size 4.00 MiB
Total PE 2861055
Alloc PE / Size 2861055 / 10.91 TiB
Free PE / Size 0 / 0
VG UUID 147evb-72AE-dTmb-3Ajz-cH8l-LGKH-p8KK67
baruch ~ # lvdisplay
--- Logical volume ---
LV Path /dev/gentoo/rootfs
LV Name rootfs
VG Name gentoo
LV UUID CVFfds-7wy4-9ewz-ePIG-m3vP-K8C3-HbgdTp
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 1
LV Size 100.00 GiB
Current LE 25600
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0
--- Logical volume ---
LV Path /dev/cryptoraid_vg_3TB/cryptoraid_bu
LV Name cryptoraid_bu
VG Name cryptoraid_vg_3TB
LV UUID 5049Q9-Pbab-07jS-30aa-cnly-ea2f-iHTzNe
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 1
LV Size 10.91 TiB
Current LE 2861055
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:2
baruch ~ # df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/gentoo/rootfs 104806400 69998616 34807784 67% /
tmpfs 1632172 576 1631596 1% /run
dev 10240 0 10240 0% /dev
shm 4080424 0 4080424 0% /dev/shm
/dev/mapper/cryptoraid_vg_3TB-cryptoraid_bu 11717959680 10800954944 917004736 93% /bu
baruch ~ #
|
As you can see, the underlying /dev/md0 RAID did indeed grow to 14.55 TiB, but /dev/mapper/cryptoraid, which is the LUKS-decrypted view of that array, did *NOT* grow accordingly. Do I have to deactivate the volume groups, close the LUKS on the drive, then re-open the LUKS decryption to get /dev/mapper/cryptoraid to see the bigger underlying /dev/md0?
I could just reboot and let everything start up again from ground zero, but I am working this remotely, and my kvm-over-ip setup is not working properly now, and neither is my remote-controlled power switch. If I issued a reboot from the console, which is logged in over ssh, then I would lose the ssh connection. This means I would have to wait a few minutes, then attempt to log in again over ssh. If something went wrong, I could not see the boot screen, and I would be stuck until Friday afternoon, which is when I can next be physically present in front of the machine.
So I guess I will just have to wait until Friday afternoon, when I can reboot with confidence. This machine is my backup server, and it runs every night after midnight, so I don't want to take it down overnight.
_________________
The MyWord KJV Bible tool is at http://www.elilabs.com/~myword
Foghorn Leghorn is a Warner Bros. cartoon character.
Hu Administrator
Joined: 06 Mar 2007 Posts: 22673
Posted: Thu Jun 27, 2024 2:12 am Post subject:
Did you try cryptsetup resize? It looks like it should just detect the device size if you do not specify it.
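With the mapping name from the listings above, that should just be:
Code: |
# enlarge the dm-crypt mapping to fill the grown /dev/md0
# (on LUKS2 this may ask for the passphrase again)
cryptsetup resize cryptoraid
|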
Moriah Advocate
Joined: 27 Mar 2004 Posts: 2381 Location: Kentucky
Posted: Sat Jun 29, 2024 1:20 am Post subject:
cryptsetup resize worked. Thanks for the suggestion. I did not have to reboot; in fact, I did not even have to unmount the filesystem.
After the cryptsetup resize, I did a pvresize, followed by an lvextend, followed by an xfs_growfs, and now I have an additional 4 TB available for use.
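With the device and volume names from the outputs above, the full sequence was presumably along these lines:
Code: |
# enlarge the dm-crypt mapping over the grown /dev/md0
cryptsetup resize cryptoraid
# make the PV cover the grown mapping
pvresize /dev/mapper/cryptoraid
# hand all the new extents to the backup LV
lvextend -l +100%FREE /dev/cryptoraid_vg_3TB/cryptoraid_bu
# grow XFS online via its mount point
xfs_growfs /bu
|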