Gentoo Forums
[SOLVED] gentoo with root partition on LVM

 
Gentoo Forums Forum Index → Installing Gentoo
Vieri
l33t

Joined: 18 Dec 2005
Posts: 886

PostPosted: Fri Jan 15, 2021 11:57 am    Post subject: [SOLVED] gentoo with root partition on LVM

Hi,

I followed the Gentoo LVM Wiki, but I guess I've missed something because I can't mount the root partition when booting.

Some partitions are in MDADM RAID1 and the ROOT partition is in LVM mirror RAID1.

These are some of my relevant steps from livecd:

Code:
genkernel --install --mdadm --mdadm-config=/etc/mdadm.conf --lvm initramfs


I install grub with a line like this one:

Code:
linux /${K} root=/dev/ram0 init=/linuxrc ramdisk=8192 rootfstype=ext4 vga=0 domdadm dolvm real_root=/dev/vgroot/root


The installed system boots, but it fails to find /dev/vgroot/root, so I enter the busybox shell.

I see a message saying that I can copy /run/..something to a USB pendrive, but oddly enough I can't seem to mount the device.
Both lsusb and dmesg show the device as /dev/sdc1, but mount /dev/sdc1 /mnt/pendrive says "no such file or directory". Both paths exist, so I don't know what's up.

Anyway, the following commands show what I'm expecting:

Code:
lvm lvdisplay
lvm pvs


Specifically, I see /dev/vgroot/root, two mirrored volumes, etc., except for one suspicious line:

Code:
LV Status  Not available


In fact, if I reboot into the livecd again, I can see that the status is "available":

Code:
# lvdisplay
  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
  --- Logical volume ---
  LV Path                /dev/vgroot/root
  LV Name                root
  VG Name                vgroot
  LV UUID                qgWICf-J4s1-LozA-RnUL-pMBy-iumd-B1daVw
  LV Write Access        read/write
  LV Creation host, time livecd, 2021-01-08 14:20:45 +0000
  LV Status              available
  # open                 0
  LV Size                187.12 GiB
  Current LE             47904
  Mirrored volumes       2
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4


So it seems that I've done something wrong in the initramdisk, right?

What can I try?
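For what it's worth, the status field can be pulled out mechanically. Here is a small sketch run against the status line quoted above (the `sample` variable is a stand-in; on the live system you would pipe `lvm lvdisplay /dev/vgroot/root` into the awk directly):

```shell
# Extract the "LV Status" field from lvdisplay output.
# The sample is the line seen in the rescue shell; on a live system,
# replace the printf with:  lvm lvdisplay /dev/vgroot/root
sample='  LV Path                /dev/vgroot/root
  LV Status              Not available'
printf '%s\n' "$sample" | awk '/LV Status/ {print $3, $4}'
# prints: Not available
```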


Last edited by Vieri on Thu Jan 21, 2021 1:48 pm; edited 1 time in total

alamahant
Advocate

Joined: 23 Mar 2019
Posts: 3879

PostPosted: Fri Jan 15, 2021 12:13 pm

Hi,
Did you install lvm in the chroot?
The live CD has it installed, but maybe it's missing in the chroot.
Also, you have to enable the lvm service at boot (and lvmetad in the default runlevel).
I also much prefer dracut to genkernel; it's easier for beginners.
_________________
:)

Vieri
l33t

Joined: 18 Dec 2005
Posts: 886

PostPosted: Fri Jan 15, 2021 12:27 pm

Hi,

Yes, I emerged lvm2 in the chroot environment. I also added lvm to the boot runlevel. I did not add lvmetad.
However, the problem is occurring in the initramfs, and the LVM volume information seems to be there.
It's just that the status is "Not available". It may simply be that the initramdisk (genkernel) does not "activate" something at boot time.

Never tried dracut.
I guess I'll have to try.

Thanks

NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 54300
Location: 56N 3W

PostPosted: Fri Jan 15, 2021 12:28 pm

Vieri,

A few things.

There are two ways to do root in a LV on top of raid.
The one I use is to donate the block devices to an mdadm raid set, then donate the raid set to a PV.
This way the underlying raid is not visible to LVM.

The other way is to have LVM do it all. You appear to be doing that, as you get
Code:
# lvdisplay
...
  Mirrored volumes       2
...


The kernel parameters
Code:
root=/dev/ram0 init=/linuxrc ramdisk=8192
have been obsolete for a long time.
The initramfs is no longer a ramdrive and the default init= just works.

When you poke about in the initrd shell, you say everything you expect to see is there.
Can you mount /dev/vgroot/root by hand ?
If not, what is the error?
Is there anything in dmesg?

Busybox does not have less. Use
Code:
dmesg | more
as a pager.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.

Vieri
l33t

Joined: 18 Dec 2005
Posts: 886

PostPosted: Fri Jan 15, 2021 1:32 pm

No, /dev/vgroot/root is nowhere to be seen.
However lvm lvdisplay DOES show the "right info" about /dev/vgroot/root except that the LV status is "not available".
I will try to run vgscan and vgchange -a y within that SHELL.

Regarding the grub line, I understand that this would be enough?
Code:
linux /${K} root=/dev/ram0 rootfstype=ext4 vga=0 domdadm dolvm real_root=/dev/vgroot/root

NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 54300
Location: 56N 3W

PostPosted: Fri Jan 15, 2021 1:57 pm

Vieri,

Code:
vgchange -ay
should be enough.
In my install, /dev/vg/ are all symlinks. You need something to create symlinks.

What does
Code:
ls /dev/dm-*
tell and/or
Code:
ls -l /dev/mapper/*

You are looking for real block device files.

Vieri
l33t

Joined: 18 Dec 2005
Posts: 886

PostPosted: Fri Jan 15, 2021 2:14 pm

If I enter the SHELL when the boot process complains that the root LVM partition is not there, I can run these commands:

Code:
lvm vgscan
lvm vgchange -a y


and that does it. The root partition /dev/vgroot/root is there as well as all the /dev/mapper/* devices.
If I "exit" the SHELL the boot process resumes perfectly, and I can log into my system.

So there's obviously something wrong with the way I created my initramdisk with genkernel.

Vieri
l33t

Joined: 18 Dec 2005
Posts: 886

PostPosted: Fri Jan 15, 2021 2:19 pm

NeddySeagoon wrote:

What does
Code:
ls /dev/dm-*
tell


Does not exist.

NeddySeagoon wrote:

and/or
Code:
ls -l /dev/mapper/*



Just /dev/mapper/control.

As reported in my previous comment, I solve the issue by running vgchange -ay.
However, now I need to understand why genkernel did not generate a "proper" initramdisk.

NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 54300
Location: 56N 3W

PostPosted: Fri Jan 15, 2021 3:06 pm

Vieri,

I've not used genkernel, so I can't help further.

alamahant
Advocate

Joined: 23 Mar 2019
Posts: 3879

PostPosted: Fri Jan 15, 2021 3:46 pm

In the beginning, genkernel used to work for me for booting with root on LVM.
At some point it stopped.
Then I switched to dracut and have never looked back.

gentoo_ram
Guru

Joined: 25 Oct 2007
Posts: 475
Location: San Diego, California USA

PostPosted: Sun Jan 17, 2021 1:00 am

I'm using root on LVM with an initramfs created by genkernel.

Make sure you have LVM="yes" in your /etc/genkernel.conf
I also have these set: E2FSPROGS, BUSYBOX, LUKS, MDADM, DISKLABEL

For my boot string I have:

Code:
linux /vmlinuz-5.10.2-gentoo root=/dev/mapper/vg0-root ro rootfstype=ext4 dolvm
echo 'Loading initial ramdisk ...'
initrd /intel-uc.img /initramfs-5.10.2-gentoo.img

The intel-uc.img is for Intel firmware updates. You may or may not need those.

Then 'genkernel initramfs' and 'grub-mkconfig -o /boot/grub/grub.cfg'.
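For reference, the genkernel.conf fragment for the options listed above would look something like this. Treat it as a sketch: the exact option names and defaults vary between genkernel versions, so verify against the commented defaults shipped in your own /etc/genkernel.conf.

```shell
# /etc/genkernel.conf -- sketch of the options mentioned above.
# Check your own genkernel.conf for the exact names your version uses.
LVM="yes"
MDADM="yes"
BUSYBOX="yes"
E2FSPROGS="yes"
LUKS="yes"       # only needed if you use dm-crypt
DISKLABEL="yes"
```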

skellr
l33t

Joined: 18 Jun 2005
Posts: 976
Location: The Village, Portmeirion

PostPosted: Sun Jan 17, 2021 4:13 pm

Yeah, Genkernel's initramfs is deceptively easy to use. You don't need to add extra kernel parameters related to the initrd, like real_root=, root=/dev/ram0, or init=, as they will break things. Just a simple ( root=/dev/vgroot/root dolvm ) should work.

Vieri
l33t

Joined: 18 Dec 2005
Posts: 886

PostPosted: Mon Jan 18, 2021 2:01 pm

OK, thanks, everyone.

I do believe, however, that there is a bug in the way the init script generated by genkernel initializes the volume groups, or maybe it's a kernel "thing". I don't know, but here's what I've observed.

I booted the kernel from the grub entry that grub-mkconfig generates from 10_linux. Everything boots fine until the initramdisk tries to mount the root partition (LVM). It complains that /dev/mapper/vgroot-root does not exist.
Now, if I enter the SHELL manually, I can see that /dev/mapper/ is empty (except for "control") and /dev/vgroot is non-existent. I can also see from dmesg that, before trying to mount /dev/mapper/vgroot-root, the boot process did try to initialize the LVM volume groups. I see the following:

Code:
Executed: 'lvm vgscan'
Executed: 'udevadm settle --timeout=120'


and then:

Code:
Executed: 'lvm vgchange -ay --sysinit'
Executed: 'udevadm settle --timeout=120'
Executed: 'mkdir -p /newroot'


Finally:

Code:
[OK] Determining root device (trying /dev/mapper/vgroot-root) ...
[!!] Block device /dev/mapper/vgroot-root is not a valid root device
[!!] Could not find the root block device in /dev/mapper/vgroot-root


However, within the "rescueshell" I can manually run:

Code:
# lvm vgchange -ay --sysinit
  1 logical volume(s) in volume group 'vgroot' now active


Now, both /dev/mapper/ and /dev/vgroot/ contain the devices I'm expecting.

Simply exiting the "rescueshell" resumes the boot process, and my root partition mounts fine, bringing me to a login prompt.

So I guess something screws up at boot time. It might be a timeout, or the boot process being too fast; I don't know...
If no one has seen this before I guess I'll have to open a bug report.

Thanks

skellr
l33t

Joined: 18 Jun 2005
Posts: 976
Location: The Village, Portmeirion

PostPosted: Mon Jan 18, 2021 2:23 pm

Maybe rootdelay would help with that.
Code:
rootdelay[=<...>], rootwait[=<...>]
           Pauses for up to 3 seconds (or specified number of seconds) while waiting for root
           device to appear during initramfs root scanning.

Good luck.
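If you want to test that persistently, a hypothetical /etc/default/grub fragment would look like this (keeping the domdadm/dolvm parameters from your existing setup; regenerate grub.cfg afterwards):

```shell
# /etc/default/grub -- sketch only; keep your other existing parameters.
# After editing, regenerate the config with:
#   grub-mkconfig -o /boot/grub/grub.cfg
GRUB_CMDLINE_LINUX="domdadm dolvm rootdelay=5"
```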

Vieri
l33t

Joined: 18 Dec 2005
Posts: 886

PostPosted: Tue Jan 19, 2021 10:09 am

Thanks for the suggestion, but I don't think this will make any difference at all (I'll try it as soon as I get the chance though).

The reason I'm saying this is that the init script that Genkernel generates already runs a while loop to wait for the root device, and it seems to me that approx. 3 seconds should be more than enough.
In any case, even after the while loop breaks and I enter the "rescueshell" manually, a whole lot of time has already passed before I inspect /dev/mapper and run vgchange -ay again... So no matter how long the delay, the root partition will never show up unless vgchange is run again.

So I'm wondering why the first run within Genkernel's init script failed to bring the root volume up.
Unfortunately, I didn't take a peek at /run/initramfs/init.log while I was in the "rescueshell", and I won't be able to do so for the next two days.
Instead, I ran vgchange -ay manually, exited the shell, and finished booting the system as expected.

Now, does anyone know if it's possible to access init.log from the booted system, or has the initramdisk been wiped out of memory?

[EDIT] Maybe the init.log can tell me what went wrong when the init script tried to scan and init the volumes. Also, I might try to use the scandelay= kernel line option.

NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 54300
Location: 56N 3W

PostPosted: Tue Jan 19, 2021 10:30 am

Vieri,

The initramfs is abandoned when it exits via the pivot_root command.
There may be a kernel command line parameter to keep it.

alamahant
Advocate

Joined: 23 Mar 2019
Posts: 3879

PostPosted: Tue Jan 19, 2021 10:36 am

Is your /boot a separate partition?
If it is on encrypted LVM, then you need to add additional parameters to /etc/default/grub.
It is not a good idea to have /boot on encrypted or LVM storage.
Maybe add
Code:

GRUB_ENABLE_CRYPTODISK=y
GRUB_PRELOAD_MODULES="part_msdos part_gpt luks luks2 lvm"
 


Is your /dev/vgroot/root encrypted?
Did you use luks1 or luks2?
Your /boot should ONLY be luks1
Can you please post the output of
Code:

lsblk -fs

Last edited by alamahant on Tue Jan 19, 2021 10:57 am; edited 1 time in total

Vieri
l33t

Joined: 18 Dec 2005
Posts: 886

PostPosted: Tue Jan 19, 2021 10:55 am

I thought the initramfs was gone too. So I guess I'll need to reboot and check init.log to see if I can get something out of it. I will also try to add the scandelay= option, but that will be in a few days.

Meanwhile, the output of lsblk -fs (run after running vgchange manually, exiting the "rescueshell", and logging into the system) is:

Code:
# lsblk -fs
NAME                 FSTYPE            FSVER LABEL    UUID                                   FSAVAIL FSUSE% MOUNTPOINT
sda2                 vfat                             A483-1682
└─sda
sdb2                 vfat                             A485-1FD7
└─sdb
sdc1                 vfat                    KINGSTON 3C61-F846
└─sdc
md1
├─sda1               linux_raid_member                50191324-5a36-9401-cb20-1669f728008a
│ └─sda
└─sdb1               linux_raid_member                50191324-5a36-9401-cb20-1669f728008a
  └─sdb
md3                  ext2                             e54bbc62-bd17-487a-853a-ed14d0702a71
├─sda3               linux_raid_member                a1c952fe-ccf5-ea8d-cb20-1669f728008a
│ └─sda
└─sdb3               linux_raid_member                a1c952fe-ccf5-ea8d-cb20-1669f728008a
  └─sdb
md4                  swap                             cdb2cd63-f80d-4dc0-b71f-32040e820de2                  [SWAP]
├─sda4               linux_raid_member                644f2899-7ab5-22bd-cb20-1669f728008a
│ └─sda
└─sdb4               linux_raid_member                644f2899-7ab5-22bd-cb20-1669f728008a
  └─sdb
vgroot-root          ext4                             f6ad07cc-7a5c-4b62-9783-15f36e9ac00a    158.6G     8% /
├─vgroot-root_rmeta_0

│ └─sda5             LVM2_member                      PySdiK-NJWq-xiDr-vnxV-Eam7-eJ8f-gsGXHn
│   └─sda
├─vgroot-root_rimage_0
│                    ext4                             f6ad07cc-7a5c-4b62-9783-15f36e9ac00a
│ └─sda5             LVM2_member                      PySdiK-NJWq-xiDr-vnxV-Eam7-eJ8f-gsGXHn
│   └─sda
├─vgroot-root_rmeta_1

│ └─sdb5             LVM2_member                      xFq6vW-lECT-2VUY-ZqNJ-9tIZ-uC55-dlFbnS
│   └─sdb
└─vgroot-root_rimage_1
                     ext4                             f6ad07cc-7a5c-4b62-9783-15f36e9ac00a
  └─sdb5             LVM2_member                      xFq6vW-lECT-2VUY-ZqNJ-9tIZ-uC55-dlFbnS
    └─sdb


So, yes, /boot is a separate partition and is in mdadm RAID1.
The ROOT LVM is not encrypted.
No LUKS.

alamahant
Advocate

Joined: 23 Mar 2019
Posts: 3879

PostPosted: Tue Jan 19, 2021 11:01 am

Would it be awfully terrible if you created a simple plain no-RAID /boot partition and used that instead of the RAID one?
Because as I see it, you are now struggling with two tricky things instead of one:
/ on RAID LVM
and /boot on RAID.
Just eliminate the second and see if the first is also rectified.
You will of course need to reinstall and update grub and create a new initramfs.
Maybe you can try entering
Code:

GRUB_PRELOAD_MODULES="part_msdos part_gpt mdraid1x lvm"

in /etc/default/grub
before changing your partitions though....

Vieri
l33t

Joined: 18 Dec 2005
Posts: 886

PostPosted: Thu Jan 21, 2021 12:46 pm

The kernel parameter scandelay solved the issue.

If I install grub with the following then all works fine:

Code:
# grep ^GRUB_CMDLINE_LINUX /etc/default/grub
GRUB_CMDLINE_LINUX="domdadm dolvm scandelay=1"


Without scandelay=1 (or more), the initramfs is unable to activate the volume groups.
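As a sanity check before regenerating grub.cfg, you can confirm the parameter made it into the config line. This is a small sketch; the here-variable stands in for /etc/default/grub so the check itself runs anywhere, while the grub-mkconfig step belongs on the real system:

```shell
# Verify that scandelay is present in the kernel command line setting.
# On the real box, grep /etc/default/grub directly instead of $cfg.
cfg='GRUB_CMDLINE_LINUX="domdadm dolvm scandelay=1"'
if printf '%s\n' "$cfg" | grep -q 'scandelay=[0-9]'; then
    echo "scandelay set"
fi
# Then regenerate the config on the installed system:
#   grub-mkconfig -o /boot/grub/grub.cfg
```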