alamahant Advocate
Joined: 23 Mar 2019 Posts: 3918
Posted: Thu Apr 15, 2021 12:38 pm Post subject: Some thoughts on root on zfs installations |
Hi Guys,
Recently I played around a bit with root-on-ZFS installations.
I installed Gentoo (OpenRC), Debian, Arch, Devuan and Artix on ZFS.
Here is what I found out:
1. LUKS + root on ZFS works only on systemd systems.
However much I tried, I always had problems installing a LUKS-encrypted root on ZFS under OpenRC, with both dracut and genkernel.
I did not, however, try with bliss-initramfs.
Plain (unencrypted) root on ZFS works under OpenRC with no problem.
2. Dracut works perfectly; there is no reason to mess with genkernel whatsoever.
3. When creating the root pool, do not use the stanza given by Gentoo or Funtoo. Instead use the stanza from the Debian root-on-ZFS guide:
https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html
Something like:
Code: |
zpool create \
-o ashift=12 \
-O acltype=posixacl -O canmount=off -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on \
-O xattr=sa -O mountpoint=/ -R /mnt \
rpool /dev/mapper/luks1
|
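For context, the /dev/mapper/luks1 device in that stanza is a LUKS container opened beforehand; a minimal sketch of that step (the device and mapper names are placeholders, adjust to your layout):
Code: |
# Create the LUKS container on the target partition (destroys its contents)
cryptsetup luksFormat /dev/sdc13
# Open it; the decrypted device then appears as /dev/mapper/luks1
cryptsetup luksOpen /dev/sdc13 luks1
|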
If you choose to do it the Gentoo way, you will keep getting broken symlinks from /usr/lib and /usr/lib64 to /lib and /lib64, and /etc/portage/make.profile will be broken all the time.
I think it might have something to do with one of the dataset property directives, though I am not sure which.
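If make.profile does break, one way to re-point it from inside the chroot (the profile name is only an example; pick yours from the list):
Code: |
eselect profile list                          # show available profiles
eselect profile set default/linux/amd64/17.1  # re-create the make.profile symlink
|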
Generally speaking, ZFS is a bit hysterical, quirky, and mean.
This is what I use to create rpools lately:
Code: |
#!/bin/bash
# Create a root pool and dataset layout for a root-on-zfs install.
disk="/dev/sdc13"
pool_name="gentoorpool"
distro_name="gentoo"
chrootdir_name="/mnt/zmount"

if ! modprobe zfs 2> /dev/null; then
    echo "COULD NOT LOAD ZFS MODULE"
    exit 1
fi

# Refuse to run if the mount point already has anything in it.
mkdir -p "$chrootdir_name"
num_items=$(ls -A "$chrootdir_name" | wc -l)
[ "$num_items" -ne 0 ] && echo "$chrootdir_name is not empty, exiting" && exit 1

# Create the pool with the Debian-guide properties and a per-pool cachefile.
zpool create -o ashift=12 -O acltype=posixacl -O canmount=off -O compression=lz4 -O dnodesize=auto -O normalization=formD \
    -O relatime=on -O xattr=sa -o cachefile=/etc/zfs/$pool_name.cache \
    -O mountpoint=/ -R "$chrootdir_name" "$pool_name" "$disk" || exit 1

# Container dataset plus the actual root filesystem (canmount=noauto so the
# initramfs, not the zfs mount service, mounts it at boot).
zfs create -o canmount=off -o mountpoint=none $pool_name/ROOT
zfs create -o canmount=noauto -o mountpoint=/ $pool_name/ROOT/$distro_name
zfs mount $pool_name/ROOT/$distro_name

zfs create $pool_name/home
zfs create -o mountpoint=/root $pool_name/home/root
chmod 700 "$chrootdir_name/root"

# /var and /var/lib are non-mounting containers; their children mount normally.
zfs create -o canmount=off $pool_name/var
zfs create -o canmount=off $pool_name/var/lib
zfs create $pool_name/var/log
#zfs create $pool_name/var/spool
# Exclude churn-heavy datasets from zfs-auto-snapshot.
zfs create -o com.sun:auto-snapshot=false $pool_name/var/cache
zfs create -o com.sun:auto-snapshot=false $pool_name/var/tmp
chmod 1777 "$chrootdir_name/var/tmp"
zfs create $pool_name/opt
zfs create $pool_name/srv
#zfs create -o canmount=off $pool_name/usr
#zfs create $pool_name/usr/local
zfs create $pool_name/var/mail
zfs create $pool_name/var/www
zfs create -o com.sun:auto-snapshot=false $pool_name/var/lib/docker
zfs create -o com.sun:auto-snapshot=false $pool_name/var/lib/lxd
zfs create -o com.sun:auto-snapshot=false $pool_name/var/lib/flatpak
zfs create -o com.sun:auto-snapshot=false $pool_name/var/lib/nfs

# tmpfs /run so tools inside the chroot behave.
mkdir "$chrootdir_name/run"
mount -t tmpfs tmpfs "$chrootdir_name/run"
mkdir "$chrootdir_name/run/lock"

# Tell the bootloader which dataset is the root, and seed the cachefile.
zpool set bootfs=$pool_name/ROOT/$distro_name $pool_name
mkdir -p "$chrootdir_name/etc/zfs"
cp /etc/zfs/$pool_name.cache "$chrootdir_name/etc/zfs/zpool.cache"
|
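After running it, a quick sanity check of the resulting layout is worthwhile before unpacking a stage3; something like:
Code: |
# Show every dataset, where it mounts, and whether it auto-mounts
zfs list -r -o name,mountpoint,canmount gentoorpool
# Confirm the pool properties the bootloader depends on
zpool get bootfs,ashift gentoorpool
|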
4. I always use a separate ext4 /boot partition, NOT a ZFS pool, for /boot.
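A minimal sketch of that step (the partition /dev/sdc12 is a placeholder; adjust to your layout):
Code: |
# Format the dedicated boot partition and mount it inside the chroot tree
mkfs.ext4 /dev/sdc12
mkdir -p /mnt/zmount/boot
mount /dev/sdc12 /mnt/zmount/boot
|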
5. /etc/default/grub should look like this:
Code: |
### For dracut, LUKS + root-on-zfs:
GRUB_CMDLINE_LINUX="cryptdevice=UUID=<>:cryptvolume-name root=ZFS=<pool-name>/ROOT/<distroname> vfs.zfs.check_hostid=0 zfsforce=1 ..."
### For genkernel, LUKS + root-on-zfs:
GRUB_CMDLINE_LINUX="dozfs crypt_root=UUID=<>:cryptvolume-name real_root=ZFS=<pool-name>/ROOT/<distroname> vfs.zfs.check_hostid=0 zfsforce=1 ..."
### For both dracut and genkernel, PLAIN root-on-zfs (add dozfs for genkernel):
GRUB_CMDLINE_LINUX="root=ZFS=<pool-name>/ROOT/<distroname> vfs.zfs.check_hostid=0 zfsforce=1 ..."
...
GRUB_PRELOAD_MODULES="part_msdos part_gpt luks zfs"
|
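After editing it, regenerate GRUB from inside the chroot. Note that grub-probe sometimes fails to resolve ZFS vdev paths; exporting ZPOOL_VDEV_NAME_PATH=1 first is a common workaround (the --efi-directory value assumes a UEFI setup with the ESP at /boot/efi):
Code: |
export ZPOOL_VDEV_NAME_PATH=1   # work around grub-probe failing on ZFS vdev names
grub-install --target=x86_64-efi --efi-directory=/boot/efi
grub-mkconfig -o /boot/grub/grub.cfg
|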
6. In fstab, do NOT reference ANY ZFS dataset. List just /boot, /boot/efi and any other non-ZFS partitions you need mounted; an example follows.
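For instance, a complete fstab on such a system might be nothing more than (UUIDs left as placeholders):
Code: |
# Only non-ZFS filesystems here; ZFS datasets mount themselves
UUID=<boot-partition-uuid>  /boot      ext4  defaults  0 2
UUID=<esp-uuid>             /boot/efi  vfat  defaults  0 2
|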
7. For chrooting into a ZFS installation I use something like this:
Code: |
#!/bin/bash
# Import an existing root pool and chroot into it.
disk="/dev/sdc13"
pool_name="gentoorpool"
distro_name="gentoo"
chrootdir="/mnt/zmount"

modprobe zfs
# -f: the pool was last used by another system (the installed one);
# -R: mount everything relative to the chroot directory.
zpool import -f -d "$disk" -R "$chrootdir" "$pool_name"
# Mount the root dataset first (it is canmount=noauto), then everything else.
zfs mount $pool_name/ROOT/$distro_name
zfs mount -a

mount UUID=</boot-partition> "$chrootdir/boot"
# Standard pseudo-filesystem bind mounts for the chroot.
mount --types proc /proc "$chrootdir/proc"
mount --rbind /sys "$chrootdir/sys"
mount --make-rslave "$chrootdir/sys"
mount --rbind /dev "$chrootdir/dev"
mount --make-rslave "$chrootdir/dev"
mount --bind /dev/pts "$chrootdir/dev/pts"
chroot "$chrootdir" /bin/bash --login
|
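Once inside the chroot on Gentoo, the usual handbook environment refresh applies:
Code: |
env-update && source /etc/profile
export PS1="(chroot) $PS1"
|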
8. For unmounting the chroot:
Code: |
#!/bin/bash
pool_name="gentoorpool"
chrootdir="/mnt/zmount"
# Repeated on purpose: the recursive unmount can fail on busy or
# nested mounts the first time; the repeats pick up the stragglers.
umount -R "$chrootdir"
umount -R "$chrootdir"
umount -R "$chrootdir"
zpool export "$pool_name"
|
9. In multiboot environments, use
efibootmgr -n <root-on-zfs-id>
to boot into your ZFS system, because a normal GRUB update will not usually see the ZFS system.
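To find the right id, list the current entries first; e.g. (Boot0003 is just an example):
Code: |
efibootmgr -v        # list boot entries and their ids
efibootmgr -n 3      # boot entry Boot0003 on the NEXT boot only
reboot
|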
Is root on ZFS worth it?
Although ZFS feels a bit hysterical, it is FUN, and you can take snapshots before updating and roll back if things go wrong.
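A minimal example of that snapshot/rollback workflow (names match the script above; the snapshot label is arbitrary):
Code: |
# Snapshot the root dataset before a world update
zfs snapshot gentoorpool/ROOT/gentoo@pre-update
# If it goes wrong, roll back; this discards all changes made since the snapshot
zfs rollback gentoorpool/ROOT/gentoo@pre-update
|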
It takes a lot of memory, but it is fun.
Last edited by alamahant on Mon Jun 07, 2021 5:15 pm; edited 2 times in total
pjp Administrator
Joined: 16 Apr 2002 Posts: 20488
Posted: Mon Jun 07, 2021 3:57 pm Post subject:
alamahant wrote: | Generally speaking, ZFS is a bit hysterical, quirky, and mean. | Do you have any idea why this is? Is it still too new on Linux to have been refined? Is it possibly related to the "distributed development feature flag versioning", or perhaps something else? Any idea whether it behaves similarly on FreeBSD? (I'd guess not for Illumos and other OpenSolaris-based systems.)
For me, the primary advantage of ZFS was its simplicity and ease of management (excluding tuning for specialized workloads). ZoL seems to have eliminated that benefit. That it has trouble with something as simple as symlinks is concerning and makes me wonder how well boot environments work. I guess I'll have to forgo using it and wait until I can test it in a VM someday.
Thanks for the write-up. I was considering trying to implement it on a system that I'm having to migrate (HDD errors), but I'm going to wait for a different opportunity.