zen_desu Tux's lil' helper
Joined: 25 Oct 2024 Posts: 80
Posted: Thu Dec 26, 2024 4:50 am Post subject: genTree |
https://github.com/desultory/genTree
I've been working on this for the last month or so, but more so in the last week.
It's sorta like catalyst, but runs entirely unprivileged in a user namespace. I may add it to GURU soon, and would appreciate feedback.
One nice thing about it is that it builds in layers which should be OCI compatible, and it builds packages by default, so even if layers can't be reused, the packages can. _________________ µgRD dev
Wiki writer
pingtoo Veteran
Joined: 10 Sep 2021 Posts: 1345 Location: Richmond Hill, Canada
Posted: Thu Dec 26, 2024 5:09 am Post subject: |
Can you share the differences between catalyst and genTree?
What is the benefit of using genTree?
From a quick look it does not seem related to containers, so what are those layers you mention?
zen_desu Tux's lil' helper
Joined: 25 Oct 2024 Posts: 80
Posted: Thu Dec 26, 2024 5:26 am Post subject: |
pingtoo wrote: | Can you share the differences between catalyst and genTree?
What is the benefit of using genTree?
From a quick look it does not seem related to containers, so what are those layers you mention? |
It runs the entire process in a user namespace (more or less a container): Code: | nsexec(genTree.build_tree) |
where nsexec is a Python function I wrote which executes a function in a namespace. It later does some mounts and chroots:
https://github.com/desultory/zenlib/pull/9/files
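The namespace-entry idea can be sketched roughly like this. This is a hypothetical, simplified version of the concept, not the actual zenlib nsexec: fork, best-effort unshare of user and mount namespaces in the child, map the current uid to 0, then run the function. The namespace calls are wrapped in try/except purely to keep the sketch runnable on systems where they are unavailable or denied.

```python
import os

def nsexec(func, *args, **kwargs):
    """Run func in a forked child, attempting to enter a new user+mount
    namespace first. Hypothetical sketch of the idea, not the real zenlib API.
    Returns the child's exit status (0 on success, 1 if func raised)."""
    pid = os.fork()
    if pid:  # parent: wait for the namespaced child and report its exit code
        return os.WEXITSTATUS(os.waitpid(pid, 0)[1])
    try:
        # Best-effort namespace setup: os.unshare needs Python >= 3.12 and
        # kernel support for unprivileged user namespaces, so failures are
        # tolerated here just so the sketch stays runnable anywhere.
        if hasattr(os, "unshare"):
            try:
                os.unshare(os.CLONE_NEWUSER | os.CLONE_NEWNS)
                # Map the invoking uid to 0 inside the new user namespace,
                # which is what makes unprivileged "root" work in the chroot
                with open("/proc/self/uid_map", "w") as f:
                    f.write(f"0 {os.getuid()} 1")
            except OSError:
                pass  # no namespace available; run unconfined for the sketch
        func(*args, **kwargs)
        os._exit(0)
    except Exception:
        os._exit(1)
```

With this shape, `nsexec(build_tree)` runs the whole build in a child process whose mounts vanish when it exits, which is what makes the later cleanup behavior possible.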
Code: | def init_namespace(self):
"""Initializes the namespace for the current config"""
self.logger.info("[%s] Initializing namespace", colorize(self.config.name, "blue"))
self.mount_seed_overlay()
self.mount_system_dirs()
self.bind_mount(self.config.system_repos, self.config.sysroot / "var/db/repos")
self.bind_mount("/etc/resolv.conf", self.config.sysroot / "etc/resolv.conf", file=True)
self.bind_mount(self.config.pkgdir, self.config.sysroot / "var/cache/binpkgs", readonly=False)
self.bind_mount(self.config.build_dir, self.config.build_mount, recursive=True, readonly=False)
self.bind_mount(self.config.config_dir, self.config.config_mount, recursive=True, readonly=False)
self.logger.info("Chrooting into: %s", colorize(self.config.sysroot, "red"))
chroot(self.config.sysroot) |
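The mount_seed_overlay step above boils down to a plain overlayfs mount, with the seed as the read-only lowerdir and a writable upperdir capturing all changes. A tiny hypothetical helper (not genTree's actual code) showing the option string that arrangement uses:

```python
from pathlib import Path

def overlay_options(lower, upper, work):
    """Build the overlayfs mount option string that layers a writable
    upper dir over a read-only seed (the lowerdir). Hypothetical helper,
    shown only to illustrate the mount layout the seed overlay relies on."""
    return f"lowerdir={Path(lower)},upperdir={Path(upper)},workdir={Path(work)}"

# Equivalent to: mount -t overlay overlay -o <options> <sysroot>
opts = overlay_options("seeds/stage3", "seeds/stage3_upper", "seeds/stage3_work")
```

The workdir is an overlayfs requirement: an empty directory on the same filesystem as the upperdir, used internally for atomic operations.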
Concerning containers, it creates image layers that are OCI compatible: deleted files are marked with a ".wh.<filename>" whiteout entry, and when a layer is deployed, any file matching a whiteout is deleted from the lower layers.
https://github.com/opencontainers/image-spec/blob/main/layer.md#whiteouts
https://github.com/desultory/genTree/blob/main/src/genTree/oci_mixins.py
https://github.com/desultory/genTree/blob/main/src/genTree/gen_tree_tar_filter.py
The oci filter is used for importing layers, while the generic tar filter is used for packing.
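Deploying a layer with whiteouts can be sketched as follows: walk the layer tar, and for each member named `.wh.<name>`, delete `<name>` from the target tree instead of extracting it. This is a minimal sketch of the OCI whiteout rule, not genTree's actual filter code:

```python
import os
import shutil
import tarfile

def apply_layer(layer_tar, dest):
    """Extract an OCI-style layer tar onto dest, honouring whiteouts:
    a member 'dir/.wh.foo' deletes 'dest/dir/foo' instead of being
    extracted. Minimal sketch; the real genTree filters do more."""
    with tarfile.open(layer_tar) as tar:
        for member in tar.getmembers():
            dirname, basename = os.path.split(member.name)
            if basename.startswith(".wh."):
                # Whiteout entry: remove the shadowed path from lower layers
                victim = os.path.join(dest, dirname, basename[len(".wh."):])
                if os.path.isdir(victim) and not os.path.islink(victim):
                    shutil.rmtree(victim)
                elif os.path.lexists(victim):
                    os.remove(victim)
            else:
                tar.extract(member, dest)
```

Extracting layers in order with this rule reproduces the merged filesystem, which is why the layer tarballs can double as both OCI layers and plain "building blocks" for a rootfs.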
One advantage is that it's much easier to get started: you can import a stage3 as a "seed", then use that with simple configs to build minimal filesystem images which can be used for containers.
It does not require root to run.
Some output may make this clearer:
Code: | desu@amazon /mnt/closet/genTree $ genTree nginx.toml
INFO | [nginx] Initializing namespace
INFO | Mounting overlayfs on: /mnt/closet/genTree/seeds/stage3-openrc_sysroot
INFO | [nginx] Mounting system directories in: /mnt/closet/genTree/seeds/stage3-openrc_sysroot
INFO | Mounting /proc over: /mnt/closet/genTree/seeds/stage3-openrc_sysroot/proc
INFO | Mounting /sys over: /mnt/closet/genTree/seeds/stage3-openrc_sysroot/sys
INFO | Mounting /dev over: /mnt/closet/genTree/seeds/stage3-openrc_sysroot/dev
INFO | Mounting /run over: /mnt/closet/genTree/seeds/stage3-openrc_sysroot/run
INFO | Mounting /var/db/repos over: /mnt/closet/genTree/seeds/stage3-openrc_sysroot/var/db/repos
INFO | Mounting /etc/resolv.conf over: /mnt/closet/genTree/seeds/stage3-openrc_sysroot/etc/resolv.conf
INFO | Mounting /mnt/closet/genTree/pkgdir over: /mnt/closet/genTree/seeds/stage3-openrc_sysroot/var/cache/binpkgs
INFO | Mounting /mnt/closet/genTree/builds over: /mnt/closet/genTree/seeds/stage3-openrc_sysroot/builds
INFO | Mounting /mnt/closet/genTree/config over: /mnt/closet/genTree/seeds/stage3-openrc_sysroot/config
INFO | Chrooting into: /mnt/closet/genTree/seeds/stage3-openrc_sysroot
INFO | Building tree for: nginx
INFO | [nginx.toml] Building base: tini
INFO | [tini.toml] Building base: glibc
INFO | [glibc.toml] Building base: base
WARNING | [base] Skipping build, layer archive exists: /builds/base.tar
WARNING | [glibc] Skipping build, layer archive exists: /builds/glibc.tar
WARNING | [tini] Skipping build, layer archive exists: /builds/tini.tar
INFO | [nginx.toml] Building base: gcc
INFO | [gcc.toml] Building base: base
WARNING | [base] Skipping build, layer archive exists: /builds/base.tar
INFO | [base] Unpacking base layer to build root: /builds/gcc_lower
INFO | [gcc] Mounting build overlayfs on: /builds/gcc
INFO | [gentoo] Setting portage profile: default/linux/amd64/23.0
INFO | [gcc] emerge --root /builds/gcc --jobs 8 --verbose=y --nodeps --usepkg=y --with-bdeps=n sys-devel/gcc
INFO | [gcc] Packing tree: /builds/gcc_upper
INFO | [gcc] Created archive: /builds/gcc.tar (313.49 MB)
WARNING | [nginx] Cleaning root: /builds/nginx
WARNING | [nginx] Cleaning root: /builds/nginx_lower
WARNING | [nginx] Cleaning root: /builds/nginx_work
WARNING | [nginx] Cleaning root: /builds/nginx_upper
INFO | [base] Unpacking base layer to build root: /builds/nginx_lower
INFO | [glibc] Unpacking base layer to build root: /builds/nginx_lower
INFO | [tini] Unpacking base layer to build root: /builds/nginx_lower
INFO | [gcc] Unpacking base layer to build root: /builds/nginx_lower
INFO | [nginx] Mounting build overlayfs on: /builds/nginx
INFO | [nginx] Mounting config overlay: /config/nginx
INFO | [gentoo] Setting portage profile: default/linux/amd64/23.0
INFO | [nginx] emerge --root /builds/nginx --jobs 8 --verbose=y --usepkg=y --with-bdeps=n www-servers/nginx
INFO | [nginx] Unmerging packages: sys-devel/gcc
INFO | [nginx] emerge --root /builds/nginx --unmerge sys-devel/gcc
INFO | [nginx] Packing tree: /builds/nginx
INFO | [nginx] Created archive: /builds/nginx.tar (178.42 MB) |
pingtoo Veteran
Joined: 10 Sep 2021 Posts: 1345 Location: Richmond Hill, Canada
Posted: Fri Dec 27, 2024 3:13 pm Post subject: |
I am curious: how do those .tar files come about? Do they get rebuilt every time, or are they downloaded from somewhere?
I use catalyst for my system updates, and I do the rebuild in a Docker container, so I have a basic understanding of this, but I don't get a sense of why you take this approach. Especially the layering: is your vision that these layers are kept somehow so they can be brought back into use by some configuration parameters?
I read the source code but did not find how the .tar files come into play.
zen_desu Tux's lil' helper
Joined: 25 Oct 2024 Posts: 80
Posted: Fri Dec 27, 2024 4:58 pm Post subject: |
pingtoo wrote: | I am curious: how do those .tar files come about? Do they get rebuilt every time, or are they downloaded from somewhere?
I use catalyst for my system updates, and I do the rebuild in a Docker container, so I have a basic understanding of this, but I don't get a sense of why you take this approach. Especially the layering: is your vision that these layers are kept somehow so they can be brought back into use by some configuration parameters?
I read the source code but did not find how the .tar files come into play. |
It builds those tar files.
So it starts with the "seed", which could be a stage3. It uses that as the "lowerdir" for an overlayfs which it chroots into. This means the seed works as the base for the sysroot, but can be modified. I added an option "seed_clean" which wipes the upper layer before chrooting if you want a clean build.
Once chrooted, it mounts each "layer build" in an overlayfs and runs emerge --root there. This installs the packages to that dir, and the "upperdir" only keeps changes relative to the lower dir. If layers are built on top of other layers, portage already sees the deps there and doesn't rebuild them. When packed, it only takes the difference between the build layer and what's below it. If there was a deletion, it adds a "whiteout" file so that, when the layer is deployed, the corresponding files are deleted from lower layers.
It does not download any images; it just saves these "layer diffs" to tarballs and can reuse them (by name). I'm considering making some kind of hashing scheme, or manifests, so things can be reused more easily. I'm not sure yet.
The main point of the layering is to help reuse more stuff, especially across final images. Using layers is much, much faster than portage with a package cache, because it can just extract and move on.
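The "only pack the difference" idea can be sketched as a tree comparison: paths present (or changed) only in the merged build result go into the layer, and paths that existed below but are now gone become `.wh.` whiteout names. This is a hypothetical sketch of the concept using plain file comparison; overlayfs actually records deletions differently (as character devices in the upperdir), and genTree's real code works from the upperdir:

```python
import os

def layer_diff(lower, merged):
    """Compare a lower tree with the merged build result and return
    (new_or_changed, whiteouts): paths present only in merged (or with
    different content), and '.wh.' names for paths deleted from lower.
    A sketch of the idea, not genTree's actual implementation."""
    def walk(root):
        files = {}
        for dirpath, _, names in os.walk(root):
            for n in names:
                p = os.path.join(dirpath, n)
                with open(p, "rb") as f:
                    files[os.path.relpath(p, root)] = f.read()
        return files

    lower_files, merged_files = walk(lower), walk(merged)
    # Files to include in the layer: new paths, or same path with new content
    changed = sorted(p for p, data in merged_files.items()
                     if lower_files.get(p) != data)
    # Files deleted relative to lower become OCI whiteout entries
    whiteouts = sorted(
        os.path.join(os.path.dirname(p), ".wh." + os.path.basename(p))
        for p in lower_files if p not in merged_files)
    return changed, whiteouts
```

The layer tar then contains just `changed` plus zero-length whiteout members, which is why stacking prebuilt layers is so much cheaper than re-running portage.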
From gen_tree_config:
Code: |
@property
def buildname(self):
return f"{self.seed}-{self.name}"
@property
def overlay_root(self):
return Path("/builds") / self.buildname
@property
def layer_archive(self):
return self.overlay_root.with_suffix(self.archive_extension)
@property
def output_archive(self):
if self.output_file:
return Path("/builds") / self.output_file
return self.overlay_root.with_stem(f"{self.buildname}-full").with_suffix(self.archive_extension)
|
From genTree:
Code: | def build(self, config):
"""Builds all bases and branches under the current config
Builds/installs packages in the config build root
Unmerges packages in the config unmerge list
Packs the build tree into the config layer archive if no_pack is False"""
self.build_bases(config=config)
if config.layer_archive.exists() and not config.rebuild:
return config.logger.warning(
" ... [%s] Skipping build, layer archive exists: %s",
colorize(config.name, "blue"),
colorize(config.layer_archive, "cyan"),
)
self.prepare_build(config=config)
self.deploy_bases(config=config)
self.mount_root_overlay(config=config)
self.mount_config_overlay(config=config)
config.set_portage_profile()
config.set_portage_env()
self.perform_emerge(config=config)
self.perform_unmerge(config=config)
config.cleaner.clean(config.overlay_root)
self.pack(config=config)
|
Last edited by zen_desu on Fri Dec 27, 2024 6:02 pm; edited 1 time in total
pingtoo Veteran
Joined: 10 Sep 2021 Posts: 1345 Location: Richmond Hill, Canada
Posted: Fri Dec 27, 2024 5:17 pm Post subject: |
I don't code Python, so a lot of my understanding is guesswork.
Is the resulting tarball envisioned to be used to create an OCI image? For example, docker import <genTree result .tar>?
And does the resulting tarball filter out all the Gentoo-specific artifacts, i.e. does it become a generic (non-distro-specific) OCI image?
I see you rearranged your code since my last look. Do you plan to support picking a "portage tree" from a specific point in the past? I see the code picks up the current /var/db/repos.
Thank you for sharing your code; I learned something from your design.
zen_desu Tux's lil' helper
Joined: 25 Oct 2024 Posts: 80
Posted: Fri Dec 27, 2024 5:48 pm Post subject: |
pingtoo wrote: | I don't code Python, so a lot of my understanding is guesswork.
Is the resulting tarball envisioned to be used to create an OCI image? For example, docker import <genTree result .tar>?
|
I mostly made this to make images for LXC containers, but it should work with Docker or whatever else.
lxc-create likes xz'd files, so I've just been running xz on the tar genTree makes, then I can run:
Code: | lxc-create -t local -B btrfs -n unbound-new -- --fstree /mnt/closet/genTree/unbound.tar.xz |
pingtoo wrote: | And does the resulting tarball filter out all the Gentoo-specific artifacts, i.e. does it become a generic (non-distro-specific) OCI image?
|
I added some filters which can help do this. I generally keep the vardbpkg stuff while making layers because it helps portage. In the final layer, it can be removed to save space, as it's not needed if there will be no emerges over that layer.
pingtoo wrote: |
I see you rearranged your code since my last look. Do you plan to support picking a "portage tree" from a specific point in the past? I see the code picks up the current /var/db/repos.
|
Yeah, sorry, I'm changing a lot and improving it as I go. I've been generalizing some of the config a bit more, and added a "clean" phase which deletes stuff before packing, to force "whiteouts" to be made.
Currently, it should be able to use any arbitrary path from the system as "/var/db/repos", by setting 'system_repos: Path = "/var/db/repos"'. Additionally, you can import any "seed" you want; it can be a tarball released today, or years ago. I think one of the things I'm going to add next is "seed update" options, where it updates the upper layer of the seed before running builds.
pingtoo wrote: |
Thank you for sharing your code; I learned something from your design. |
You're welcome, I hope it's relatively easy to understand. The main reason I had a hard time using catalyst is that it's a lot of bash and python, and it requires root to use. I wanted something that has simpler config, doesn't require root, and has more options for removing "unneeded" components from images.
Here's how it builds that unbound image:
Code: |
desu@amazon /mnt/closet/genTree $ time genTree unbound.toml
INFO | [unbound] Initializing namespace
INFO | ~/* Mounting seed overlay on: /home/desu/.local/share/genTree/seeds/stage3-hardened_sysroot
INFO | *v* Mounting system directories in: /home/desu/.local/share/genTree/seeds/stage3-hardened_sysroot
INFO | *>* Mounting /proc over: /home/desu/.local/share/genTree/seeds/stage3-hardened_sysroot/proc
INFO | *>* Mounting /sys over: /home/desu/.local/share/genTree/seeds/stage3-hardened_sysroot/sys
INFO | *>* Mounting /dev over: /home/desu/.local/share/genTree/seeds/stage3-hardened_sysroot/dev
INFO | *>* Mounting /run over: /home/desu/.local/share/genTree/seeds/stage3-hardened_sysroot/run
INFO | +>+ Mounting /var/db/repos over: /home/desu/.local/share/genTree/seeds/stage3-hardened_sysroot/var/db/repos
INFO | .>. Mounting /etc/resolv.conf over: /home/desu/.local/share/genTree/seeds/stage3-hardened_sysroot/etc/resolv.conf
INFO | +-+ Mounting /home/desu/.local/share/genTree/pkgdir over: /home/desu/.local/share/genTree/seeds/stage3-hardened_sysroot/var/cache/binpkgs
INFO | *-* Mounting /home/desu/.local/share/genTree/builds over: /home/desu/.local/share/genTree/seeds/stage3-hardened_sysroot/builds
INFO | *-* Mounting /home/desu/.local/share/genTree/config over: /home/desu/.local/share/genTree/seeds/stage3-hardened_sysroot/config
INFO | -/~ Chrooting into: /home/desu/.local/share/genTree/seeds/stage3-hardened_sysroot
INFO | +++ Building tree for: unbound
INFO | +.+ [unbound.toml] Building base: tini
INFO | +.+ [tini.toml] Building base: glibc-stripped
INFO | +.+ [glibc-stripped.toml] Building base: base
WARNING | -.- [base] Cleaning root: /builds/stage3-hardened-base
INFO | =^= [base] Mounting build overlay on: /builds/stage3-hardened-base
INFO | ~-~ [gentoo] Setting portage profile: default/linux/amd64/23.0/no-multilib/hardened
INFO | .~. [base] Setting USE flags: build
INFO | [E] [base] emerge --root /builds/stage3-hardened-base --jobs=8 --verbose=y --usepkg=y --with-bdeps=n --oneshot baselayout
INFO | >:- [base] Packing tree: /builds/stage3-hardened-base.tar
INFO | [base] Created archive: /builds/stage3-hardened-base.tar (0.29 MB)
WARNING | -.- [glibc-stripped] Cleaning root: /builds/stage3-hardened-glibc-stripped
INFO | =^= [glibc-stripped] Mounting build overlay on: /builds/stage3-hardened-glibc-stripped
INFO | ~-~ [gentoo] Setting portage profile: default/linux/amd64/23.0/no-multilib/hardened
INFO | [E] [glibc-stripped] emerge --root /builds/stage3-hardened-glibc-stripped --jobs=8 --verbose=y --nodeps --usepkg=y --with-bdeps=n --oneshot sys-libs/glibc
INFO | >:- [glibc-stripped] Packing tree: /builds/stage3-hardened-glibc-stripped.tar
INFO | [glibc-stripped] Created archive: /builds/stage3-hardened-glibc-stripped.tar (17.60 MB)
WARNING | -.- [tini] Cleaning root: /builds/stage3-hardened-tini
INFO | =^= [tini] Mounting build overlay on: /builds/stage3-hardened-tini
INFO | ~-~ [gentoo] Setting portage profile: default/linux/amd64/23.0/no-multilib/hardened
INFO | [E] [tini] emerge --root /builds/stage3-hardened-tini --jobs=8 --verbose=y --usepkg=y --with-bdeps=n sys-process/tini
INFO | >:- [tini] Packing tree: /builds/stage3-hardened-tini.tar
INFO | [tini] Created archive: /builds/stage3-hardened-tini.tar (0.83 MB)
INFO | +.+ [unbound.toml] Building base: openssl
INFO | +.+ [openssl.toml] Building base: ca-certificates
WARNING | -.- [ca-certificates] Cleaning root: /builds/stage3-hardened-ca-certificates
INFO | =^= [ca-certificates] Mounting build overlay on: /builds/stage3-hardened-ca-certificates
INFO | ~-~ [gentoo] Setting portage profile: default/linux/amd64/23.0/no-multilib/hardened
INFO | [E] [ca-certificates] emerge --root /builds/stage3-hardened-ca-certificates --jobs=8 --verbose=y --nodeps --usepkg=y --with-bdeps=n --oneshot app-misc/ca-certificates
INFO | >:- [ca-certificates] Packing tree: /builds/stage3-hardened-ca-certificates.tar
INFO | [ca-certificates] Created archive: /builds/stage3-hardened-ca-certificates.tar (1.31 MB)
WARNING | -.- [openssl] Cleaning root: /builds/stage3-hardened-openssl
INFO | =^= [openssl] Mounting build overlay on: /builds/stage3-hardened-openssl
INFO | ~-~ [gentoo] Setting portage profile: default/linux/amd64/23.0/no-multilib/hardened
INFO | [E] [openssl] emerge --root /builds/stage3-hardened-openssl --jobs=8 --verbose=y --usepkg=y --with-bdeps=n --oneshot dev-libs/openssl
INFO | >:- [openssl] Packing tree: /builds/stage3-hardened-openssl.tar
INFO | [openssl] Created archive: /builds/stage3-hardened-openssl.tar (8.10 MB)
WARNING | -.- [unbound] Cleaning root: /builds/stage3-hardened-unbound
INFO | =^= [unbound] Mounting build overlay on: /builds/stage3-hardened-unbound
INFO | =-= [unbound] Mounting config overlay on: /config/unbound
INFO | ~-~ [gentoo] Setting portage profile: default/linux/amd64/23.0/no-multilib/hardened
INFO | [E] [unbound] emerge --root /builds/stage3-hardened-unbound --jobs=8 --verbose=y --usepkg=y --with-bdeps=n net-dns/unbound
INFO | >:- [unbound] Packing tree: /builds/stage3-hardened-unbound.tar
INFO | [unbound] Created archive: /builds/stage3-hardened-unbound.tar (5.88 MB)
INFO | V:V [unbound] Packing all layers into: /builds/stage3-hardened-unbound-full.tar
INFO | #%- [unbound] Packing bases: /builds/stage3-hardened-base.tar, /builds/stage3-hardened-glibc-stripped.tar, /builds/stage3-hardened-tini.tar, /builds/stage3-hardened-ca-certificates.tar, /builds/stage3-hardened-openssl.tar, /builds/stage3-hardened-unbound.tar
INFO | ~%> [unbound] Refiltering archive: /builds/stage3-hardened-unbound-full.tar (33.83 MB)
INFO | [unbound] Created final archive: /builds/stage3-hardened-unbound-full.tar (32.44 MB)
real 0m41.878s
user 0m29.973s
sys 0m19.802s
|
pingtoo Veteran
Joined: 10 Sep 2021 Posts: 1345 Location: Richmond Hill, Canada
Posted: Fri Dec 27, 2024 6:01 pm Post subject: |
Thank you very much for the explanation.
My plan is to build a 'distcc' helper container with the compiler toolchain. I have been thinking of using overlayfs because I want the distcc container to only have distccd + gcc/binutils/glibc.
I could use your code as a base (not necessarily executing the genTree code, but using its logic).
In my recent development (for container image creation) I found that using rsync (with --include and --exclude) to perform the filtering makes the process look clean. Maybe you can consider it too.
Thanks.
zen_desu Tux's lil' helper
Joined: 25 Oct 2024 Posts: 80
Posted: Fri Dec 27, 2024 6:07 pm Post subject: |
pingtoo wrote: | Thank you very much for the explanation.
My plan is to build a 'distcc' helper container with the compiler toolchain. I have been thinking of using overlayfs because I want the distcc container to only have distccd + gcc/binutils/glibc.
I could use your code as a base (not necessarily executing the genTree code, but using its logic).
In my recent development (for container image creation) I found that using rsync (with --include and --exclude) to perform the filtering makes the process look clean. Maybe you can consider it too.
Thanks. |
I'm doing this on a single server; I tried distcc in the past but it didn't seem to be much faster.
If you wanted to make containers which only have compiler utils, this scheme may work very well, because you could make layers for toolchains which differ slightly.
Using an overlayfs is mostly nice if you're:
a) trying to write over something that you'd normally only have read privs for
b) tracking changes made over another fs tree without affecting it
Here, I sorta use both, but mostly the change tracking to make it easier to make layers which only have what is strictly required.
Part of the reason I'm focusing on the filtering quite a bit is that you can do fancier filtering if you inspect every file/header as it's added to the tarfile. It's surprisingly not that slow in Python; xz-compressing the result with the xz utility at the end usually takes longer than it takes Python to pack/filter/refilter the tars (not that this is a fair comparison, just that filtering isn't the slowest part, and it can make xz's job easier).
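Header-level filtering of that kind can be sketched with the standard tarfile module: repack an archive while inspecting each member and dropping entries under excluded prefixes. The prefix list here is a hypothetical example; genTree's real filters do more (e.g. whiteout rewriting):

```python
import io
import tarfile

def filter_tar(src_bytes, excluded_prefixes=("var/db/pkg",)):
    """Repack a tar archive, inspecting every member header and dropping
    entries under the excluded prefixes. A sketch of header-level
    filtering, not genTree's actual filter classes."""
    out = io.BytesIO()
    with tarfile.open(fileobj=io.BytesIO(src_bytes)) as src, \
         tarfile.open(fileobj=out, mode="w") as dst:
        for member in src.getmembers():
            name = member.name.lstrip("./")
            if any(name.startswith(p) for p in excluded_prefixes):
                continue  # skip unwanted entries without ever extracting them
            # Regular files carry data; other member types (dirs, links) don't
            fileobj = src.extractfile(member) if member.isfile() else None
            dst.addfile(member, fileobj)
    return out.getvalue()
```

Because the decision is made per header, no files ever touch the disk, which is part of why this stays fast compared to extract-filter-repack approaches.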
One part of the design which may not be obvious is that all mounts it makes are unmounted when the namespaced process dies (just the build tree method). This means changes may persist in upperdirs, but the overlay and bind mounts are gone. I think this helps keep cleanup simpler. The early versions required root, and while cleaning things up I kept nearly deleting system repos and the like, and would get mount errors when mounting on points which already had a mount. I feel small things like this can make it much easier to use.
pjp Administrator
Joined: 16 Apr 2002 Posts: 20533
Posted: Fri Dec 27, 2024 8:31 pm Post subject: Re: genTree |
zen_desu wrote: | It's sorta like catalyst, but runs entirely unprivileged in a user namespace. I may add it to GURU soon, and would appreciate feedback. | Interesting! catalyst requiring root and a lack of documentation is why I don't use it.
Based on other comments here, is genTree suitable for managing chroots? More specifically, the chroots are used to maintain binary packages for other systems. I've also been meaning to make an install/rescue ISO image.
Since you mentioned containers as your primary goal, I'm curious how flexible it is for non-container use. _________________ Quis separabit? Quo animo?
zen_desu Tux's lil' helper
Joined: 25 Oct 2024 Posts: 80
Posted: Fri Dec 27, 2024 8:38 pm Post subject: Re: genTree |
pjp wrote: | zen_desu wrote: | It's sorta like catalyst, but runs entirely unprivileged in a user namespace. I may add it to GURU soon, and would appreciate feedback. | Interesting! catalyst requiring root and a lack of documentation is why I don't use it.
Based on other comments here, is genTree suitable for managing chroots? More specifically, the chroots are used to maintain binary packages for other systems. I've also been meaning to make an install/rescue ISO image.
Since you mentioned containers as your primary goal, I'm curious how flexible it is for non-container use. |
The way I see it, making a "full" container image (as this does) is comparable to a full "system". The only bits that are missing, if you wanted it to be bootable, are a filesystem and boot related things.
I'm considering if I want to make a separate sort of system for managing boot related things.
The lack of documentation, as you mentioned, is another reason I find catalyst hard to use. I rewrote most of this page: https://wiki.gentoo.org/wiki/Catalyst/Custom_Media_Image
One of the really painful parts is that most spec files I found are made by releng, and some portions are filled in by their tooling. The process of getting catalyst alone to work for something simple was rather complex imo, and a lot of the documentation that does exist is rather out of date.
I was using catalyst a bit and was able to get it to make images that _mostly_ work with ugrd for a bootable ISO, but there is no way to simply copy a file in. I would have to make an alternate ebuild with different default config to do what I want. If I could make a "ugrd-with-livecd-config" layer, that could be used instead of a package, and would be easier to maintain imo.
I'm not sure what you mean by managing chroots; it kind of does that with the "seed" usage, in that it makes an overlayfs over the seed, chroots into that, and currently cleans the upper layer unless you tell it not to. I think the next thing I'm going to add is the ability to run a "seed update" sequence, which may just be a "layer" target which emerges that stuff to the host system. I may restrict the "cleaner" usage so it doesn't do something like nuke your mounted /config or /build dirs.
You can set a different "pkgdir" in the config, or use a "config overlay" where that is defined in the make.conf. Currently, it keeps all packages in one dir unless you set an override. I think each "seed" could have its own package dir if the mounting were toggled here:
https://github.com/desultory/genTree/blob/main/src/genTree/genTree.py#L298
Code: |
def init_namespace(self):
"""Initializes the namespace for the current config
If clean_seed is True, cleans the seed overlay upper and work dirs"""
self.logger.info("[%s] Initializing namespace", colorize(self.config.name, "blue"))
if self.config.clean_seed:
self.clean_seed_overlay()
self.mount_seed_overlay()
self.mount_system_dirs()
self.bind_mount(self.config.system_repos, self.config.sysroot / "var/db/repos")
self.bind_mount("/etc/resolv.conf", self.config.sysroot / "etc/resolv.conf", file=True)
---> self.bind_mount(self.config.pkgdir, self.config.sysroot / "var/cache/binpkgs", readonly=False)
self.bind_mount(self.config.build_dir, self.config.build_mount, recursive=True, readonly=False)
self.bind_mount(self.config.config_dir, self.config.config_mount, recursive=True, readonly=False)
self.logger.info(" -/~ Chrooting into: %s", colorize(self.config.sysroot, "red"))
chroot(self.config.sysroot)
|
I could possibly just have it mount pkgdirs like config overlays, where you do it by name and it has dirs with that name under the config root. I have it using one pkgdir by default to hopefully reduce rebuilds. I'm not sure if this will cause issues later, but it works well so far.
pingtoo Veteran
Joined: 10 Sep 2021 Posts: 1345 Location: Richmond Hill, Canada
Posted: Fri Dec 27, 2024 9:36 pm Post subject: Re: genTree |
pjp wrote: | zen_desu wrote: | It's sorta like catalyst, but runs entirely unprivileged in a user namespace. I may add it to GURU soon, and would appreciate feedback. | Interesting! catalyst requiring root and a lack of documentation is why I don't use it.
Based on other comments here, is genTree suitable for managing chroots? More specifically, the chroots are used to maintain binary packages for other systems. I've also been meaning to make an install/rescue ISO image.
Since you mentioned containers as your primary goal, I'm curious how flexible it is for non-container use. |
I will share my way of handling the pkgdir. I use catalyst, and I use containers (Docker).
I create a pkgdir (binpkgs) whenever I start a new setup. I sometimes copy an existing binpkgs into it if I consider the setups to have similar characteristics (or it just starts blank). Having the binpkgs is to support restarting builds, so there's no need to start from scratch. BTW, the pkgdir is located on an NFS share, so it can be used from anywhere.
The pkgdir is first mapped in (docker -v pkgdir:/binpkgs), then the .spec file's pkgcache_path is modified to /binpkgs; catalyst will then produce binary packages into /binpkgs.
So I think a chroot (or genTree) should be able to do something similar.
I use a container because I want an easy way to clean up, and I use ZRAM/NBD/LVM to form /var/tmp to reduce I/O to my SD card.
zen_desu Tux's lil' helper
Joined: 25 Oct 2024 Posts: 80
Posted: Fri Dec 27, 2024 10:09 pm Post subject: Re: genTree |
pingtoo wrote: | pjp wrote: | zen_desu wrote: | It's sorta like catalyst, but runs entirely unprivileged in a user namespace. I may add it to GURU soon, and would appreciate feedback. | Interesting! catalyst requiring root and a lack of documentation is why I don't use it.
Based on other comments here, is genTree suitable for managing chroots? More specifically, the chroots are used to maintain binary packages for other systems. I've also been meaning to make an install/rescue ISO image.
Since you mentioned containers as your primary goal, I'm curious how flexible it is for non-container use. |
I will share my way of handling the pkgdir. I use catalyst, and I use containers (Docker).
I create a pkgdir (binpkgs) whenever I start a new setup. I sometimes copy an existing binpkgs into it if I consider the setups to have similar characteristics (or it just starts blank). Having the binpkgs is to support restarting builds, so there's no need to start from scratch. BTW, the pkgdir is located on an NFS share, so it can be used from anywhere.
The pkgdir is first mapped in (docker -v pkgdir:/binpkgs), then the .spec file's pkgcache_path is modified to /binpkgs; catalyst will then produce binary packages into /binpkgs.
So I think a chroot (or genTree) should be able to do something similar.
I use a container because I want an easy way to clean up, and I use ZRAM/NBD/LVM to form /var/tmp to reduce I/O to my SD card. |
You could do something similar just by setting the _pkgdir:
Code: | @property
def pkgdir(self):
if self._pkgdir:
return self._pkgdir.expanduser().resolve()
else:
return self.on_conf_root("pkgdir")
|
Basically, that property gets bind mounted into the container, and it can be set differently per build (at the top level).
I feel like if you're reusing components, it may be better to do it using layer tarballs. I think of them as building blocks for a system more than as a package set. In theory you should be able to just manually extract some of the layers it makes to an empty dir and accomplish the same thing (minus whiteouts).
I'm considering adding toggles to mount certain parts in a tmpfs, possibly even the upper layer for the seed mount. I could probably just add an fstab-like config option for the seed mount.
pjp Administrator
Joined: 16 Apr 2002 Posts: 20533
Posted: Sat Dec 28, 2024 4:22 am Post subject: Re: genTree |
zen_desu wrote: | The way I see it, making a "full" container image (as this does) is comparable to a full "system". The only bits that are missing, if you wanted it to be bootable, are a filesystem and boot related things. | I think you mean a file system for boot files? Or is it that what genTree does is only in memory?
zen_desu wrote: | I'm considering if I want to make a separate sort of system for managing boot related things. |
Yeah, I just didn't trust myself to never do the wrong thing as root when I had to trial-and-error my way to results. It's been years since I've tried using it, and I uninstalled it when it moved to ~ version 4.
zen_desu wrote: | I'm not sure what you mean by managing chroots, it kinda does that with the "seed" usage, as in it makes an overlayfs over the seed, and chroots into that, and currently cleans the upper layer unless you tell it not to. I think the next thing I'm going to add is the ability to run a "seed update" sequence, which may just be a "layer" target which emerges that stuff to the host system. I may restrict the "cleaner" usage so it doesn't do something like nuke your mounted /config or /build dirs. | I currently have a chroot directory that I chroot into, update the chroot system to build binaries which then are distributed to a binary only client. The chroot is permanent. Since you mentioned containers, I wasn't sure how comparable the two solutions are.
zen_desu wrote: | You can set a different "pkgdir" in the config, or use a "config overlay" where that is defined in the make.conf. Currently, it keeps all packages in one dir unless you set an override. I think each "seed" could have its own package dir if the mounting were toggled here: | I'll probably have to install it and try it out. My goal with my chroots is to have a base, non-gui chroot used to build common binaries. For headless systems primarily. Then, for example, the chroot for my laptop would try to use the binaries in base, and if not available, only then compile for the laptop chroot.
I haven't figured out how to arrange the file systems to make that work. I haven't thought about it much either, but that's the idea, which I'm hoping can minimize overall maintenance if I ever achieve it.
zen_desu wrote: | I could possibly just have it mount pkgdirs like config overlays, where you do it by name and it has dirs with that name under the config root. I have it using one pkgdir by default to hopefully reduce rebuilds. I'm not sure if this will cause issues later, but works well so far. | That could be interesting for something like binpkgs-common, binpkgs-gui. I currently distribute binaries via a webserver, and I presume something like that could be layered, but I haven't tried to do that yet. _________________ Quis separabit? Quo animo? |
|
Back to top |
|
|
zen_desu Tux's lil' helper
Joined: 25 Oct 2024 Posts: 80
|
Posted: Sat Dec 28, 2024 4:34 am Post subject: Re: genTree |
|
|
pjp wrote: | zen_desu wrote: | The way I see it, making a "full" container image (as this does) is comparable to a full "system". The only bits that are missing, if you wanted it to be bootable, are a filesystem and boot related things. | I think you mean a file system for boot files? Or is it that what genTree does is only in memory?
|
Yes, I'd need to make some fs for the boot files; it could probably be a fat32 sized to fit the kernel, initramfs, etc., and a squashfs image of the rootfs. That could be a separate system, where "genTree" just makes the rootfs portion, staying simple.
pjp wrote: |
zen_desu wrote: | I'm not sure what you mean by managing chroots, it kinda does that with the "seed" usage, as in it makes an overlayfs over the seed, and chroots into that, and currently cleans the upper layer unless you tell it not to. I think the next thing I'm going to add is the ability to run a "seed update" sequence, which may just be a "layer" target which emerges that stuff to the host system. I may restrict the "cleaner" usage so it doesn't do something like nuke your mounted /config or /build dirs. | I currently have a chroot directory that I chroot into, update the chroot system to build binaries which then are distributed to a binary only client. The chroot is permanent. Since you mentioned containers, I wasn't sure how comparable the two solutions are.
|
This could be done simply using a "container", or at least a user namespace with mount privileges, using:
Code: | unshare --mount --map-auto --map-root-user arch-chroot /mountpoint |
or similar.
I mostly use user namespaces so I can make mounts which disappear and with no risk of harming the host rootfs.
pjp wrote: |
zen_desu wrote: | You can set a different "pkgdir" in the config, or use a "config overlay" where that is defined in the make.conf. Currently, it keeps all packages in one dir unless you set an override. I think each "seed" could have its own package dir if the mounting were toggled here: | I'll probably have to install it and try it out. My goal with my chroots is to have a base, non-gui chroot used to build common binaries. For headless systems primarily. Then, for example, the chroot for my laptop would try to use the binaries in base, and if not available, only then compile for the laptop chroot.
I haven't figured out how to arrange the file systems to make that work. I haven't thought about it much either, but that's the idea, which I'm hoping can minimize overall maintenance if I ever achieve it.
zen_desu wrote: | I could possibly just have it mount pkgdirs like config overlays, where you do it by name and it has dirs with that name under the config root. I have it using one pkgdir by default to hopefully reduce rebuilds. I'm not sure if this will cause issues later, but works well so far. | That could be interesting for something like binpkgs-common, binpkgs-gui. I currently distribute binaries via a webserver, and I presume something like that could be layered, but I haven't tried to do that yet. |
I think a different "seed" could be used for each package category as genTree currently is, but the pkgdir can be manually set per top-level config, so it would use/add to whatever pkgdir you set for that run. _________________ µgRD dev
Wiki writer |
|
Back to top |
|
|
pjp Administrator
Joined: 16 Apr 2002 Posts: 20533
|
Posted: Sat Dec 28, 2024 4:35 am Post subject: Re: genTree |
|
|
pingtoo wrote: | I create a pkgdir (binpkgs) whenever I start a new setup. I sometimes copy existing binpkgs into it if I consider them to have similar characteristics (or it's just blank). Having binpkgs is to support restarting builds, so there's no need to start from scratch. BTW, the pkgdir is located on an NFS share, so it can be used from anywhere. | Sharing the binaries is the tricky part, at least to me. As mentioned above, I'd like to share binaries to whatever extent possible across 2 or maybe more differing configurations: headless server in binpkgs-common and gui desktop in binpkgs-gui (or whatever). Using an overlay fs was the only way I could think of, but I haven't tried it. Maybe configuring multiple binrepos.conf entries would work too, then prioritizing them.
Related would also be using the Gentoo binhost, but only having to download those once.
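A sketch of what layered binhosts might look like in /etc/portage/binrepos.conf — the repo names, URIs, and priority values here are placeholders, and the exact priority semantics are worth double-checking in man portage before relying on them:

```ini
# Hypothetical layered binhosts; names and sync-uris are placeholders.
# Here assuming higher priority values are preferred (verify in man portage).
[binpkgs-common]
priority = 100
sync-uri = https://binhost.example.net/binpkgs-common

[binpkgs-gui]
priority = 50
sync-uri = https://binhost.example.net/binpkgs-gui

# Fall back to the official Gentoo binhost last.
[gentoobinhost]
priority = 1
sync-uri = https://distfiles.gentoo.org/releases/amd64/binpackages/23.0/x86-64
```

With something like this, emerge with --getbinpkg should try the local repos before falling back to the official one, which would cover the "use base binaries first, compile only if missing" workflow.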
pingtoo wrote: | I use containers because I want an easy way to clean up, and I use ZRAM/NBD/LVM to form /var/tmp to reduce I/O to my SD card. | My chroots aren't temporary, so the cleanup isn't particularly relevant (other than using unshare). An additional benefit would be reducing the manual process of setting up a new chroot.
My current "solution" relies on incomplete shell scripts. Once in a while I improve them a bit, but they only work with one chroot, so I've got work to do there if I ever get around to it. _________________ Quis separabit? Quo animo? |
|
Back to top |
|
|
zen_desu Tux's lil' helper
Joined: 25 Oct 2024 Posts: 80
|
Posted: Sat Dec 28, 2024 4:39 am Post subject: Re: genTree |
|
|
pjp wrote: | pingtoo wrote: | I create a pkgdir (binpkgs) whenever I start a new setup. I sometimes copy existing binpkgs into it if I consider them to have similar characteristics (or it's just blank). Having binpkgs is to support restarting builds, so there's no need to start from scratch. BTW, the pkgdir is located on an NFS share, so it can be used from anywhere. | Sharing the binaries is the tricky part, at least to me. As mentioned above, I'd like to share binaries to whatever extent possible across 2 or maybe more differing configurations: headless server in binpkgs-common and gui desktop in binpkgs-gui (or whatever). Using an overlay fs was the only way I could think of, but I haven't tried it. Maybe configuring multiple binrepos.conf entries would work too, then prioritizing them.
Related would also be using the Gentoo binhost, but only having to download those once.
pingtoo wrote: | I use containers because I want an easy way to clean up, and I use ZRAM/NBD/LVM to form /var/tmp to reduce I/O to my SD card. | My chroots aren't temporary, so the cleanup isn't particularly relevant (other than using unshare). An additional benefit would be reducing the manual process of setting up a new chroot.
My current "solution" relies on incomplete shell scripts. Once in a while I improve them a bit, but they only work with one chroot, so I've got work to do there if I ever get around to it. |
I think the easiest way to share binpkgs is to make a webserver that serves the pkgdir. It takes next to no time to set one up once you know the config style, and it's the easiest thing to point clients to IMO. It also makes privileges easy: running the webserver needs no special privs, the webserver doesn't need to write to the files, and users can't write through the webserver. I tried NFS but had some issues with perms, because portage expects the mounted pkgdir to be something it can write to. _________________ µgRD dev
Wiki writer |
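For reference, serving a pkgdir read-only takes very little nginx config. This is a minimal sketch with an assumed hostname and paths, not a copy of any real setup:

```nginx
# Minimal read-only binhost; server_name and paths are placeholders.
server {
    listen 80;
    server_name binhost.example.net;

    location /binpkgs/ {
        alias /var/cache/binpkgs/;  # portage's PKGDIR on the build host
        autoindex on;               # optional: browsable directory index
    }
}
```

Clients would then point a binrepos.conf sync-uri at http://binhost.example.net/binpkgs/; since nginx only ever reads the files, the host-side perms stay simple.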
|
Back to top |
|
|
pjp Administrator
Joined: 16 Apr 2002 Posts: 20533
|
Posted: Sat Dec 28, 2024 4:47 am Post subject: |
|
|
I currently use a web server, but layering the different binpkg directories was the issue... it didn't even occur to me that it could be solvable with binrepos.conf. So hopefully that part is "solved" pending implementation. _________________ Quis separabit? Quo animo? |
|
Back to top |
|
|
zen_desu Tux's lil' helper
Joined: 25 Oct 2024 Posts: 80
|
Posted: Sat Dec 28, 2024 4:55 am Post subject: |
|
|
pjp wrote: | I currently use a web server, but layering the different binpkg directories was the issue... it didn't even occur to me that it could be solvable with binrepos.conf. So hopefully that part is "solved" pending implementation. |
Yeah, I think that should already work; I believe I did a fair bit of testing where things used my repos first, with the "official" repos as a backup.
I had a setup where a main nginx reverse proxy would serve distfiles from either subdomains or subdirs on my domain. I never used multiple with one host, but I don't see why you couldn't stack these with different priorities. _________________ µgRD dev
Wiki writer |
|
Back to top |
|
|
|
|
|
|