eccerr0r (Watchman)
Joined: 01 Jul 2004    Posts: 9929    Location: almost Mile High in the USA
Posted: Tue Mar 11, 2025 7:10 am    Post subject: portage --jobs>1 optimizations?
I had a couple of ponderings:
Is it possible to force portage not to build two specific packages at the same time?
There is an obvious solution: make them dependencies of each other... but no, that's not what I mean. Suppose you have two packages that eat a lot of resources and are not dependencies of each other, and you want to make sure portage doesn't pick those two to build simultaneously. Something like chromium and qtwebengine at the same time, which actually happens more often than not for me. Is there a way to prevent that and serialize the two? Putting in a fake dependency outside of the ebuild might do the job... though I don't know if that is possible.
Next question: if I try to build cross-*/libc with distcc and my helper machine does not have that cross compiler (and thus will fail), will distcc scoreboard that as a failure for that machine? And if I have a second package building at the same time that uses the base compiler, will it notice that the machine has been failing and avoid it as well?
I'm just trying to figure out more of these situations to streamline distcc. Right now I'm seeing a lot of idling in distcc: in the first case due to the local preprocessor bottleneck, and in the second due to helpers possibly being kicked out unnecessarily...
_________________
Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching?
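
(For watching where that idle time goes, distcc's text monitor can be pointed at the state directory a given build is using; the path below is an assumption based on a default PORTAGE_TMPDIR, so adjust it to your setup.)
Code:
# Watch which helpers are receiving jobs while emerge runs, refreshing
# once per second. DISTCC_DIR must match the state directory of the
# build being observed; this path assumes the default PORTAGE_TMPDIR.
DISTCC_DIR=/var/tmp/portage/.distcc distccmon-text 1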
NeddySeagoon (Administrator)
Joined: 05 Jul 2003    Posts: 55011    Location: 56N 3W
Posted: Tue Mar 11, 2025 11:26 am
eccerr0r,
Not in the way you describe, no.
You could do --exclude=chromium, then build it separately on its own later.
That's safe, as nothing depends on chromium.
Lots of things depend on qtwebengine, so --exclude=qtwebengine may not go as well.
--jobs is only evaluated once per emerge run, so you can't play with it in the per-package environment.
_________________
Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
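
(As an illustration of that --exclude workflow, just a sketch: the full atoms www-client/chromium and dev-qt/qtwebengine are assumed here, and the caveat about qtwebengine's many reverse dependencies still applies.)
Code:
# Pass 1: update everything except the two heavy packages, in parallel.
emerge --update --deep --newuse --jobs=4 \
    --exclude="www-client/chromium dev-qt/qtwebengine" @world
# Pass 2: build the heavy packages one at a time, serialized.
emerge --oneshot --jobs=1 dev-qt/qtwebengine
emerge --jobs=1 www-client/chromium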
logrusx (Advocate)
Joined: 22 Feb 2018    Posts: 2822
Posted: Tue Mar 11, 2025 12:23 pm    Post subject: Re: portage --jobs>1 optimizations?
eccerr0r wrote: | I had a couple of ponderings:
Is it possible to force portage not to build two specific packages at the same time? |
No. Emerge's --jobs scheduling is not controllable per package, because it does not make sense at the package level.
eccerr0r wrote: | Putting in a fake dependency outside of the ebuild might do the job... though I don't know if that is possible. |
I can't imagine how you would do that with a fake dependency that's external to both packages without modifying them.
If you have a mechanism in mind, you could describe it in a bug report and see if the portage developers would like it.
eccerr0r wrote: | Next question: if I try to build cross-*/libc with distcc and my helper machine does not have that cross compiler (and thus will fail), will distcc scoreboard that as a failure for that machine? And if I have a second package building at the same time that uses the base compiler, will it notice that the machine has been failing and avoid it as well?
I'm just trying to figure out more of these situations to streamline distcc. Right now I'm seeing a lot of idling in distcc: in the first case due to the local preprocessor bottleneck, and in the second due to helpers possibly being kicked out unnecessarily... |
I guess you'll need to find that out on your own.
Best Regards,
Georgi
John R. Graham (Administrator)
Joined: 08 Mar 2005    Posts: 10749    Location: Somewhere over Atlanta, Georgia
Posted: Tue Mar 11, 2025 12:58 pm
NeddySeagoon wrote: | ... You could do --exclude=chromium, then build it separately on its own later.
That's safe, as nothing depends on chromium. |
For what it's worth, I use --exclude=chromium a lot, both for the reasons eccerr0r has outlined and, well, just because chromium is updated way too often.
- John
_________________
I can confirm that I have received between 0 and 499 National Security Letters.
Genone (Retired Dev)
Joined: 14 Mar 2003    Posts: 9630    Location: beyond the rim
Posted: Tue Mar 11, 2025 1:43 pm
Well, in general --load-average is intended to avoid that issue.
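
(For reference, the limit can be set at both the portage level and the build-system level; a minimal make.conf sketch, assuming a 12-thread host and a target load of about 12.)
Code:
# /etc/portage/make.conf -- sketch for a 12-thread host.
# Let portage run up to 3 ebuilds in parallel, but hold off starting
# new ones while the load average is above 12.
EMERGE_DEFAULT_OPTS="--jobs=3 --load-average=12"
# Apply the same cap inside each build: make also stops spawning
# compile jobs while the load is above 12.
MAKEOPTS="-j12 -l12"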
szatox (Advocate)
Joined: 27 Aug 2013    Posts: 3548
Posted: Tue Mar 11, 2025 2:34 pm
Quote: | Well, in general --load-average is intended to avoid that issue. |
Yeah, except it doesn't work with "modern build systems", does it? I remember ninja, for example, being notorious for ignoring any limits suggested by the user.
Too bad; laptops don't really benefit from parallel execution anyway: when you run more threads, they just clock down, making the whole operation pointless.
Quote: | Next question: if I try to build cross-*/libc with distcc and my helper machine does not have that cross compiler (and thus will fail), will distcc scoreboard that as a failure for that machine? And if I have a second package building at the same time that uses the base compiler, will it notice that the machine has been failing and avoid it as well? |
It's been a while, but AFAIR distcc doesn't ban failing workers from accepting jobs. It bans the failed jobs themselves from being distributed, and retries them locally.
Don't put workers with different toolchains into a single pool.
_________________
Make Computing Fun Again
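
(Worth re-checking on current systems, though: ninja itself accepts a -l load limit, and Gentoo's ninja-driven builds can be fed options via NINJAOPTS in make.conf; whether your eclass version honours that variable is an assumption to verify.)
Code:
# /etc/portage/make.conf -- sketch; NINJAOPTS support depends on the
# ninja-utils.eclass version in use, so verify before relying on it.
MAKEOPTS="-j12 -l12"
# ninja's own flags: -j caps parallel jobs, -l stops launching new jobs
# while the load average exceeds the given value.
NINJAOPTS="-j12 -l12"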
eccerr0r (Watchman)
Joined: 01 Jul 2004    Posts: 9929    Location: almost Mile High in the USA
Posted: Tue Mar 11, 2025 5:49 pm
Yeah, --load-average and -l are killing my runs: with --jobs, the other ebuild submits more jobs and the two sort of livelock each other (load average stays high while cores sit idle), unless you set a high -l... which risks killing the machine with undistributable jobs.
As far as I can tell, if a distcc job fails on a host for any reason, that host gets blacklisted for a few minutes, including for missing compilers like not having gcc-pc-linux-mingw installed. However, the separate-pools issue is a problem too, since it would be good for the top-level portage to know the status of all workers. I already see the issue when running two builds that share the same /etc/distcc/hosts but have different PORTAGE_TMPDIRs, which means they have different DISTCC_DIRs - this can potentially saturate the remote machines with twice the number of jobs you were intending... which is also a concern when I try to emerge --update on two machines at the same time with the same /etc/distcc/hosts pool.
Incidentally, running multiple cores is still beneficial even if they clock down, right up until thermal limits are reached. It's the thermal throttling that hurts. Even desktops will de-turbo the same way (along with thermal throttling, which needs to be avoided).
_________________
Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching?
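
(One crude mitigation when two uncoordinated emerges share the same helpers is to lower the per-host job cap in the hosts file, since each client applies the /LIMIT on its own; the hostnames and numbers below are made-up examples.)
Code:
# /etc/distcc/hosts -- sketch with made-up hostnames.
# Each client sends at most LIMIT concurrent jobs to a host (the /N part),
# so two independent emerge runs can still add up to 2x that per helper;
# halving the limits is a blunt way to keep the total in check.
helper1/4 helper2/4 localhost/2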