Maitreya Guru
Joined: 11 Jan 2006 Posts: 445
Posted: Sat Nov 02, 2024 8:52 am Post subject: Compiling python 3.13
|
|
Code: |
Total duration: 20 min 12 sec
Total tests: run=37,097 failures=17 skipped=1,374
Total test files: run=425/434 failed=10 skipped=32 resource_denied=9
Result: FAILURE
|
So I think I've managed to narrow down the scope of the build failures:
1. Went back to native "bare" CFLAGS.
2. Tried to compile on a hardware machine with the same setup, which did work successfully.
3. Removed tmpfs from the equation.
4. Removed pgo from the equation.
I'm starting to suspect the container environment is the culprit.
I'll do some tests with compiler versions, but so far python 3.13 seems to have issues with my Portage setup in a container, on a system with over 2000 packages that never had this problem before.
I've spent more than a day now trying to figure out why python 3.13 refuses to build in the container environment but builds fine outside it with the _exact_ same USE flags, toolchain, etc.
It is currently blocking my build chain, so I guess the short answer is to hard-mask >=3.13 for the moment, until it's fixed.
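In case anyone wants to do the same, a mask entry along these lines should do it (the filename under package.mask is arbitrary):
Code: |
# /etc/portage/package.mask/python-3.13
# Temporary: python 3.13 fails to build in my container environment
>=dev-lang/python-3.13
|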
Hu Administrator
Joined: 06 Mar 2007 Posts: 22593
Posted: Sat Nov 02, 2024 2:18 pm
|
|
Which tests fail? Are they the same ones that fail with USE=pgo? If I recall correctly, the USE=pgo failures were because Python was trying to write and then execute in /tmp (presumably, ignoring $TMPDIR being set to some other directory), and when /tmp is noexec, that failed. I wonder if your container environment has set noexec on some other directories that are normally not mounted noexec.
Can you pastebin the full build.log of a failed build?
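As a first check, something along these lines (run inside the container) would show which mounts carry noexec; the /var/tmp/portage path here assumes the default PORTAGE_TMPDIR:
Code: |
# List every mount that carries the noexec option
findmnt -O noexec

# Show the options in effect for /tmp and for the Portage build area
findmnt -no OPTIONS --target /tmp
findmnt -no OPTIONS --target /var/tmp/portage
|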
grknight Retired Dev
Joined: 20 Feb 2015 Posts: 1899
Posted: Sat Nov 02, 2024 2:24 pm
|
|
It is not recommended to enable test in FEATURES unless you are managing the package in question. For everyday use, having test in FEATURES globally will cause many packages to fail.
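If you only need tests for specific packages, the usual approach is to scope FEATURES=test with package.env rather than setting it in make.conf. A minimal sketch (the file name under env/ is arbitrary):
Code: |
# /etc/portage/env/test.conf
FEATURES="test"

# /etc/portage/package.env
# Enable tests only for the packages you are actually working on
dev-lang/python test.conf
|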
Hu Administrator
Joined: 06 Mar 2007 Posts: 22593
Posted: Sat Nov 02, 2024 2:33 pm
|
|
I recently tried to build dev-lang/python:3.13 with USE=pgo and without FEATURES=test. It failed, with errors relating to trying to execute things in /tmp. That build apparently runs at least some tests without being asked to do so. I subsequently discovered the bug report "dev-lang/python-3.13.0[pgo] build fails with /tmp mounted noexec", which matched my situation. I did not particularly need USE=pgo, so I disabled it, and python:3.13 then installed successfully for me.
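For reference, disabling it took a one-line entry (any file name under package.use works):
Code: |
# /etc/portage/package.use/python
dev-lang/python:3.13 -pgo
|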
sdauth l33t
Joined: 19 Sep 2018 Posts: 644 Location: Ásgarðr
Posted: Sat Nov 02, 2024 5:15 pm
|
|
Hu wrote: | That build apparently runs at least some tests without being asked to do so. |
Same here: I don't have test enabled, but it still runs tests anyway.
For now I've masked it.
logrusx Advocate
Joined: 22 Feb 2018 Posts: 2380
Posted: Sat Nov 02, 2024 5:54 pm
|
|
Hu wrote: | That build apparently runs at least some tests without being asked to do so. |
PGO stands for profile-guided optimization. It needs a profile, and unless one is supplied by upstream (which wouldn't make much sense, since the profile should match the local compiler and flags), it has to be created somehow. Maybe that explains why the build tries to execute something: the profile data must come from somewhere.
Best Regards,
Georgi
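To illustrate the general mechanism (a generic GCC sketch, not what the ebuild literally does): PGO is a two-pass build, and the profile comes from running the instrumented binary on some workload.
Code: |
# Pass 1: build with instrumentation; running the binary writes .gcda profile data
gcc -O2 -fprofile-generate -o app app.c
./app typical-workload.dat

# Pass 2: rebuild, letting the compiler consult the collected profile
gcc -O2 -fprofile-use -o app app.c
|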
Hu Administrator
Joined: 06 Mar 2007 Posts: 22593
Posted: Sat Nov 02, 2024 6:11 pm
|
|
Yes, Profile Guided Optimization needs a profile, and usually that is collected locally, so that it is consistent with the compiler version and flags used. Python may have been trying to use its own test suite to obtain the profile results, but it printed output as if it were running the tests to check their correctness, not just for the side effect of obtaining PGO training data. Although a test suite is better than nothing for training data, I am a bit dubious whether the test suite is good training data, since test suites typically try to cover all the unlikely paths to ensure those paths are executed correctly. Good PGO training data mimics what users will run daily. Most users will not be routinely running the unlikely error handling paths, so I worry that a PGO build trained on test suites will expect the error paths to be far more common than they are in daily use.
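For what it's worth, that is how upstream wires it up, as far as I can tell: configuring with --enable-optimizations makes the Makefile run whatever PROFILE_TASK names as the training workload, and that defaults to the test runner in PGO mode. A sketch; exact defaults vary by CPython version:
Code: |
# Sketch of the upstream CPython PGO build (defaults vary by version)
./configure --enable-optimizations
make PROFILE_TASK='-m test --pgo --timeout=1200'
|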
logrusx Advocate
Joined: 22 Feb 2018 Posts: 2380
Posted: Sat Nov 02, 2024 6:29 pm
|
|
Hu wrote: | since test suites typically try to cover all the unlikely paths |
Covering unlikely paths is not all a test suite does, though; it should also exercise a lot of ordinary code, and with an interpreter, a lot means really a lot.
But let's not assume the profile is built that way; let's wait for input from a dev, who would know for sure.
Anyway, I don't find any value in PGO.
Best Regards,
Georgi
logrusx Advocate
Joined: 22 Feb 2018 Posts: 2380
Posted: Sat Nov 02, 2024 6:44 pm
|
|
I think this is intentional; the ebuild explicitly runs the test suite as the PGO profiling task:
Code: |
if use pgo; then
local profile_task_flags=(
-m test
"-j$(makeopts_jobs)"
--pgo-extended
--verbose3
-u-network
# We use a timeout because of how often we've had hang issues
# here. It also matches the default upstream PROFILE_TASK.
--timeout 1200
"${COMMON_TEST_SKIPS[@]}"
-x test_dtrace
# All of these seem to occasionally hang for PGO inconsistently
# They'll even hang here but be fine in src_test sometimes.
# bug #828535 (and related: bug #788022)
-x test_asyncio
-x test_httpservers
-x test_logging
-x test_multiprocessing_fork
-x test_socket
-x test_xmlrpc
# Hangs (actually runs indefinitely executing itself w/ many cpython builds)
# bug #900429
-x test_tools
# Fails in profiling run, passes in src_test().
-x test_capi
-x test_external_inspection
)
|
Best Regards,
Georgi
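So reproducing the profiling run by hand boils down to roughly this, executed with the freshly built interpreter from the build directory (illustrative; add the -x exclusions from the snippet as needed, and -j to taste):
Code: |
# Rough equivalent of the ebuild's profiling task
./python -m test -j4 --pgo-extended --verbose3 -u-network --timeout 1200
|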