Gentoo Forums

Problem with installation - llvm compilation error
whatiswhat
n00b

Joined: 30 Sep 2020
Posts: 3

PostPosted: Wed Sep 30, 2020 8:03 pm    Post subject: Problem with installation - llvm compilation error

Hello good people,

Here is the last screen of the error:
Code:
*   environment, line 2491:  Called multilib_src_compile
 *   environment, line 2979:  Called cmake_src_compile
 *   environment, line 1254:  Called cmake_build
 *   environment, line 1223:  Called eninja
 *   environment, line 1675:  Called die
 * The specific snippet of code:
 *       "$@" || die "${nonfatal_args[@]}" "${*} failed"
 *
 * If you need support, post the output of `emerge --info '=sys-devel/llvm-10.0.1::gentoo'`,
 * the complete build log and the output of `emerge -pqv '=sys-devel/llvm-10.0.1::gentoo'`.
 * The complete build log is located at '/var/tmp/portage/sys-devel/llvm-10.0.1/temp/build.log'.
 * The ebuild environment file is located at '/var/tmp/portage/sys-devel/llvm-10.0.1/temp/environment'.
 * Working directory: '/var/tmp/portage/sys-devel/llvm-10.0.1/work/llvm-10.0.1_build-abi_x86_64.amd64'
 * S: '/var/tmp/portage/sys-devel/llvm-10.0.1/work/llvm'

>>> Failed to emerge sys-devel/llvm-10.0.1, Log file:

>>>  '/var/tmp/portage/sys-devel/llvm-10.0.1/temp/build.log'

 * Messages for package sys-devel/llvm-10.0.1:

 * ERROR: sys-devel/llvm-10.0.1::gentoo failed (compile phase):
 *   ninja -v -j5 -l0 failed
 *
 * Call stack:
 *     ebuild.sh, line  125:  Called src_compile
 *   environment, line 3843:  Called multilib-minimal_src_compile
 *   environment, line 2497:  Called multilib_foreach_abi 'multilib-minimal_abi_src_compile'
 *   environment, line 2767:  Called multibuild_foreach_variant '_multilib_multibuild_wrapper' 'multilib-minimal_abi_src_compile'
 *   environment, line 2432:  Called _multibuild_run '_multilib_multibuild_wrapper' 'multilib-minimal_abi_src_compile'
 *   environment, line 2430:  Called _multilib_multibuild_wrapper 'multilib-minimal_abi_src_compile'
 *   environment, line  553:  Called multilib-minimal_abi_src_compile
 *   environment, line 2491:  Called multilib_src_compile
 *   environment, line 2979:  Called cmake_src_compile
 *   environment, line 1254:  Called cmake_build
 *   environment, line 1223:  Called eninja
 *   environment, line 1675:  Called die
 * The specific snippet of code:
 *       "$@" || die "${nonfatal_args[@]}" "${*} failed"
 *
 * If you need support, post the output of `emerge --info '=sys-devel/llvm-10.0.1::gentoo'`,
 * the complete build log and the output of `emerge -pqv '=sys-devel/llvm-10.0.1::gentoo'`.
 * The complete build log is located at '/var/tmp/portage/sys-devel/llvm-10.0.1/temp/build.log'.
 * The ebuild environment file is located at '/var/tmp/portage/sys-devel/llvm-10.0.1/temp/environment'.
 * Working directory: '/var/tmp/portage/sys-devel/llvm-10.0.1/work/llvm-10.0.1_build-abi_x86_64.amd64'
 * S: '/var/tmp/portage/sys-devel/llvm-10.0.1/work/llvm'


Here is the emerge --info:
https://dpaste.com/GMPWMB8UW

The emerge -pqv:
https://dpaste.com/B5JPNDMED

I came back to Gentoo after a few years of absence and was not expecting to have such installation problems.
Please help!
NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 54300
Location: 56N 3W

PostPosted: Wed Sep 30, 2020 8:12 pm

whatiswhat,

Welcome back; we knew you would come back, just not when. :)

Please put the build log /var/tmp/portage/sys-devel/llvm-10.0.1/temp/build.log onto a pastebin site.
With 4G RAM and MAKEOPTS="-j5" you are going to run out of RAM on some of the larger C++ packages.
They can take 2G RAM per make thread, so -j5 may want 10G RAM for C++.

The build log and dmesg may show that the Out Of Memory (OOM) manager has killed one of your make threads.

The fix is to reduce MAKEOPTS="-j5" for bigger C++ jobs.
Portage can do that for you, if it is the cause of your build failure.
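For illustration, a per-package override is roughly this (the env file name no-heavy-jobs is arbitrary; any name works):
Code:
# /etc/portage/env/no-heavy-jobs
MAKEOPTS="-j2"

# /etc/portage/package.env
sys-devel/llvm no-heavy-jobs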
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
x90e
n00b

Joined: 30 Sep 2020
Posts: 40

PostPosted: Wed Sep 30, 2020 8:20 pm

Also, assuming you have Portage compiling under a tmpfs RAM disk, I would reboot and try again.
whatiswhat
n00b

Joined: 30 Sep 2020
Posts: 3

PostPosted: Wed Sep 30, 2020 10:17 pm

NeddySeagoon wrote:

Welcome back, we know you would come, just not when. :)


I appreciate OpenRC and the Gentoo community too much not to come back :)

NeddySeagoon wrote:

Please put the build log /var/tmp/portage/sys-devel/llvm-10.0.1/temp/build.log onto a pastebin site.


It is not the usual pastebin, which in its basic version only takes 0.5 MB (PRO accounts are currently sold out), but here you are:

https://paste.ubuntu.com/p/tSWvxrkJdm/ - build.log

NeddySeagoon wrote:

With 4G RAM and MAKEOPTS="-j5" you are going to run out of RAM on some of the larger C++ packages.
They can take 2G RAM per make thread, so -j5 may want 10G RAM for C++.


OK. So I am reducing the threads to -j2 and let's see what happens...
x90e
n00b

Joined: 30 Sep 2020
Posts: 40

PostPosted: Wed Sep 30, 2020 10:45 pm

Are you using a tmpfs mount for Portage at /var/tmp? You might just need to make it bigger.
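For reference, such a mount usually lives in /etc/fstab along these lines (the size and options here are just an example):
Code:
tmpfs   /var/tmp/portage   tmpfs   size=10G,uid=portage,gid=portage,mode=775,noatime   0 0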
Ionen
Developer

Joined: 06 Dec 2018
Posts: 2727

PostPosted: Wed Sep 30, 2020 10:51 pm

x90e wrote:
Are you using a tmpfs mount for Portage at /var/tmp? You might just need to make it bigger.
You would be getting out-of-space errors then, not the usual "fatal error: Killed signal terminated program cc1plus" (seen in the now-posted build.log), which almost always points to the OOM killer (Neddy made the right assumption as to what went wrong).

Using tmpfs for building with only 4GB RAM would also be rather questionable.


Last edited by Ionen on Wed Sep 30, 2020 10:58 pm; edited 1 time in total
x90e
n00b

Joined: 30 Sep 2020
Posts: 40

PostPosted: Wed Sep 30, 2020 10:57 pm

Yeah, that's what you're getting? The out-of-memory killer kicks in when you don't have enough memory. You can set the tmpfs to be 4GB. I've got a Dell laptop with 8GB of RAM and I've had to set the tmpfs to 10GB for compiling chrome/rust/llvm etc. I was getting the same messages you are: processes just randomly being killed.
Ionen
Developer

Joined: 06 Dec 2018
Posts: 2727

PostPosted: Wed Sep 30, 2020 11:09 pm

x90e wrote:
Yeah, that's what you're getting? The out-of-memory killer kicks in when you don't have enough memory. You can set the tmpfs to be 4GB. I've got a Dell laptop with 8GB of RAM and I've had to set the tmpfs to 10GB for compiling chrome/rust/llvm etc. I was getting the same messages you are: processes just randomly being killed.
tmpfs is a filesystem running in memory/swap, and using it will leave _less_ available RAM for building, making your problem worse. Other circumstances may have allowed things to "kinda" work, but it wasn't the solution.

If you have low RAM you'll want to:
- not use tmpfs for building, i.e. keep /var/tmp on a physical drive
- set a reasonable -jX value (if you want to be safe, especially with larger C++ packages, go with the 2GB-per-job estimate, like -j4 if you have 8GB)
- have swap, so going a bit over won't start killing things (but don't rely on this too much, as building in swap is slower than reducing -jX; it is mostly there to move less-used applications into swap and free some RAM)
- not carelessly use emerge --jobs X

There are other, more exotic options like zram, but well. If you have very low RAM (<2GB) you should probably consider building binpkgs on another machine if possible.
Edit: on a side note, rustc is worse than C++ and can sometimes use ~4GB on its own, so even -j1 could need swap
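As a back-of-envelope illustration of that 2GB-per-job estimate applied to the OP's 4GB machine, the global setting would be:
Code:
# /etc/portage/make.conf - conservative for 4GB RAM (illustrative)
MAKEOPTS="-j2"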
NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 54300
Location: 56N 3W

PostPosted: Wed Sep 30, 2020 11:29 pm

whatiswhat,

Code:
x86_64-pc-linux-gnu-g++: fatal error: Killed signal terminated program cc1plus
compilation terminated.

That's the OOM manager killing a C++ make thread to keep the kernel alive.

Teach portage to use per package MAKEOPTS
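To confirm that diagnosis, the kernel log normally carries a matching entry; a quick check is something like this (the output line is an illustration of the typical message, not taken from your machine):
Code:
dmesg | grep -i 'out of memory'
# e.g.  Out of memory: Killed process 12345 (cc1plus) ...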
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
x90e
n00b

Joined: 30 Sep 2020
Posts: 40

PostPosted: Thu Oct 01, 2020 12:06 am

Ionen wrote:
x90e wrote:
yeah that's what you're getting? the out of memory killer kicks in when you don't have enough memory. you can set the tmpfs to be 4gb. I've got a dell laptop with 8GB of ram and i've had to set the tmpfs to 10GB for compiling chrome/rust/llvm/ etc. I was getting the same messages you are, processes just randomly being killed.
tmpfs is a filesystem running in memory/swap, and using it will lead to _less_ available ram for building and making your problem worse. Other circumstances may possibly have allowed things to "kinda" work but it wasn't the solution.

If have low ram you'll want to:
- not use tmpfs for building, aka /var/tmp on a physical drive
- set a reasonable -jX value (if want to be safe, especially with larger C++ packages, go with the 2GB per -jX estimate, like -j4 if have 8GB)
- have swap, so going a bit over won't start killing things (but don't overly rely on this as building in swap is slower than reducing -jX, this is mostly to move less-used applications in swap and free some ram)
- don't carelessly use emerge --jobs X

There are other more exotic options like zram, but well. If very low ram (<2GB) should probably consider building binpkgs on another machine if possible.
Edit: on a side-note rustc is worse than C++ and can sometime use ~4GB on its own so even -j1 could need swap


It would only lead to "less available RAM" if the system wasn't using the tmpfs for compiling... it literally sets aside RAM specifically for building, so that you aren't having to write to disk all the time, only once the package is done building and is installed onto root or other partitions.
NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 54300
Location: 56N 3W

PostPosted: Thu Oct 01, 2020 9:30 am

x90e,

If tmpfs is empty, it takes up no RAM and the RAM can be used for other things.
The size (by default, 50% of RAM) is only an upper limit.
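To see the current ceiling, or raise it on a live system, something like this works (the 12G figure is just an example):
Code:
df -h /var/tmp                       # shows the tmpfs size limit
mount -o remount,size=12G /var/tmp   # raises the ceiling without a reboot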
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
x90e
n00b

Joined: 30 Sep 2020
Posts: 40

PostPosted: Thu Oct 01, 2020 4:55 pm

NeddySeagoon wrote:
x90e,

If tmpfs is empty, it takes up no RAM and the RAM can be used for other things.
The size (by default, 50% of RAM) is only an upper limit.


The first part I know; that's my point. It's not making "less available RAM for building" because it's using itself in RAM to build.

The second part didn't make sense to me, but I tried it anyway, and I am definitely running a laptop with only 8GB of RAM right now, yet with a fully functional 10GB tmpfs mounted at /var/tmp for Portage use. I used to have it at only 4GB, but I would run out of space compiling large programs like rust, chromium, etc. So I increased it on the off chance that it would work, and it does. It doesn't complain about a limit, and it actually fixed my previous issue of running out of compile space.

*shrug*

Maybe it uses as much RAM as it can and then combines whatever it can't use with free swap space into one contiguous allocatable array? I do have a 6GB swap partition in addition to the 8GB of RAM.
Ionen
Developer

Joined: 06 Dec 2018
Posts: 2727

PostPosted: Thu Oct 01, 2020 5:25 pm

x90e wrote:
It's not making "less available RAM for building" because it's using itself in RAM to build.
There are two parts to this:
1. disk space used for the sources/objects -- 2. RAM used by the compiler processing those (#2 is what I meant by building)

By using tmpfs, you're changing this to:
1. RAM used for the sources/objects -- 2. RAM used by the compiler processing those

So now you have less RAM to do #2, because #1 is using a whole lot of it. If both RAM and swap are exhausted, then #2 is unable to cope, the OOM killer kills the process, and the build failure happens with the usual "Killed" message.

It'd fail either way, but running out of tmpfs/disk space and running out of RAM are different things with different error messages. If you are building a large project, stick to using disk space for #1: it'll typically compile faster (less slow swap usage, and more build threads available to do #2), and you save yourself needless trouble.
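A back-of-envelope illustration of the squeeze (figures invented for the example, not taken from the build log):
Code:
# 4GB RAM, tmpfs holding ~3GB of sources/objects (#1)
4GB - ~3GB tmpfs        = ~1GB left for compilers (#2)
5 jobs x ~2GB per job   = ~10GB wanted  ->  OOM killer steps in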
x90e
n00b

Joined: 30 Sep 2020
Posts: 40

PostPosted: Thu Oct 01, 2020 6:33 pm

Ionen wrote:
x90e wrote:
It's not making "less available RAM for building" because it's using itself in RAM to build.
There are two parts to this:
1. disk space used for the sources/objects -- 2. RAM used by the compiler processing those (#2 is what I meant by building)

By using tmpfs, you're changing this to:
1. RAM used for the sources/objects -- 2. RAM used by the compiler processing those

So now you have less RAM to do #2, because #1 is using a whole lot of it. If both RAM and swap are exhausted, then #2 is unable to cope, the OOM killer kills the process, and the build failure happens with the usual "Killed" message.

It'd fail either way, but running out of tmpfs/disk space and running out of RAM are different things with different error messages. If you are building a large project, stick to using disk space for #1: it'll typically compile faster (less slow swap usage, and more build threads available to do #2), and you save yourself needless trouble.


Yeah, I mean, when compiling chromium for example, it checks that you have at least 3GiB RAM and 7GiB "disk space" (which in my case referred to /var/tmp/portage, since I have /var/tmp as my tmpfs), and I know that it would still fail out with no space left on the tmpfs if, say, another package had failed to build and remnants were left over in /var/tmp.

I have seen your other posts, so I know you're quite knowledgeable, and I'm sure you're right, but to correct my own understanding: when you said stick to using disk space for the sources/objects, how would that compile faster? I originally set up the tmpfs because I'm using NVMe drives with 64GB of RAM, so I wanted to minimize the writes to the SSD by using RAM for as much as I can, but is there a better way to do this? I do have a 6GB swap partition on the NVMe drive as well.
Hu
Administrator

Joined: 06 Mar 2007
Posts: 21706

PostPosted: Thu Oct 01, 2020 6:51 pm

Ideally, you would keep everything related to emerge and all its subprocesses entirely in RAM: tmpfs for the files, and never swap out any of the pages used for application data. That would give you RAM-speed response times for anything related to the build. In practice, we assume most people don't have 64GB of RAM, and from there assume that, for large projects such as Chromium, the filesystem size of the sources+objects, plus the RAM requirements for the application data (compiler data structures, Make or equivalent dependency tree data, etc.), will exceed available physical RAM, at which point you start swapping. If you swap, then some accesses occur at disk speed, rather than RAM speed. If the kernel makes non-ideal choices for what to swap, then you may incur disk speed accesses rather more often than needed, as the kernel wastes RAM keeping resident things you could have happily swapped out, while shuffling the valuable pages you want to access in and out of main memory. If you swap to a spinning drive, rather than an SSD/NVMe, then swap speed is horrible compared to RAM speed.

The advice to move the files out of tmpfs is based on the idea that this will discourage the kernel from keeping in-memory copies of things that you would prefer to page out, like object files that have already been compiled, which will not be needed again until link time. By encouraging the kernel to page out sources/objects, you hopefully free up enough RAM that you never swap out the pages you most need to keep in-memory: the compiler's code, commonly re-read headers, etc. If your system can handle running the entire build in RAM, without using swap, feel free to do so. It almost certainly will be no worse than writing the data to disk, and will likely be at least a little bit better.
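One rough way to see whether a build actually hits swap is to watch the swap counters while it runs (a generic check, not specific to Portage):
Code:
vmstat 5
# watch the si/so columns: sustained swap-in/swap-out during a build
# means the working set no longer fits in RAM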
NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 54300
Location: 56N 3W

PostPosted: Thu Oct 01, 2020 7:01 pm

x90e,

Swap access is slower than file reads/writes.
When you force tmpfs to write to swap, it's slower than writing the same data to a file.

Ponder this. If you have enough RAM to put /var/tmp/portage into tmpfs, you don't gain any speed by doing so, because the kernel will keep everything cached anyway.
You do save a lot of writes that will never be read, but those are effectively 'free', as the transfers are done by DMA.
You have to use the CPU to set up the DMA, and it does take a little memory bandwidth for the transfer.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
x90e
n00b

Joined: 30 Sep 2020
Posts: 40

PostPosted: Thu Oct 01, 2020 7:06 pm

NeddySeagoon wrote:
x90e,

Swap access is slower than file reads/writes.
When you force tmpfs to write to swap, it's slower than writing the same data to a file.

Ponder this. If you have enough RAM to put /var/tmp/portage into tmpfs, you don't gain any speed by doing so, because the kernel will keep everything cached anyway.
You do save a lot of writes that will never be read, but those are effectively 'free', as the transfers are done by DMA.
You have to use the CPU to set up the DMA, and it does take a little memory bandwidth for the transfer.


Well, I know that RAM is faster than disk I/O, and thrashing to swap is slow, even on NVMe. That's what led me to set up the tmpfs in the first place: a workable RAM disk for compiling, thinking that it would reduce disk accesses and be much faster. Subjectively, I can say that I noticed a great decrease in compile times once I started using the tmpfs.

Can you expand on the "not gain any speed due to kernel caching"? I understand how caching works, but I didn't think that the kernel kept all the Portage source files / objects in cache when emerging something. I thought that it would have to grab the files from the network, put them on disk, then read from the disk, compile in memory, and then store back to disk. I was hoping that the tmpfs would cut out some or most of the disk writes in this case, with RAM being faster than the NVMe / different I/O queues?

I appreciate you guys taking the time to correct me :D
whatiswhat
n00b

Joined: 30 Sep 2020
Posts: 3

PostPosted: Thu Oct 01, 2020 7:44 pm

NeddySeagoon,

It seems that you were right: it took more time, but llvm compiled without any errors.

I would also like to note that the wiki page I used as the basis for tweaking my make.conf should be corrected/completed with information about possible compilation errors caused by having too little RAM to handle the -j5 threads:

https://wiki.gentoo.org/wiki/Intel_Core_2

Quote:

FILE /etc/portage/make.conf: MAKEOPTS (Intel Core 2 Quad)

MAKEOPTS="-j5"


Of course, after the successful compilation other problems have appeared, but I will not list them here, because it is better to discuss them in a separate thread.
Thank you all for your help.


Last edited by whatiswhat on Thu Oct 01, 2020 7:50 pm; edited 1 time in total
NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 54300
Location: 56N 3W

PostPosted: Thu Oct 01, 2020 7:50 pm

x90e,

Let's suppose you have a very large amount of RAM. So large that, for a small package (no numbers), no writes to the HDD are required.
They may happen, but they are not required.
You run
Code:
emerge <small_package>

Portage downloads the files from the web and they are saved (by way of disk buffers) to your distfiles.
gcc gets loaded to build <small_package>, but it's still in the disk buffer because there was no reason to evict it.
No disk reads occur.

As the build progresses the same thing happens with all the intermediate files and build products.
Let's say /var/tmp/portage is on the HDD. All the intermediate files and build products may get written to disk, but there is no pressure on RAM, so nothing is evicted.

The final link happens, it stays in the disk buffer too.

You decide to run <small_package>; the HDD is not read, it's already in the disk buffer.

So you download, build and run a package with no requirement for any disk writes.
Yes, they will happen, but they are writes that will never be read, because nothing was ever evicted from the disk cache.

It's not hard to demonstrate. We already agree that tmpfs can be used for /var/tmp/portage.
Mount tmpfs over the top of distfiles, just for a trial; see the sketch below.
The hard bit is arranging for tmpfs to take the install products. That is left as an exercise for the reader :)
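Such a trial mount could look like this (the path assumes the default distfiles location, and the size is arbitrary):
Code:
# trial only - the contents vanish on unmount or reboot
mount -t tmpfs -o size=8G tmpfs /var/cache/distfiles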

I have an 8 core arm64 system with 128G RAM.
It's been beavering away doing emerge -e @world for about 24 hours now.
Code:
moriarty /usr/Pi_Root/usr/src # free -h
              total        used        free      shared  buff/cache   available
Mem:          125Gi       3.3Gi        87Gi       3.1Gi        34Gi       117Gi
Swap:         4.0Gi          0B       4.0Gi

and it has 34Gi in cache.
Due to the way Linux memory management works, that cache will eventually grow to consume all of RAM; if something else needs RAM, the oldest unused data will be dropped.

Code:
/proc/meminfo:
MemTotal:       131641008 kB
MemFree:        92438940 kB
MemAvailable:   123587564 kB
Buffers:          566112 kB
Cached:         34036624 kB
SwapCached:            0 kB
Active:         16016556 kB
Inactive:       21381604 kB
Active(anon):    4483616 kB
Inactive(anon):  1624832 kB
Active(file):   11532940 kB
Inactive(file): 19756772 kB
Unevictable:        7152 kB
Mlocked:            7152 kB
SwapTotal:       4194300 kB
SwapFree:        4194300 kB
Dirty:                12 kB
Writeback:             0 kB
AnonPages:       2802064 kB
Mapped:            58128 kB
Shmem:           3307524 kB
KReclaimable:    1539296 kB
Slab:            1681460 kB
SReclaimable:    1539296 kB
SUnreclaim:       142164 kB
KernelStack:        2848 kB
PageTables:         7532 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    70014804 kB
Committed_AS:    6198140 kB
VmallocTotal:   135290159040 kB
VmallocUsed:        7812 kB
VmallocChunk:          0 kB
Percpu:             1024 kB
HardwareCorrupted:     0 kB
CmaTotal:          65536 kB
CmaFree:           59160 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB


For the avoidance of doubt, I can do a whole install in under 34Gi.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
x90e
n00b

Joined: 30 Sep 2020
Posts: 40

PostPosted: Thu Oct 01, 2020 7:56 pm

NeddySeagoon wrote:
Let's suppose you have a very large amount of RAM. So large that, for a small package (no numbers), no writes to the HDD are required.
[snip]
For the avoidance of doubt, I can do a whole install in under 34Gi.


I've got 64GB here; I was hoping to minimize unnecessary writes to the NVMe SSD to keep wear down and endurance up. I've got the discard and noatime options set in fstab, and I thought that compiling would probably be the heaviest thing I would have to worry about, so I figured doing it all in RAM would help. But from what you're saying, it doesn't sound like it's ever in RAM for very long.
NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 54300
Location: 56N 3W

PostPosted: Thu Oct 01, 2020 9:45 pm

x90e,

It's in RAM until the pressure on RAM means that something must be evicted.
It will be written to disk and remain in RAM too, depending on the commit frequency of the filesystem.
Filesystems have a maximum time that dirty buffers are allowed to exist.

When the kernel goes to read something from the HDD and finds it already cached, the read will not happen.

That 34GiB in my buffers will have been committed to the HDD, as I use ext4.
They are writes that will never be read until the cache is cleared, though.
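That maximum time is governed by the kernel's dirty-writeback knobs, which you can inspect like this (the values shown are the usual defaults, not necessarily yours):
Code:
sysctl vm.dirty_expire_centisecs vm.dirty_writeback_centisecs
# vm.dirty_expire_centisecs = 3000    (dirty data is flushed after ~30s)
# vm.dirty_writeback_centisecs = 500  (flusher thread wakes every ~5s)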
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
Hu
Administrator

Joined: 06 Mar 2007
Posts: 21706

PostPosted: Thu Oct 01, 2020 9:48 pm

It stays in RAM until the kernel has a better use for the RAM. In particular, it remains in RAM as a cached disk page even after the data is successfully written to disk. Then, if you try to read back the file you just wrote, the kernel can notice that it still has a copy of that page in memory, can assume the disk would just return that page unchanged if asked, and can then short-circuit the read by using the in-memory page without reloading it from disk. The write to the disk happened in the background, so it may not slow down the writing of the file at all, assuming the written file data fits entirely within the RAM used for disk pages. If you get very lucky, the kernel might defer writing the disk blocks for so long that the file gets deleted before it is ever persisted to disk, at which point the kernel can eliminate the write to disk. (It may still need to update directory control data to reflect that the file vaguely existed for a time, but it can skip writing the file's data.)
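A rough way to watch the page cache do this (file path and size are arbitrary):
Code:
dd if=/dev/zero of=/var/tmp/testfile bs=1M count=512   # write: data lands in the page cache
free -h                                                # buff/cache grows by ~512M
dd if=/var/tmp/testfile of=/dev/null bs=1M             # read back: served from RAM, no disk read
rm /var/tmp/testfile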
NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 54300
Location: 56N 3W

PostPosted: Thu Oct 01, 2020 10:05 pm

x90e,

discard in fstab may be a very bad thing.

discard is not supposed to be a command for the drive to act on immediately. It's supposed to let the drive keep track of discarded space and only erase it when it needs to.
That implementation minimises write amplification, which occurs when an erase block (about to be erased) still contains some in-use write blocks.
The still-used blocks have to be moved first.

Many poor implementations treat discard as a command to be acted on immediately.
This causes a great deal of write amplification.

Running fstrim in a cron job is safer; see the sketch below.
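A minimal cron script for that, assuming a weekly schedule (adjust to taste):
Code:
#!/bin/sh
# /etc/cron.weekly/fstrim - trims all mounted filesystems that support it
fstrim -av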
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
x90e
n00b

Joined: 30 Sep 2020
Posts: 40

PostPosted: Thu Oct 01, 2020 11:14 pm

NeddySeagoon wrote:
x90e,

discard in fstab may be a very bad thing.

discard is not supposed to be a command for the drive to act on immediately. It's supposed to let the drive keep track of discarded space and only erase it when it needs to.
That implementation minimises write amplification, which occurs when an erase block (about to be erased) still contains some in-use write blocks.
The still-used blocks have to be moved first.

Many poor implementations treat discard as a command to be acted on immediately.
This causes a great deal of write amplification.

Running fstrim in a cron job is safer.


How many implementations of discard are there? I'm just using a regular Gentoo fstab; not a command, just one of the mount options. The Gentoo handbook SSD page suggested it?
NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 54300
Location: 56N 3W

PostPosted: Fri Oct 02, 2020 10:21 am

x90e,

Nobody knows how many implementations of discard there are.

Regardless, the kernel sends discard information to the drive; that's the same for everyone.
It's what the drive does with the information once it gets it that matters, and that's proprietary information.

We do know when it goes wrong. There have been drives that incorrectly erased LBA 0.
That's the MSDOS partition table, so there were some unhappy users.

Keep an eye on your lifetime parameters in
Code:
smartctl -a /dev/...

_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.