List:       gentoo-user
Subject:    Re: [gentoo-user] long compiles
From:       Michael <confabulate () kintzios ! com>
Date:       2023-09-13 15:14:09
Message-ID: 3442789.QJadu78ljV () rogueboard


On Wednesday, 13 September 2023 13:41:00 BST Peter Humphrey wrote:
> On Wednesday, 13 September 2023 12:50:20 BST Wols Lists wrote:
> > On 13/09/2023 12:28, Peter Humphrey wrote:
> > > A thought on compiling, which I hope some devs will read: I was tempted
> > > to
> > > push the system hard at first, with load average and jobs as high as I
> > > thought I could set them. I've come to believe, though, that job control
> > > by portage and /usr/bin/make is weak at very high loads, because I would
> > > usually find that a few packages had failed to compile; also that some
> > > complex programs were sometimes unstable. Therefore I've had to throttle
> > > the system to be sure(r) of correctness. Seems a waste.
> > 
> > Bear in mind a lot of systems are thermally limited and can't run at
> > full pelt anyway ...
> 
> No doubt, but apparently not this box: I run it 24x7 with all 24 CPU threads
> fully loaded with floating-point calculations, which make a good deal more
> heat than 'mere' compiling with (I assume) integer arithmetic.   :)

I recall this being discussed in a previous thread, but if your CPU has 24 
threads and you've set:

EMERGE_DEFAULT_OPTS="--jobs=4 --load-average=32"
MAKEOPTS="-j14"

You will be asking emerge to run up to 4 x 14 = 56 make jobs, each of which 
could potentially eat up to 2G of RAM, i.e. 112G in total.  This would 
exhaust your 64G of RAM, without taking into account whatever else the OS is 
trying to run at the time.  The --load-average option is normally expected 
to be a floating point number indicating a fraction of full load multiplied 
by the number of cores; e.g. for 12 cores you could set it at 12 x 0.9 = 
10.8 to limit the load to 90% so as to maintain some system responsiveness.
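
For example, on a 24-thread box with 64G of RAM a more conservative 
combination might be (illustrative figures only, not a tested 
recommendation):

EMERGE_DEFAULT_OPTS="--jobs=2 --load-average=21.6"
MAKEOPTS="-j12 -l21.6"

Worst case that is 2 x 12 = 24 make jobs, i.e. ~48G at the pessimistic 2G 
per job estimate, while 24 x 0.9 = 21.6 caps the load at roughly 90%.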

Of course, not all emerges use make, and you may rarely, if ever, emerge 
four monster packages in parallel such that every job needs 2G of RAM at the 
same time.

If only we had at our disposal some AI algorithm to calculate dynamically, 
each time we run emerge, the optimal combination of parallel emerge jobs and 
number of make tasks, so as to achieve the greatest total time saving vs 
energy spent!  Or just the greatest total time saving.  ;-)

I haven't performed any meaningful comparisons to determine where the 
greatest gains are to be had: parallel emerges of many small packages, or a 
large number of make jobs for big packages.  The optimal combination would 
change each time according to the individual packages waiting for an update.  
In my use case it instinctively feels more beneficial to reduce the time I 
have to wait for huge packages like qtwebengine to finish than to accelerate 
the updates of half a dozen smaller packages.  Therefore, as a rule I leave 
EMERGE_DEFAULT_OPTS unset.  I set the MAKEOPTS jobs to the number of CPU 
threads +1 and the load average at 95%, so I can continue using the PC 
without any noticeable latency.  On PCs where RAM is constrained I reduce 
the MAKEOPTS jobs via /etc/portage/package.env for any large packages which 
are bound to start swapping and thrashing the disk.
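
As an illustration of that last point (the file name heavy.conf and the job 
count are just examples, not a recommendation):

/etc/portage/env/heavy.conf:
MAKEOPTS="-j4 -l4"

/etc/portage/package.env:
dev-qt/qtwebengine heavy.conf

With that in place qtwebengine builds with only 4 make jobs, while every 
other package keeps the global MAKEOPTS from make.conf.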

["signature.asc" (application/pgp-signature)]
