In Make, this flag exists:
-l [load], --load-average[=load]
Specifies that no new jobs (commands) should be started if there are other jobs running and the load average is at least load (a floating-point number). With no argument, removes a previous load limit.
Do you have a good strategy for what value to use for the load limit? It seems to differ a lot between my machines.
Acceptable load depends on the number of CPU cores. If there is one core, then a load average of more than 1 means the machine is overloaded. If there are four cores, then a load average of more than 4 means overload.
People often just specify the number of cores using the -j switch.
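For example, on a Linux box you might do something like this (a minimal sketch; nproc is the GNU coreutils command that prints the number of available processors):

    # Run up to as many parallel jobs as there are CPU cores
    make -j"$(nproc)"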
See some empirical numbers here: https://stackoverflow.com/a/17749621/412080
I recommend against using the -l option.

In principle, -l seems superior to -j. -j says: start this many jobs. -l says: make sure this many jobs are running. Often those are almost the same thing, but when you have I/O-bound jobs or other oddities, -l should be better.
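To illustrate the difference, the two flags can be combined (a sketch; the numbers are arbitrary):

    # Allow up to 16 parallel jobs, but hold back new ones
    # whenever the load average is 8 or higher.
    make -j16 -l8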
That said, the concept of load average is a bit dubious. It is necessarily a sampling of what goes on on the system. So if you run make -j -l N (for some N) and you have a well-written makefile, then make will immediately start a large number of jobs and run out of file descriptors or memory before even the first sample of the system load can be taken. Also, the accounting of the load average differs across operating systems, and some obscure ones don't have it at all.
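The risky pattern described above looks like this (a sketch; 4 is an arbitrary load limit):

    # -j with no number means "unlimited jobs"; the -l cap only takes
    # effect once the load average has been sampled, which may be too late.
    make -j -l4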
In practice, you'll be as well off using -j and will have fewer headaches. To get more performance out of the build, tune your makefiles, play with compiler options, and use ccache or similar.
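One common way to pull in ccache is to route the compiler through it on the command line (a sketch; it assumes your makefile honors the conventional CC variable):

    # Compile via ccache so unchanged translation units are served from cache
    make -j"$(nproc)" CC="ccache gcc"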
(I suspect the original reason for the -l option stems from a time when multiple processors were rare and I/O was really slow.)