From the documentation for GNU make: http://www.gnu.org/software/make/manual/make.html#Parallel
When the system is heavily loaded, you will probably want to run fewer jobs than when it is lightly loaded. You can use the ‘-l’ option to tell make to limit the number of jobs to run at once, based on the load average. The ‘-l’ or ‘--max-load’ option is followed by a floating-point number. For example,
-l 2.5
will not let make start more than one job if the load average is above 2.5. The ‘-l’ option with no following number removes the load limit, if one was given with a previous ‘-l’ option.
More precisely, when make goes to start up a job, and it already has at least one job running, it checks the current load average; if it is not lower than the limit given with ‘-l’, make waits until the load average goes below that limit, or until all the other jobs finish.
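For instance, an illustrative invocation that combines a job cap with a load limit (the numbers here are examples, not recommendations from the manual) would be:

make -j8 -l 2.5

With this, make runs at most 8 jobs at once and, once it already has at least one job running, will not start another while the load average is at or above 2.5.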
From the Linux man page for uptime: http://www.unix.com/man-page/Linux/1/uptime/
System load averages is the average number of processes that are either in a runnable or uninterruptable state. A process in a runnable state is either using the CPU or waiting to use the CPU. A process in uninterruptable state is waiting for some I/O access, eg waiting for disk. The averages are taken over the three time intervals. Load averages are not normalized for the number of CPUs in a system, so a load average of 1 means a single CPU system is loaded all the time while on a 4 CPU system it means it was idle 75% of the time.
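On Linux you can inspect those numbers directly with standard tools (nothing assumed here beyond a typical install):

uptime
cat /proc/loadavg
nproc

uptime and /proc/loadavg both report the 1-, 5-, and 15-minute load averages, and nproc prints the number of available CPUs, which is the figure you would divide by to normalize the load average.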
I have a parallel makefile and I want to do the obvious thing: have make keep adding processes until I'm getting full CPU usage but not inducing thrashing.
Many (all?) machines today are multicore, which means the load average is not the number make should be checking directly, since it needs to be adjusted for the number of cores.
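To make that concrete (illustrative numbers only): on a 16-core machine a load average of 2.5 means the CPUs are only about 2.5/16 ≈ 16% occupied, so the manual's example of -l 2.5 would throttle make long before the machine is saturated. One common starting point, assuming you simply want the load limit to scale with the core count (an assumption, not something the make manual prescribes), is:

make -j"$(nproc)" -l"$(nproc)"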
Does this mean that the --max-load (aka -l) flag to GNU make is now useless? What are people doing who are running parallel makefiles on multicore machines?
The "Need to Look into it" Rule of Thumb: 0.70 If your load average is staying above > 0.70, it's time to investigate before things get worse. The "Fix this now" Rule of Thumb: 1.00. If your load average stays above 1.00, find the problem and fix it now.
Between 0.00 and 1.0, there is no need to worry. Your servers are safe! 1.5 means the queue is filling up. If the average gets any higher, things are going to start slowing down.
CPU load is the number of processes that are using, or want to use, CPU time, or queued up processes ready to use CPU. This can also be referred to as the run queue length. Let's say for example you have 1 CPU with 1 core.
GNU Make is a tool which controls the generation of executables and other non-source files of a program from the program's source files. Make gets its knowledge of how to build your program from a file called the makefile, which lists each of the non-source files and how to compute it from other files.
My short answer: --max-load is useful if you're willing to invest the time it takes to make good use of it. With its current implementation there's no simple formula to pick good values, or a pre-fab tool for discovering them.
The build I maintain is fairly large. Before I started maintaining it, the build took 6 hours. With -j64 on a ramdisk it now finishes in 5 minutes (30 on an NFS mount with -j12). My goal here was to find reasonable caps for -j and -l that allow our developers to build quickly but don't make the server (build server or NFS server) unusable for everyone else.
To begin with:
- If you pick a reasonable -jN value (on your machine) and find a reasonable upper bound for load average (on your machine), they work nicely together to keep things balanced (see the example below).
- If you use a very large -jN value (or leave it unspecified; e.g., -j without a number) and limit the load average, gmake will keep spawning jobs until it hits the job cap or the load limit, then hold off starting new ones until the load drops back below the limit (the behavior described in the documentation quoted above).
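As a concrete but purely illustrative starting point (both numbers are assumptions to tune for your own machine, not values taken from this answer), on a 16-core build server you might begin with:

make -j16 -l 16

and then move the job cap and the load limit up or down together while watching the build time and the 1-minute load average.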
On Linux at least (and probably other *nix variants), the load average is an exponential moving average (UNIX Load Average Reweighed, Neil J. Gunther) that represents the average number of processes waiting for CPU time (which can be caused by too many runnable processes, processes waiting for IO, page faults, etc.). Since it's an exponential moving average, it's weighted so that newer samples have a stronger influence on the current value than older samples.
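As a rough sketch of that recurrence (the general shape only; the exact kernel constants are omitted): every few seconds the kernel samples the number n of runnable/uninterruptible tasks and updates each average roughly as

    load_new = load_old * d + n * (1 - d)

where d is a decay factor between 0 and 1 (smaller for the 1-minute average than for the 15-minute one), which is why recent samples dominate the current value.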
If you can identify a good "sweet spot" for the right max load and number of parallel jobs (through a combination of educated guesses and empirical testing), assuming you have a long running build: your 1 min avg will hit an equilibrium point (won't fluctuate much). However, if your -jN number is too high for a given max load average, it'll fluctuate quite a bit.
Finding that sweet spot is essentially equivalent to finding optimal parameters to a differential equation. Since it will be subject to initial conditions, the focus is on finding parameters that get the system to stay at equilibrium as opposed to coming up with a "target" load average. By "at equilibrium" I mean: 1m load avg doesn't fluctuate much.
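A minimal sketch of that kind of empirical search, assuming a build you can clean and re-run and GNU time installed (the candidate -j/-l values and the clean target are placeholders to replace with your own):

for j in 8 16 32 48 64; do
  for l in 8 16 32 64; do
    make clean
    /usr/bin/time -f "j=$j l=$l elapsed=%es" make -j"$j" -l"$l"
  done
done

Plot the elapsed times against the two parameters and look for the region where adding more jobs stops helping and the 1-minute load average stays roughly flat.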
Assuming you're not bottlenecked by limitations in gmake: when you've found a -jN -lM combination that gives a minimum build time, that combination will be pushing your machine to its limits. If the machine needs to be used for other purposes, you may want to scale it back a bit when you're finished optimizing.
Without regard to load average, the improvements I saw in build time with increasing -jN appeared to be [roughly] logarithmic. That is to say, I saw a larger difference between -j8 and -j12 than between -j12 and -j16.
Things peaked for me somewhere between -j48 and -j64 (on the Solaris machine it was about -j56) because the initial gmake process is single-threaded; at some point that thread cannot start new jobs faster than they finish.
My tests were performed on:
- -j64
- $(shell ...) macros are used in recipes; those are run during the 1st parsing pass and cached
- assignments use := to avoid recursive expansion (see the snippet below)
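To illustrate that last point (a generic make snippet with made-up variable names, not taken from the build described above): a := assignment expands its right-hand side once, when the makefile is parsed, while a = assignment re-expands it on every reference, so expensive $(shell ...) results are usually cached with :=.

# evaluated once, at parse time (simply expanded)
GIT_REV := $(shell git rev-parse HEAD)

# re-evaluated every time it is referenced (recursively expanded)
GIT_REV_LAZY = $(shell git rev-parse HEAD)

all:
	@echo Building revision $(GIT_REV)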