
Make -j RAM limits


Is there any way to force make -j not to over-consume my RAM? I work on a dev team and we have different hardware sets, so -j8 may not be optimal for everyone. However, make -j uses too much RAM for me and spills over into swap, which can take down my entire system. How can I avoid this?

Ideally, I would want make to watch the system load, stop spawning new jobs, wait for some to complete, and then continue.

Drise asked Jul 24 '12



2 Answers

A somewhat simple solution would be for each workstation to have an environment variable suited to what its hardware can handle. Have the build read this environment variable and pass it to the -j option. (GNU make imports environment variables as make variables automatically, so the makefile can see it directly.) A sketch of this follows.
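One hedged way to realize this (a sketch, not the answerer's exact setup) is a small wrapper script, where BUILD_JOBS is a hypothetical per-workstation environment variable:

    #!/bin/sh
    # build.sh: sketch only. BUILD_JOBS is a hypothetical environment
    # variable each workstation sets to match its hardware; fall back
    # to 2 jobs if it is unset. All other arguments pass through to make.
    exec make -j"${BUILD_JOBS:-2}" "$@"

Each developer would export BUILD_JOBS=8 (or whatever suits their machine) in their shell profile and run ./build.sh instead of make directly.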

Also, if the build process has many steps and takes a long time, have make re-read the environment variable so that you can reduce or increase resource usage mid-build.

Alternatively, have a service or application running on the workstation that monitors resource usage and adjusts the value, instead of trying to have make do it. A rough sketch of that idea follows.
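As a rough illustration only: a periodically-run script could derive a job count from currently available memory and write it somewhere the build wrapper reads from. The ~1 GiB-per-job figure and the file path here are assumptions, not measurements, and vary wildly by project:

    #!/bin/sh
    # monitor.sh: sketch only. Assumes roughly 1 GiB of RAM per compile
    # job. Reads MemAvailable (reported in kB) from /proc/meminfo on Linux.
    avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
    jobs=$(( avail_kb / (1024 * 1024) ))
    [ "$jobs" -lt 1 ] && jobs=1
    # Write to a file rather than an environment variable, since a
    # separate process cannot change a running make's environment.
    echo "$jobs" > /tmp/build_jobs

The build wrapper would then read /tmp/build_jobs (a hypothetical path) at startup instead of, or in addition to, the environment variable.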

Chimera answered Oct 27 '22


Is it possible that there is some confusion about what make -j does? (At least I had it wrong for a long time.) I assumed that -j without an argument would adapt to the number of CPUs, but it doesn't: it simply doesn't apply any limit. This means that for big projects it will create a huge number of processes (just run top to see), and possibly use up all the RAM. I stumbled on this when make -j on my project used all of my 16 GB of RAM and failed, while make -j 8 topped out at 2.5 GB of RAM usage (on 8 cores; load was close to 100% in both cases).

In terms of efficiency, I think using a limit equal to or greater than the maximum number of CPUs you expect is more efficient than no limit, as scheduling thousands of processes has some overhead. The total number of processes created should be constant either way, but since -j creates a lot of them at once, memory allocation can become a problem. Even setting the limit to twice the number of CPUs is still more conservative than not setting a limit at all.

PS: After some more digging I came up with a neat solution: just read out the number of processors and use that as the -j option:

make -j `nproc` 
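Relatedly, and closer to the original ask of watching system load: GNU make's -l (--max-load) flag stops it from starting new jobs while the load average is above a threshold. Note that it throttles on CPU load, not RAM, so it only partially addresses the memory problem. A combined usage example:

    # Cap parallelism at the CPU count, and additionally hold off on
    # starting new jobs while the load average exceeds that count.
    # -l limits CPU load, not memory use.
    make -j"$(nproc)" -l "$(nproc)"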
sfx answered Oct 27 '22