
Why should I pass make -j an argument? (rather than leave it blank)

I have seen a lot of discussion about what is a good value of X to pass to make when you are running

make -j X

Usually, people assume X ought to be a function of the number of cores on the system. In my project, I have found the best performance by omitting X and simply running

make -j

If you do not care about reserving resources for other processes and simply want the quickest build, is there any reason to fix X?

asked Feb 17 '14 by user31765


2 Answers

It is possible that for your project, using -j with no argument is the best solution. If you have relatively few jobs that can run in parallel, then it's fine.

However, resources are not infinite. Using -j alone tells make to run every job that can possibly be run, all at once, with no consideration of system resources: it doesn't look at how many CPUs you have, how much memory you have, how high the load on your system is, or anything else.

So if you have a build system that is non-recursive, and/or one that contains hundreds or thousands of files that can be built in parallel (because they don't depend on each other), make will try to run them all at once. Just as your system slows to a crawl and takes longer overall when you try to do too many things at the same time, make running too many jobs will bring your system to its knees.

Try building the Linux kernel with -j, as an example, and see how that works for you :-).
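As an aside (my suggestion, not part of the original answer): if you want the quickest build without overwhelming the machine, a common starting point is to bound the job count by the number of CPUs, e.g. using nproc from GNU coreutils:

    # one job slot per CPU core
    make -j"$(nproc)"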

answered Nov 15 '22 by MadScientist


Update: this throttling only takes effect when the '-l N' (--load-average) flag is specified; see the flag definition in make's source:

 { 'l', floating, &max_load_average, 1, 1, 0, &default_load_average,
      &default_load_average, "load-average" },
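For example (my illustration, not from the original post), you can combine an unbounded -j with a load-average ceiling:

    # no limit on job slots, but don't start new jobs
    # while the load average is above 4.0
    make -j -l 4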

It looks like make tries not to consume too many resources, though; see https://github.com/mirror/make/blob/master/src/job.c

  /* If we are running at least one job already and the load average
     is too high, make this one wait.  */
  if (!c->remote
      && ((job_slots_used > 0 && load_too_high ())
#ifdef WINDOWS32
          || (process_used_slots () >= MAXIMUM_WAIT_OBJECTS)
#endif
          ))
    {
      /* Put this child on the chain of children waiting for the load average
         to go down.  */
      set_command_state (f, cs_running);
      c->next = waiting_jobs;
      waiting_jobs = c;
      return 0;
    }

The comment on load_too_high():

/* Determine if the load average on the system is too high to start a new job.
   The real system load average is only recomputed once a second.  However, a
   very parallel make can easily start tens or even hundreds of jobs in a
   second, which brings the system to its knees for a while until that first
   batch of jobs clears out.

   To avoid this we use a weighted algorithm to try to account for jobs which
   have been started since the last second, and guess what the load average
   would be now if it were computed.

   This algorithm was provided by Thomas Riedl <[email protected]>,
   who writes:

!      calculate something load-oid and add to the observed sys.load,
!      so that latter can catch up:
!      - every job started increases jobctr;
!      - every dying job decreases a positive jobctr;
!      - the jobctr value gets zeroed every change of seconds,
!        after its value*weight_b is stored into the 'backlog' value last_sec
!      - weight_a times the sum of jobctr and last_sec gets
!        added to the observed sys.load.
!
!      The two weights have been tried out on 24 and 48 proc. Sun Solaris-9
!      machines, using a several-thousand-jobs-mix of cpp, cc, cxx and smallish
!      sub-shelled commands (rm, echo, sed...) for tests.
!      lowering the 'direct influence' factor weight_a (e.g. to 0.1)
!      resulted in significant excession of the load limit, raising it
!      (e.g. to 0.5) took bad to small, fast-executing jobs and didn't
!      reach the limit in most test cases.
!
!      lowering the 'history influence' weight_b (e.g. to 0.1) resulted in
!      exceeding the limit for longer-running stuff (compile jobs in
!      the .5 to 1.5 sec. range),raising it (e.g. to 0.5) overrepresented
!      small jobs' effects.

 */

#define LOAD_WEIGHT_A           0.25
#define LOAD_WEIGHT_B           0.25
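In other words, if I read the comment above correctly, the load that make compares against the -l limit is roughly observed_load + LOAD_WEIGHT_A * (jobctr + last_sec), where jobctr is the number of jobs started (minus those that finished) in the current second, and last_sec is the previous second's jobctr scaled by LOAD_WEIGHT_B.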

Moreover, as one can see above, the job count on Windows is limited to MAXIMUM_WAIT_OBJECTS, which is 64.

answered Nov 15 '22 by Andrew Selivanov