 

Makefile profiling

So I have this Makefile-based build system that my users feel is too slow. For the sake of this question, let's define performance as the time it takes make to figure out what it should actually do.

I can see some avenues for optimization --

  • Reducing the number of times the Makefile is parsed and the DAG recalculated because of included Makefile fragments.
  • Reducing the number of descents into external Makefiles with make -C.
  • Reducing variable expansions.
  • etc.

-- however, I first want to know where my bottlenecks are. Since optimization without profiling is a waste of life, I want to ask: how do I profile a Makefile?

Assume that the system I inherited is fairly well designed, i.e. it already implements the most common tricks of the trade: (mostly) non-recursive make, ccache, precompiled headers, auto-generated header dependencies, etc.

... and just to preempt some of the possible answers: I know that there might be faster and better build systems than GNU make (personally, I am eagerly waiting to see what the CMake folks come up with regarding Ninja), but unfortunately swapping build systems is not in the cards.

Chen Levy asked Mar 23 '11


5 Answers

Since you're interested in the time it takes Make to decide what to do, rather than do it, you should look into options for getting Make to not actually do things:

  • -q (question) will have it simply decide what has to be done, do nothing, print nothing, and return an exit status indicating whether anything has to be done. You can simply time this for any target you're interested in (including "all"); see the sketch after this list.
  • -n (no-op) will have it print the recipes, rather than execute them. Watching them scroll by will give you a general sense of how Make is spending its time, and if you like you can do clever piping tricks to time the process.
  • -t (touch) will have it touch the target files that need to be rebuilt, instead of actually rebuilding them. A script could then look at the update times on the files, find the big gaps and tell you which targets required a lot of forethought.
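
For the -q idea, a minimal timing sketch (the target name "all" is an assumption; substitute whatever your users actually build):

    # Time how long make spends deciding whether 'all' is up to date,
    # without running any recipes. Exit status 0 means nothing to do,
    # 1 means something would have been rebuilt.
    time make -q all

On a fully up-to-date tree this is close to a pure measurement of parsing plus DAG evaluation.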

EDIT:

I WAS WRONG.

Make constructs the DAG and decides which targets must be rebuilt before it rebuilds any of them. So once it starts executing rules, printing recipes or touching files, the part of the job we're interested in is over, and the observable timing is worthless. So the -n and -t options are no good, but -q is still useful as a coarse tool. Also, -d will tell you Make's thought process; it won't tell you timing, but it will indicate which targets require many steps to consider.
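
A rough sketch of combining -q and -d (the grep pattern matches the "Considering target file ..." lines GNU make prints in debug mode; treat the exact wording as an assumption and adjust for your make version):

    # Count how often make considers each target while deciding what to do,
    # to spot the targets that dominate the decision phase.
    make -q -d all 2>&1 \
      | grep "Considering target file" \
      | sort | uniq -c | sort -rn | head -20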

Beta answered Sep 22 '22


There have been a couple of efforts to add profiling to GNU make. See for example https://github.com/eddyp/make-profiler

My recent foray into this has been to extend remake to output information in the valgrind callgrind format; then either kcachegrind or gprof2dot can be used for visualization. Right now, check out the profiling branch on GitHub, or see this kcachegrind screenshot.

It is all still a work in progress, and any help would be appreciated. Help can be things like figuring out how to capture things better (there is a lot of information), how to notate it better in the callgrind format, or improving the C code that does the profiling. So you don't necessarily have to be a C programmer to help out.
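
A hedged sketch of what that workflow might look like once the profiling branch is built; the --profile flag and the callgrind.out.* filenames are assumptions based on the description above, so check remake --help for the real options:

    # Run the build under remake with profiling enabled (assumed flag),
    # producing callgrind-format output.
    remake --profile all

    # Browse the profile interactively ...
    kcachegrind callgrind.out.*

    # ... or render a static call graph (requires graphviz).
    gprof2dot -f callgrind callgrind.out.* | dot -Tpng -o make-profile.png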

rocky answered Sep 24 '22


I don't think there is any way to profile the Makefiles themselves.

You could do something, though: for a null build (everything is up to date), run the top-level make under strace -tt -fv and see which parts of the tree, which recursive sub-makes, which file accesses, etc. take unexpectedly long.

Computed variables (var := $(shell ...)), repeated NFS file stat calls, etc. often make make slow.
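
A sketch of that idea; the log filename and the awk post-processing are my additions, not part of the answer:

    # Trace a null build with microsecond timestamps, following sub-makes.
    strace -f -tt -v -o make.strace make

    # Print the largest gaps between consecutive trace lines; big gaps
    # point at the slow spots (make's own work, NFS stats, etc.).
    awk '{ split($2, t, ":"); s = t[1]*3600 + t[2]*60 + t[3];
           if (NR > 1) print s - prev, prevline;
           prev = s; prevline = $0 }' make.strace | sort -rn | head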

Employed Russian answered Sep 24 '22


This is work, but I would get the source of Make, build it with debugging information, run it under gdb, and randomly pause it while you're waiting for it. That will show what it's doing and why. It will probably be necessary to look at more than the call stack, and at the internal data structures as well, because Make is an interpreter. Since Make invokes itself as a subordinate process, that can make the job harder; you would have to figure out how to debug the subordinate invocations.

Since it is so slow, one (1) sample has a very good probability of showing you the problem. If you want more certainty, do it several times.

And don't worry about optimization level - the bottlenecks are probably much bigger than that.
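
A sketch of the mechanics, assuming a GNU make source tree (the directory name and project path are placeholders):

    # Build make itself with debug info and no compiler optimization.
    cd make-4.x
    ./configure CFLAGS="-g -O0"
    make

    # Run the slow build under gdb and interrupt it while it is "thinking".
    gdb --args ./make -C /path/to/slow/project
    (gdb) set follow-fork-mode child    # chase recursive sub-makes too
    (gdb) run
    # ... press Ctrl-C during the long pause ...
    (gdb) bt

The backtrace, together with the data structures it points at, tells you which phase the sample landed in (parsing, implicit-rule search, dependency checking, and so on).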

Mike Dunlavey answered Sep 23 '22


It depends on what you are trying to measure and how complex your Makefiles are.

If you just want to measure the parsing, invoke a trivial target without dependencies (see the sketch below). But this doesn't work well if you have target-specific rules (e.g. due to the use of $(eval)).
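
A minimal sketch, assuming you can add a throwaway target (the name is made up):

    # A do-nothing phony target: building it exercises parsing and $(eval)
    # expansion but almost none of the dependency analysis.
    .PHONY: parse-only
    parse-only: ; @:

Then "time make parse-only" on an otherwise idle tree approximates the pure parse cost of your Makefiles.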

If you also want to measure the inference process (which I believe is much faster), I don't see a way of invoking make that isolates it.

If you also want to include the forking of the shell that executes the commands, things become easy again: set $(SHELL) to something that surrounds the actual shell command execution with timing information.
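
One sketch of that trick: point SHELL at a small wrapper script (the name and log file below are made up) that brackets every recipe with timestamps before handing off to the real shell:

    #!/bin/sh
    # timed-sh: log start/end times around each recipe make runs.
    echo "start $(date +%s.%N) $*" >> /tmp/make-shell.log
    /bin/sh "$@"
    status=$?
    echo "end   $(date +%s.%N)" >> /tmp/make-shell.log
    exit $status

Invoke the build with "make SHELL=./timed-sh" (after chmod +x); gaps between one recipe's "end" and the next recipe's "start" are time spent inside make itself rather than in your commands.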

Another idea is to run the make source code itself under a profiler or to add timing messages.

reinierpost answered Sep 25 '22