 

GCC/Make Build Time Optimizations

We have a project which uses gcc and makefiles. The project consists of one big subproject (an SDK) and many relatively small subprojects that use the SDK and a shared framework.

We use precompiled headers, but that only makes re-compilation faster.

Are there any known techniques and tools to help with build-time optimization? Or do you know of any articles/resources about this or related topics?

asked Apr 02 '09 by inazaruk

2 Answers

You can tackle the problem from two sides: refactor the code to reduce the complexity the compiler is seeing, or speed up the compiler execution.

Without touching the code, you can throw more compilation power at it. Use ccache to avoid recompiling files you have already compiled, and distcc to distribute the build among more machines. Use make -jN, where N is the number of cores + 1 if you compile locally, or a bigger number for distributed builds. That flag runs more than one compiler instance in parallel.
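As a minimal sketch of the above, the snippet below computes a cores + 1 job count with nproc (GNU coreutils) and shows, commented out, one assumed way of wiring in ccache and distcc via ccache's CCACHE_PREFIX; the exact wrapper setup varies per project:

```shell
# Compute a parallel job count of number-of-cores + 1.
JOBS=$(( $(nproc) + 1 ))
echo "building with $JOBS parallel jobs"
# Assumed wrapper setup (illustrative, adjust to your toolchain):
# export CC="ccache gcc"         # cache object files across rebuilds
# export CCACHE_PREFIX=distcc    # hand cache misses to distcc for remote compiles
# make -j"$JOBS"
```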

Refactoring the code: prefer forward declarations to includes (simple), and decouple as much as you can to avoid dependencies (use the PIMPL idiom).

Template instantiation is expensive: templates are recompiled in every compilation unit that uses them. If you can, refactor your templates so they are declared everywhere but explicitly instantiated in only one compilation unit.

answered Oct 02 '22 by David Rodríguez - dribeas

The best I can think of with make is the -j option. This tells make to run as many jobs as possible in parallel:

make -j

If you want to limit the number of concurrent jobs to n, you can use:

make -j n


Make sure the dependencies are correct so make doesn't run jobs it doesn't have to.
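A small sketch of both points, using a scratch Makefile with illustrative file names (assumes GNU make on PATH): the two independent targets build in parallel under -j2, and because the dependencies are declared correctly, a second run rebuilds nothing:

```shell
demo=$(mktemp -d)
cd "$demo"
# Two targets with no mutual dependency, so -j2 can build them in parallel.
printf 'all: a.txt b.txt\na.txt:\n\techo A > a.txt\nb.txt:\n\techo B > b.txt\n' > Makefile
make -j2        # builds a.txt and b.txt concurrently
make -j2        # everything is up to date: make runs no jobs it doesn't have to
cat a.txt b.txt
```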


Another thing to take into account is the optimization that gcc does with the -O switch. You can specify various levels of optimization. The higher the optimization, the longer the compile and link times. A project I work on takes 2 minutes to link with -O3, and half a minute with -O1. You should make sure you're not optimizing more than you need to. You could build without optimization for development builds and with optimization for deployment builds.


Compiling with debug info (gcc -g) will probably increase the size of your executable and may impact your build time. If you don't need it, try removing it to see if it affects you.


The type of linking (static vs. dynamic) should also make a difference. As far as I understand, static linking takes longer (though I may be wrong here). You should see if this affects your build.

answered Oct 02 '22 by Nathan Fellman