
GCC build time doesn't benefit much from precompiled headers

I have a huge project, about 150,000 lines of C++ code. The build takes about 15 minutes. The project consists of many sub-projects of different sizes.

I have built separate precompiled headers for each sub-project, but when I use them the build time stays roughly the same; it seems to be at most 5-10% faster.

The precompiled headers are definitely used: I compile with the -Winvalid-pch option, and when I try the -H compiler option my precompiled headers appear in the output marked with a '!' (bang), which means the compiler is able to use them.

None of my precompiled headers is very large; each file is about 50 MB. I use a Python script, found here, to generate a list of the most-included headers, so my list of precompilation candidates is quite good.

Are there any free/open-source tools for build optimization? The standard make utility doesn't seem able to measure the build times of different targets, and I can't find a way to get per-target statistics from it. I'm not talking about dependency analysis or anything advanced; I just want to know which targets most of the build time is spent on.
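One low-tech way to get per-target times out of a plain make build is to wrap the compiler in a small timing script (the wrapper name `timed-g++` and the log file are invented for this sketch; `date +%s%N` assumes GNU date):

```shell
# Write a wrapper that times each compiler invocation, appends the
# result to a log, and delegates to the real g++.
cat > timed-g++ <<'EOF'
#!/bin/sh
start=$(date +%s%N)
g++ "$@"
status=$?
end=$(date +%s%N)
echo "$(( (end - start) / 1000000 )) ms  $*" >> build-times.log
exit $status
EOF
chmod +x timed-g++

# Build with:   make CXX="$PWD/timed-g++"
# Then list the slowest translation units:
#   sort -rn build-times.log | head
```

This works with any make-based build without touching the makefiles, since most of them respect the `CXX` variable.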

Also, GCC seems quite inefficient at dealing with precompiled headers. I was unable to make any sub-project build notably faster; the maximum speedup I got was 20%, on a project that takes three minutes to build. It seems easier and cheaper to buy a faster machine with a solid-state drive than to optimize build time on Linux with GCC.

asked Nov 09 '12 by Evgeny Lazin




2 Answers

GCC build time doesn't benefit much from precompiled headers

Yes, unfortunately that's often true.

There are some experimental projects to do something better, see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3426.html and http://gcc.gnu.org/wiki/pph, but they're not usable yet.

I agree with the other answer that 15 minutes for 150KLOC is quite slow.

I've found that using the Gold linker makes a huge difference to build times; I highly recommend it.

You could also consider ccache, which can help, and, if you have spare cycles on other machines, distcc.
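A ccache setup can be as simple as prefixing the compiler. Here is a small demonstration (guarded, since ccache may not be installed; `demo.cpp` is a throwaway file for the example):

```shell
# ccache keys its cache on the preprocessed source, so a second
# identical compile is served from the cache instead of rerunning g++.
printf 'int main() { return 0; }\n' > demo.cpp
if command -v ccache >/dev/null 2>&1; then
    ccache g++ -c demo.cpp -o demo.o   # cold: real compile, output cached
    ccache g++ -c demo.cpp -o demo.o   # warm: served from the cache
    ccache -s                          # statistics should show a hit
fi
# In a make-driven build you would typically just run:
#   make CXX="ccache g++"
```

The big win is for rebuilds after `make clean` or when switching between branches, where most object files come straight out of the cache.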

Avoid building on slow disks, and certainly avoid networked disks. Avoid recursive invocations of make, which spend more time reading makefiles and recreating dependency graphs. If you can structure your sub-project makefiles so they can all be included by a single top-level makefile, a non-recursive make will take a little longer to get started but will fly once it starts building targets. Rewriting the makefiles that way can be a lot of work, though.

And it probably goes without saying, but build on a multicore machine and use make -j N where a good rule of thumb is that N should be twice the number of cores, or more if the compilation is I/O bound.
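That rule of thumb translates into something like the following (assuming `nproc` from GNU coreutils is available):

```shell
# N = twice the number of cores, per the rule of thumb above;
# raise it further if compilation turns out to be I/O bound.
cores=$(nproc)
jobs=$(( cores * 2 ))
echo "make -j$jobs"
# make -j"$jobs"    # run this form inside a real build tree
```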

answered Oct 19 '22 by Jonathan Wakely


If you want to get the most out of precompiled headers, you need to understand how your projects can be structured to make good use of them. The best way is the slow, hard process of manually reducing build times. It sounds really tedious at first, but if all builds going forward are five times faster and you know how to structure your projects and dependencies from then on, you realize the payoff.

You can set up a continuous integration system for your targets to measure and record your progress/improvements as changes come in.

I have a huge project, about 150,000 lines of C++ code. The build takes about 15 minutes. The project consists of many sub-projects of different sizes.

Sounds like it's doing a lot of redundant work, assuming you have a modern machine.

Also consider link times.

None of my precompiled headers is very large; each file is about 50 MB.

That's pretty big, IMO.

I'm not talking about dependency analysis or something advanced.

Again, use continuous integration for stats. For a build that slow, excessive dependencies are very likely the issue (unless you have many, many small .cpp files, or something silly like physical-memory exhaustion is occurring).

I was unable to make any sub-project build notably faster; the maximum speedup I got was 20%.

Understand your structures and dependencies. PCHs slow down most of my projects.

It seems easier and cheaper to buy a faster machine with a solid-state drive than to optimize build time on Linux with GCC.

Chances are, that machine will not make your build 20x faster, but fixing up your dependencies and project structures can (or whatever the root of the problem ultimately is). The machine helps only so much, considering the build time for 150 KSLOC.

Your build is probably CPU/memory bound.

answered Oct 19 '22 by justin