
#include all .cpp files into a single compilation unit?

I recently had cause to work with some Visual Studio C++ projects with the usual Debug and Release configurations, but also 'Release All' and 'Debug All', which I had never seen before.

It turns out the author of the projects has a single ALL.cpp which #includes all other .cpp files. The *All configurations just build this one ALL.cpp file; ALL.cpp itself is excluded from the regular configurations, which build the individual .cpp files as usual.
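
For illustration, such a file might look something like this (the file names here are hypothetical, not taken from the actual projects):

```cpp
// ALL.cpp - hypothetical sketch of the single "unity" translation unit.
// The *All configurations compile only this file; the individual .cpp
// files below are excluded from those configurations.
#include "Widget.cpp"
#include "Renderer.cpp"
#include "Network.cpp"
// ...one #include per .cpp file in the project
```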

I just wondered if this was a common practice? What benefits does it bring? (My first reaction was that it smelled bad.)

What kinds of pitfalls are you likely to encounter with this? One I can think of: anything in an anonymous namespace in a .cpp is no longer 'private' to that .cpp but becomes visible to the other .cpps as well?

All the projects build DLLs, so having data in anonymous namespaces wouldn't be a good idea, right? But functions would be OK?
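
To make that anonymous-namespace pitfall concrete, here is a hypothetical pair of source files that are fine on their own but clash once ALL.cpp pulls both into one translation unit:

```cpp
// Widget.cpp (hypothetical)
namespace {
    int counter = 0;                     // meant to be private to Widget.cpp
}
void touch_widget() { ++counter; }

// Renderer.cpp (hypothetical)
namespace {
    int counter = 0;                     // fine in isolation, but a redefinition
}                                        // error once ALL.cpp #includes both
void touch_renderer() { ++counter; }     // files into the same translation unit
```

Even where names don't collide, helpers and data in an anonymous namespace of one file become visible to (and shared with) code from every other file in the unity translation unit.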

asked Feb 12 '09 by Steve Folly


4 Answers

It's referred to by some (and google-able) as a "Unity Build". It links insanely fast and compiles reasonably quickly as well. It's great for builds you don't need to iterate on, like a release build from a central server, but it isn't necessarily suited to incremental building.

And it's a PITA to maintain.

EDIT: here's the first google link for more info: http://buffered.io/posts/the-magic-of-unity-builds/

The thing that makes it fast is that the compiler only needs to read in and parse everything once, compile it, then link, rather than doing all of that for every .cpp file.

Bruce Dawson has a much better write up about this on his blog: http://randomascii.wordpress.com/2014/03/22/make-vc-compiles-fast-through-parallel-compilation/

answered by MSN


Unity builds improve build speeds for three main reasons. The first is that all of the shared header files only need to be parsed once. Many C++ projects have a lot of header files that are included by most or all of the .cpp files, and the redundant parsing of these is the main cost of compilation, especially if you have many short source files. Precompiled header files can help with this cost, but usually there are a lot of header files which are not precompiled.

The next main reason that unity builds improve build speeds is because the compiler is invoked fewer times. There is some startup cost with invoking the compiler.

Finally, the reduction in redundant header parsing means a reduction in redundant code-gen for inlined functions, so the total size of object files is smaller, which makes linking faster.

Unity builds can also give better code-gen.
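
As a hypothetical illustration of the code-gen point: a function defined in one .cpp can only be inlined into a caller in another .cpp if both end up in the same translation unit (or if link-time code generation is enabled), which is exactly what a unity build arranges:

```cpp
// MathUtil.cpp (hypothetical)
int clamp01(int v) { return v < 0 ? 0 : (v > 1 ? 1 : v); }

// Pixel.cpp (hypothetical)
int clamp01(int v);           // a normal build sees only this declaration here,
                              // so the call below stays an out-of-line call
int shade(int v) {
    return clamp01(v) * 255;  // in a unity build the definition above is in the
}                             // same translation unit and can be inlined
```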

Unity builds are NOT faster because of reduced disk I/O. I have profiled many builds with xperf and I know what I'm talking about. If you have sufficient memory then the OS disk cache will avoid the redundant I/O - subsequent reads of a header will come from the OS disk cache. If you don't have enough memory then unity builds could even make build times worse by causing the compiler's memory footprint to exceed available memory and get paged out.

Disk I/O is expensive, which is why all operating systems aggressively cache data in order to avoid redundant disk I/O.

answered by Bruce Dawson


I wonder if that ALL.cpp is attempting to put the entire project within a single compilation unit, to improve the compiler's ability to optimize the program for size?

Normally some optimizations, such as removal of duplicate code and inlining, are only performed within a single compilation unit.

That said, I seem to remember that recent compilers (Microsoft's, Intel's, but I don't think this includes GCC) can do this optimization across multiple compilation units, so I suspect this 'trick' is unnecessary.

Still, it would be interesting to see whether there is indeed any difference.

answered by Arafangion


I agree with Bruce. From my experience, I tried implementing a unity build for one of my .dll projects, which had a ton of header includes and lots of .cpps, to bring down the overall compilation time in VS2010 (I had already exhausted the incremental build options). But rather than cutting the compilation time down, I ran out of memory and the build wasn't even able to finish compiling.

To add to that, I did find that enabling the Multi-Processor Compilation option in Visual Studio helps quite a bit in cutting down compilation time; I am not sure whether a similar option is available in other compilers.

answered by spforay