I find that when a project has many classes, compilation time increases dramatically if I use one *.h and one *.cpp file per class. I already use precompiled headers and incremental linking, but the compile time is still very long (yes, I use Boost ;)
So I came up with the following trick:
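A minimal sketch of the kind of "bulk" translation unit I mean (the file and class names here are made up for illustration): instead of compiling each class's .cpp on its own, a handful of .cpp files like this one #include groups of them, so the common headers are parsed only once per group.

```cpp
// bulk_gui.cpp -- one of the ~8 real translation units in the build.
// Each included file is an ordinary per-class .cpp that is excluded
// from being compiled on its own.
#include "MainWindow.cpp"
#include "Toolbar.cpp"
#include "StatusBar.cpp"
#include "Dialogs.cpp"
// ...more GUI-related .cpp files
```

One thing to watch out for with this scheme: file-local statics and anonymous namespaces from the different .cpp files now live in a single translation unit, so names that used to be private to one file can collide.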
So instead of 100+ translation units I ended up with only 8, and the build became 4-5 times faster.
The downsides are that you have to manually include all the *.cpp files (though it's not really a maintenance nightmare: if you forget to include something, the linker will remind you), and that some VS IDE conveniences stop working with this scheme, e.g. Go To/Move to Implementation.
So the question is: is having lots of .cpp translation units really the only true way? Is my trick a known pattern, or am I missing something? Thanks!
One significant drawback of this approach stems from the fact that the linker works at the granularity of one .obj file per translation unit.
If you create a static library for reuse in other projects, you will often end up with bigger binaries in those projects when the library consists of a few huge translation units instead of many small ones. The linker pulls an .obj file out of the library only if the consuming project actually references a function or variable defined in it, but it then pulls in the whole .obj. With big translation units, it's likely that every unit is referenced somewhere, so nearly everything gets included. Bigger binaries may be a problem in some cases. Some toolchains can be told to include only the necessary functions/variables rather than whole .obj files (function-level linking plus dead-code elimination in the linker).
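A small demonstration of this, sketched with GCC/binutils (on MSVC the analogous switches would be `cl /Gy` for function-level linking and `link /OPT:REF` to strip unreferenced code; the file names below are made up). With `-ffunction-sections` each function gets its own section, and `--gc-sections` lets the linker discard the unreferenced ones instead of keeping the whole object file:

```shell
# A library object with one used and one unused function.
cat > lib.cpp <<'EOF'
int used()   { return 0; }  // referenced by main.cpp
int unused() { return 1; }  // never referenced anywhere
EOF
g++ -c -ffunction-sections -fdata-sections lib.cpp -o lib.o
ar rcs libdemo.a lib.o

cat > main.cpp <<'EOF'
int used();
int main() { return used(); }
EOF

# --gc-sections drops the section holding unused(), even though
# lib.o as a whole was pulled in because main() calls used().
g++ main.cpp -L. -ldemo -Wl,--gc-sections -o app
./app
nm app | grep unused || echo "unused() was stripped from the binary"
```

Without `-ffunction-sections`/`--gc-sections`, both functions land in the binary as soon as the linker pulls in `lib.o`.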
Also, if an .obj file is included, all of its global variables come with it, and their constructors/destructors will run at program startup/shutdown, which certainly takes time.