There are several discussions on forums about shared vs. static libraries regarding performance. But how do those approaches compare to compiling the code altogether?
In my case, I have a class (the evaluation code) that contains a few methods that contain several for loops and that will be called several times by a method from another class (the evaluator code). I have not finished implementing and testing everything yet. But, for the sake of performance, I am wondering if I should compile all the files altogether (compiler optimization advantages?), or compile some files separately to generate static or shared libraries.
How these approaches compare will depend on your compiler and build options:
Not using libraries: A good compiler and build system will cache intermediate results, so this should build just as fast as the other two. In practice, many code bases are poorly compartmentalized, leading to slow compile times; the classic remedy is to break the project apart into libraries.
Static: The build might be slower than with dynamic linking because static linking gives you the opportunity to run link-time optimization (LTO), which can take a while but lets the compiler optimize across translation units.
Dynamic: Might be slower at runtime when you call a small number of functions, because of specifics of how dynamic loading is implemented (calls into a shared library typically go through an extra level of indirection).
In conclusion, unless you're working on some monster project where you're worried about people mucking up the build system, keep it all in one project and avoid needlessly complicated debugging.