My colleague claims that we should split our C++ application (C++, Linux) into shared libraries to improve code modularity, testability and reuse.
From my point of view it's a burden: the code we write does not need to be shared between applications on the same machine, nor does it need to be dynamically loaded or unloaded, so we could simply link a monolithic executable.
Furthermore, wrapping C++ classes with C-function interfaces IMHO makes it uglier.
I also think a single-file application will be much easier to upgrade remotely at a customer's site.
Should dynamic libraries be used when there is no need to share binary code between applications and no dynamic code loading?
As shared libraries cannot be executed directly, they must be linked into an executable or another shared object, so the system linker searches for them during the link step. By convention, a shared library's name starts with the prefix lib and ends with the extension .so, possibly followed by a version suffix (e.g. libfoo.so.1).
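As a minimal sketch of that naming convention, consider a hypothetical library source file (the name greet and the library libgreet.so are made up for illustration); the build and link commands shown in the comments assume g++ on Linux:

```cpp
// greet.cpp — body of a hypothetical shared library.
//
// Build the library (position-independent code is required):
//   g++ -fPIC -shared greet.cpp -o libgreet.so
//
// Link an application against it:
//   g++ main.cpp -L. -lgreet -o app
//
// The linker expands -lgreet to libgreet.so by prepending "lib"
// and appending ".so" — which is why the naming convention matters.
#include <string>

std::string greet(const std::string& name) {
    return "Hello, " + name;
}
```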
Shared libraries are the most common way to manage dependencies on Linux systems. They are loaded into memory when the application starts, and when several processes require the same library, its code is loaded only once on the system. This saves memory across applications.
The most significant advantage of shared libraries is that there is only one copy of the code in memory, no matter how many processes are using the library. With static libraries, each process gets its own copy of the code, which can waste significant memory.
Programs that use shared libraries are usually slower than those that use statically-linked libraries. A more subtle effect is a reduction in "locality of reference." You may be interested in only a few of the routines in a library, and these routines may be scattered widely in the virtual address space of the library.
I'd say that splitting code into shared libraries "to improve modularity" without any immediate goal in mind is a sign of a buzzword-infested development environment. It is better to write code that can easily be split at some point.
But why would you need to wrap C++ classes in C-function interfaces at all, except, maybe, for object creation?
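To illustrate that one exception: when a C++ class lives in a plugin loaded with dlopen(), the lookup via dlsym() needs unmangled symbol names, so only the creation and destruction entry points are typically declared extern "C". This is a sketch with made-up names (Widget, make_widget, destroy_widget):

```cpp
// A plain C++ class exported from a hypothetical plugin library.
class Widget {
public:
    virtual ~Widget() = default;
    virtual int value() const { return 42; }
};

// extern "C" disables C++ name mangling, so a host application can
// locate these two functions with dlsym("make_widget") etc. Everything
// else stays a normal C++ interface.
extern "C" Widget* make_widget() { return new Widget; }
extern "C" void destroy_widget(Widget* w) { delete w; }
```

Once the factory returns a Widget*, the caller uses the class through its ordinary virtual interface; no further C wrapping is needed.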
Also, splitting into shared libraries here sounds like an interpreted-language mindset. In compiled languages you try not to postpone until runtime what you can do at compile time, and unnecessary dynamic linking is exactly that case.
Enforcing shared libraries ensures that libraries don't have circular dependencies. Using shared libraries often leads to faster linkage, and link errors are discovered at an earlier stage than if there is no linking before the final application is linked. If you want to avoid shipping multiple files to customers, you can consider linking the application dynamically in your development environment and statically when creating release builds.
EDIT: I don't really see a reason why you would need to wrap your C++ classes in C interfaces - this is handled behind the scenes. On Linux you can use C++ classes from shared libraries without any special handling. On Windows, however, you would need __declspec(dllexport) and __declspec(dllimport).
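A common way to handle that Windows requirement portably is an export macro, sketched below; the macro names (MYLIB_API, MYLIB_BUILD) and the Counter class are hypothetical, and the Linux branch assumes the common practice of building with -fvisibility=hidden and marking public symbols explicitly:

```cpp
// MYLIB_BUILD would be defined only while compiling the library itself,
// so the library exports the symbols and its users import them.
#if defined(_WIN32)
  #ifdef MYLIB_BUILD
    #define MYLIB_API __declspec(dllexport)
  #else
    #define MYLIB_API __declspec(dllimport)
  #endif
#else
  // On ELF platforms all symbols are visible by default; this attribute
  // only matters if the library is built with -fvisibility=hidden.
  #define MYLIB_API __attribute__((visibility("default")))
#endif

// The whole class is exported in one place; no C wrappers required.
class MYLIB_API Counter {
public:
    int next() { return ++n_; }
private:
    int n_ = 0;
};
```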