The only advantage I can think of is compilation speed. The end result (binary size and speed) should be the same in both cases (unless the static library was compiled without optimisations, of course).
Some references would also be appreciated.
Update: This question emerged when we had to include a small third-party open source library in our project. One developer claimed that including a precompiled static library (instead of just copying the source files) would increase the performance of the App. I see no reason why this should be the case.
So the question is: would including a precompiled library really improve the performance of the final App?
Another benefit of using static libraries is execution speed at run time. Because its object code (binary) is already included in the executable file, calls into it can be handled more quickly than calls into a dynamic library, whose code lives in separate files outside of the executable.
What are the differences between static and dynamic libraries? Static libraries, while reusable in multiple programs, are locked into a program at compile time. Dynamic (or shared) libraries, on the other hand, exist as separate files outside of the executable file.
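To make the distinction concrete, here is a minimal sketch of packaging the same (hypothetical) util.c both ways, assuming GCC on Linux:
$ gcc -c util.c -o util.o                # compile once to object code
$ ar rcs libutil.a util.o                # static library: just an archive of .o files
$ gcc -c -fPIC util.c -o util_pic.o      # position-independent code for the shared build
$ gcc -shared -o libutil.so util_pic.o   # dynamic (shared) library: a separate file on disk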
A static library (also known as an archive) consists of routines that are compiled and linked directly into your program. When you compile a program that uses a static library, all the functionality of the static library that your program uses becomes part of your executable.
The major disadvantages of static linking are increases in the memory required to run an executable, network bandwidth to transfer it, and disk space to store it.
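You can see that trade-off by linking a (hypothetical) main.c against each flavour of the library from the sketch above; this again assumes GCC/binutils on Linux, and the exact numbers will vary:
$ gcc -o app_static main.c libutil.a                         # libutil's code is copied into the executable
$ gcc -o app_shared main.c -L. -lutil -Wl,-rpath,'$ORIGIN'   # libutil.so stays a separate file
$ ls -l app_static app_shared    # app_static is typically the larger of the two
$ ldd app_shared                 # lists libutil.so as a run-time dependency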
If you're talking about a third-party library, some advantages for the vendor are: no need to release source code, and (potentially) simpler installation for end developers... although sometimes it turns out to be more of a hassle, especially if it hasn't been done right (e.g. linking in other open source projects without fixing up symbols, or not supporting the right architectures).
If you mean just your own code, it seems like you're creating headaches for yourself. If the files aren't changing, their object files (.o) are already on disk, and the compiler won't rebuild them unless you do a clean/rebuild-all. So you likely won't even gain compilation speed.
Either way - yes, the output should be the same. A statically linked library is just a collection of the same .o files you would have been linking to directly.
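To illustrate the incremental-build point, here is a minimal, hypothetical Makefile sketch (an IDE's build system does the same bookkeeping automatically): only sources whose object files are out of date get recompiled, whether those objects are then linked directly or archived into a .a first.
# hypothetical Makefile; recipe lines must start with a tab
OBJS = a.o b.o

foo: $(OBJS)
	gcc -o $@ $(OBJS)

libfoo.a: $(OBJS)
	ar rcs $@ $(OBJS)

%.o: %.c
	gcc -c -o $@ $<
Editing only b.c and running make rebuilds b.o and relinks foo; a.o is left untouched either way.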
EDIT:
Specifically addressing speed of .o vs .a - .a is simply a collection of .o files for ease of packaging during development. Once linked in, the result is identical. I just did a quick sanity test to verify:
$ cat a.c
#include <stdio.h>
extern char *something();
int main()
{
    printf("%s", something());
    return 0;
}
$ cat b.c
char *something()
{
    return "something fancy here\n";
}
$ gcc -c -o a.o a.c
$ gcc -c -o b.o b.c
$ gcc -o foo1 a.o b.o
$ ar -r b.a b.o
ar: creating archive b.a
$ gcc -o foo2 a.o b.a
$ cmp foo1 foo2
And there you have it, identical binaries by linking .o vs .a.
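A quick way to convince yourself that the archive is nothing more than a wrapper around the object file is to inspect it directly (same b.a as above):
$ ar t b.a                        # list the archive's members
b.o
$ ar p b.a b.o > b_extracted.o    # write the stored member back out to a file
$ cmp b.o b_extracted.o           # no output: the member is byte-identical to b.o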
There is a slight performance hit if you use dynamic libraries instead of static libraries (I believe only when symbols are looked up). Perhaps this is what the other developer was referring to, that static libraries would be slightly faster than dynamic libraries.
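That hit comes from calls into a shared library being resolved through the dynamic linker (on ELF systems, an extra jump through the PLT/GOT, plus symbol lookup at load or first call). Reusing a.o and b.c from the example above, a shared-library variant would look roughly like this, assuming GCC on Linux:
$ gcc -fPIC -shared -o libb.so b.c
$ gcc -o foo3 a.o -L. -lb -Wl,-rpath,'$ORIGIN'
$ ./foo3
something fancy here
In foo3 the call to something() goes through that indirection instead of being a direct call, which is the small (and usually negligible) cost being described; foo1 and foo2 pay no such cost.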