
Why are shared and static libraries different things?

For an application developer, the difference between shared (.so) and static (.a) libraries is entirely a difference in how you use them: roughly speaking, whether the library code you need is copied into your program at build time, or just referenced from your program and loaded at run time.

Conceptually (and naively) it seems there could just be one kind of library. Static versus dynamic linking would be an option you select when building your own application. What are the technical differences between .so and .a that require this choice to be made when building the library, not when building your application?

An analogy: At a restaurant, you may order food to stay or to go, but this is your choice of how to "use" the food; the chef cooks you the same hamburger.

asked Dec 20 '22 by Ray in NY


1 Answer

So I see lots of answers talking about why you would want to use shared libraries instead of static libraries, but I think your question is why they are even distinct things nowadays, i.e. why isn't it possible to use a shared library as a static library and pull what you need out of it at build time?

Here are some reasons. Some of these are historical - keep in mind that something as fundamental as binary formats changes very slowly in computer systems.

Compiled Differently

Code can be compiled either to be dependent on the address it sits at (position-dependent) or independent of it (position-independent). This affects things like how globals are loaded, how functions are called, and so on. Position-dependent code needs fixups if it isn't loaded at the address it expects, i.e. the loader has to go over the code and actually patch addresses and offsets.

For executables, this isn't a problem. An executable is the first thing that is loaded into the address space, so it will always be loaded at the same address. You generally don't need any fixups. But a shared library is used by different executables, by different processes. Multiple libraries can conflict: if they expect to be at overlapping address ranges, one will have to budge. When it does, and it is position-dependent, it needs to be fixed by the loader. But now you have process-specific changes in the library code, which means the code can't be shared (at runtime) with other processes anymore. You lose one of the big benefits of shared libraries.

If the shared library uses position-independent code (PIC), it doesn't need fixups. So PIC is good for shared libraries. On the other hand, PIC is slower on some architectures (notably x86, but not x64), so compiling executables as PIC is a waste of resources.

Executables were therefore usually compiled as position-dependent code, while shared libraries were compiled as position-independent code. If you pulled code directly out of a shared library into an executable, you would get PIC. If you want position-dependent code, you need a separate repository of object code, and that's what a static library is.
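
To make this concrete, here is a minimal sketch of the two build flavors on Linux with gcc (the file and library names are made up for illustration; the flags and tools are the standard ones):

    /* foo.c -- a tiny library source file (hypothetical) */
    int foo_add(int a, int b) {
        return a + b;
    }

    /*
     * Static library: traditionally compiled WITHOUT -fPIC
     * (position-dependent) and then archived:
     *
     *     gcc -c foo.c -o foo.o
     *     ar rcs libfoo.a foo.o
     *
     * Shared library: compiled as position-independent code and
     * linked into a single loadable object:
     *
     *     gcc -fPIC -c foo.c -o foo_pic.o
     *     gcc -shared -o libfoo.so foo_pic.o
     */

You can compare the two object files with readelf -r: the relocation types differ, because the PIC object reaches globals and external functions through the GOT/PLT instead of expecting them at fixed addresses.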

Of course, on most modern architectures, PIC isn't less efficient than PDC, and security techniques like address space randomization make it useful to compile executables as PIC too, so this is more of a historical reason than a current one.

Contain Different Things

But there's another, more current reason for separating static and shared libraries, and that's link-time optimization.

Basically, the more information an optimizer has about a program, the better it can reason about it. Classical optimizers worked on a per-module basis: compile a .c file, optimize it, generate object code. The linker then took all the object files and merged them together. This meant the optimizer could only reason about one module at a time. It could not look into called functions outside the module in order to reason about them, or even simply inline them.

In modern toolchains, however, the compiler often works differently. Instead of compiling and optimizing a module and then producing object code, it takes a module, produces an intermediate form, possibly optimizes it a bit, and then puts the intermediate form into the object file. The linker, instead of just merging object files and resolving references, actually merges the intermediate representation and then invokes the optimizer and code generator on the merged form. With much more information available, the optimizer can do a vastly better job.
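
Here is a hedged sketch of what this looks like in practice with gcc, where -flto is the relevant switch (file names are illustrative):

    /* mul.c -- a function in its own translation unit */
    int mul(int a, int b) { return a * b; }

    /* main.c */
    int mul(int a, int b);

    int main(void) {
        /* With classic per-module compilation, the optimizer cannot
         * see mul()'s body here. With -flto, the object files carry
         * the compiler's intermediate representation, and the link
         * step can inline mul() and fold this call to a constant. */
        return mul(6, 7);
    }

    /*
     * Classic build (per-module optimization only):
     *     gcc -O2 -c mul.c main.c
     *     gcc -O2 -o prog mul.o main.o
     *
     * LTO build (optimization across modules at link time):
     *     gcc -O2 -flto -c mul.c main.c
     *     gcc -O2 -flto -o prog mul.o main.o
     */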

This intermediate representation is more detailed, more faithful to the original code, than machine code is. You want this for your compilation process. You don't want to ship it to the customer, because it is much bigger and, if you use a closed-source model, also because it is much easier to reverse-engineer. Moreover, there's no point in shipping it: the loader doesn't understand it, and you don't want to re-optimize and recompile your program at startup time anyway (JIT languages aside).

Thus, a shared library contains real object code. A static library, on the other hand, is a good container for intermediate code, because it is consumed by the linker. This is a key difference between static and shared libraries.
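
One practical consequence, at least with gcc: an archive of LTO object files is exactly such a container of intermediate code, and you typically need the toolchain's plugin-aware wrappers to work with it. A sketch, assuming gcc on Linux:

    /*
     * foo.o here contains gcc's intermediate representation rather
     * than plain machine code, so use the plugin-aware wrappers:
     *
     *     gcc -O2 -flto -c foo.c -o foo.o
     *     gcc-ar rcs libfoo.a foo.o    # instead of plain ar
     *     gcc-nm libfoo.a              # instead of plain nm
     *
     * -ffat-lto-objects makes gcc emit machine code alongside the
     * intermediate form, so the same archive also works for a
     * non-LTO link, at the cost of bigger object files.
     */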

Linkage Model

Finally, we have another semi-historical reason: linkage.

Linkage defines how a symbol (a variable or function name) is visible outside a code unit. The C language defines two linkages: internal (not visible outside the compilation unit, i.e. static) and external (visible to the whole program, i.e. extern). You generally have a lot of externally visible symbols.
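
A quick C sketch of the two linkages:

    /* Internal linkage: visible only inside this translation unit. */
    static int call_count = 0;
    static int helper(int x) { return x + 1; }

    /* External linkage: visible to the whole program (the default
     * for file-scope functions and variables in C). */
    int api_entry(int x) {
        call_count++;
        return helper(x);
    }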

Shared libraries, however, have their symbols resolved at load time, and this should be fast. Fewer symbols means lookup in the symbol table is faster. Of course this was more relevant when computers were slower, but it still can have a noticeable effect. It also affects the size of the libraries.

Therefore, the object file specifications used by operating systems (ELF on *nix, PE/COFF on Windows) define separate visibilities for shared libraries. Instead of making everything that's external in C visible, you have the option to specify the visible functions explicitly. (On Windows, only things annotated as __declspec(dllexport) or listed in a .def file are exported from a DLL. On Linux, everything extern is exported by default, but you can use __attribute__((visibility("hidden"))) to opt out, or specify the -fvisibility=hidden command-line switch or the visibility pragma to override the default.)
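
Here is a minimal sketch of that explicit-export style (the macro and file names are made up; the attributes and switches are the real ones mentioned above):

    /* api.c -- hypothetical shared library source */
    #if defined(_WIN32)
    #  define API_EXPORT __declspec(dllexport)
    #else
    #  define API_EXPORT __attribute__((visibility("default")))
    #endif

    /* Exported: part of the library's public interface. */
    API_EXPORT int lib_public(int x) { return x * 2; }

    /* Not exported when the library is built with
     * -fvisibility=hidden (or when annotated as hidden). */
    int lib_internal(int x) { return x + 1; }

    /*
     * Build and inspect on Linux:
     *     gcc -fPIC -fvisibility=hidden -shared -o libapi.so api.c
     *     nm -D libapi.so   # dynamic symbols: lib_public, not lib_internal
     */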

The end result is that a shared library throws away all symbol information except for the exported symbols.

A static library has no need to throw away any symbol information. What's more, you don't want to do that, because carefully specifying which functions are exported and which aren't is some work, and you don't want to have to do that work unless necessary. If you're using static libraries, it isn't necessary.

So a shippable shared library should minimize its exported symbols in order to be fast and small. This makes it less useful as a code repository for static linking, where you may want a greater selection of functions to link in, especially once the interface functions get inlined (see link-time optimization above).

answered Dec 28 '22 by Sebastian Redl