Linking phase in distcc

Is there any particular reason that the linking phase, when building a project with distcc, is done locally rather than sent off to other machines the way compilation is? Reading the distcc white paper didn't give a clear answer, but I'm guessing that the time spent linking object files is not very significant compared to compilation. Any thoughts?

asked Dec 21 '22 by Matthew


1 Answer

The way distcc works is by preprocessing the input files locally until a single, self-contained translation unit is produced. That file is then sent over the network and compiled. At that stage the remote distcc server only needs a compiler; it does not even need the header files for the project. The output of the compilation is then sent back to the client and stored locally as an object file. Note that this means not only linking but also preprocessing is performed locally. That division of work is common to other build tools, like ccache (preprocessing is always performed locally, then ccache tries to match the preprocessed input against previously cached results and, if it succeeds, returns the cached object file without recompiling).
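As a rough illustration, that division of work corresponds to the following plain gcc steps (file names are made up for the example; only step 2 is the part distcc can ship to a remote machine):

    # 1. Preprocess locally: this needs the project's headers, so it stays on the client.
    gcc -E -o foo.i foo.c

    # 2. Compile the self-contained translation unit: this is the step distcc
    #    sends to a remote server, which only needs a compatible compiler.
    gcc -c -x cpp-output -o foo.o foo.i

    # 3. Link locally: this needs every object file and the system libraries on one host.
    gcc -o myprog foo.o bar.o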

If you were to implement a distributed linker, you would have to either ensure that all hosts on the network have exactly the same configuration, or else send all of the required inputs for the operation in one batch. That would mean the distributed compilation produces a set of object files, and all of those object files would have to be pushed over the network for a remote system to link and return the linked binary. Note that the link might pull in system libraries that are referenced through the linker search path but never appear on the linker command line, so a 'pre-link' step would have to determine which libraries actually need to be sent. Even if that were possible, it would require the local system to calculate (or guess) the full set of real dependencies and transmit them, with a large impact on network traffic; it might actually slow the process down, since the cost of sending the inputs could exceed the cost of linking, if computing the dependency set is not itself almost as expensive as the link.
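One way to get a feel for how large that dependency set is (a sketch assuming a GNU toolchain; the file names are hypothetical) is to ask the linker to trace every input it actually opens:

    # Print every object file, static archive member and shared library the
    # linker actually reads while producing the executable.
    gcc -o myprog foo.o bar.o -lm -Wl,--trace

Everything listed would have to exist, or be shipped, on the remote host before it could reproduce the link.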

The project I am currently working on has a single statically linked executable of over 100 MB. The static libraries vary in size, but if a distributed system tried to link that final executable remotely, it would probably have to move three to five times the size of the final executable over the network (templates, inline functions and the like are emitted in every translation unit that includes them, so multiple copies of the same code would be flying around the network).
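A back-of-the-envelope check of that ratio could look like this (hypothetical file names, assuming GNU coreutils):

    # Size in bytes of the final statically linked executable (roughly 100 MB here).
    stat -c %s myprog

    # Combined size of the static archives fed to the link; if it comes out at
    # roughly 3-5x the executable, that is the data a remote link would have to
    # receive for every build.
    cat lib*.a | wc -c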

answered Dec 26 '22 by David Rodríguez - dribeas