Is there any software available on Linux which compiles source code containing a large number of files in parallel, on either multicore or distributed systems? Libraries like gcc or the X server take a very long time to compile on a single-core or dual-core machine, and it is frustrating when you need a lot of recompilation. Is there any technique for compiling such source code in parallel?
For 1000 files, each core of the processor can happily compile one file at a time, keeping all cores totally busy. Tip: make uses multiple cores if you give it the right command line option (-j). Without it, make will compile one file after another even on a 16-core system.
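For example, you can see the difference directly (a quick sketch; this assumes your Makefile has the usual clean target, and the job count of 16 is just matched to the cores in this scenario):

$ time make          # serial: one file after another
$ make clean
$ time make -j16     # parallel: up to 16 files compiled at once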
distcc is designed to speed up compilation by taking advantage of unused processing power on other computers. A machine with distcc installed can send code to be compiled across the network to a computer which has the distccd daemon and a compatible compiler installed. distcc works as an agent for the compiler.
On distributed-memory systems, you can use distcc to farm out compile jobs to other machines. This takes a little bit of setup, but it can really speed up your build if you happen to have some extra machines around.
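A minimal sketch of that setup (the host names red, green, and blue are placeholders for machines on your network running the distccd daemon with a compatible compiler, and the subnet is an example):

$ # on each helper machine: start the daemon and allow your subnet
$ distccd --daemon --allow 192.168.1.0/24
$ # on the build machine: list the hosts distcc may send jobs to
$ export DISTCC_HOSTS='localhost red green blue'
$ # build with distcc driving the compiler; use a -j larger than the
$ # local core count so the remote machines stay busy too
$ make -j12 CC='distcc gcc'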
On shared-memory multicore systems, you can just use make -j, which will spawn parallel build jobs based on the dependencies in your makefiles. You can run it like this:
$ make -j
which will impose no limit on the number of jobs spawned, or you can run with an integer parameter:
$ make -j8
which will limit the number of concurrent build jobs; here, the limit is 8. Usually you want this to be close to the number of cores on your system.
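If you don't want to hard-code that number, one common idiom (assuming GNU coreutils, whose nproc command prints the number of available processing units) is:

$ make -j"$(nproc)"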