And for 1000 files, each core of the processor can happily compile one file at a time, keeping all cores busy. Tip: "make" uses multiple cores if you give it the right command-line option. Without that option it will compile one file after another, even on a 16-core system.
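For example, with GNU make you can match the job count to the core count; nproc (from GNU coreutils) prints the number of available cores:
make -j"$(nproc)"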
Type 'msconfig' into the Windows Search Box and hit Enter. Select the Boot tab and then Advanced options. Check the box next to Number of processors and select the number of cores you want to use (probably 1, if you are having compatibility issues) from the menu. Select OK and then Apply.
GCC uses a single instance of the rtl_data class, representing the current function being compiled in RTL. So far this should not be a problem, as the RTL expansion and optimization phases are still single-threaded.
The -g option instructs the compiler to generate debugging information during compilation. In some C++ compilers (e.g., Sun/Oracle CC), -g turns on debugging and turns off inlining of functions, while -g0 (zero) turns on debugging without affecting inlining; with g++, -g does not affect inlining, and -g0 produces no debug information at all.
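A minimal sketch (the file and program names here are hypothetical): compile with debug info and optimization disabled, then load the result in gdb:
g++ -g -O0 -o app main.cpp
gdb ./app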
You can do this with make; with GNU make it is the -j flag (this will also help on a uniprocessor machine).
For example if you want 4 parallel jobs from make:
make -j 4
You can also pass gcc the -pipe option:
gcc -pipe
This pipelines the compile stages by using pipes instead of temporary files, which also helps keep the cores busy.
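As a sketch combining the two, assuming your Makefile passes CXXFLAGS through to the compiler (the usual convention):
make -j4 CXXFLAGS='-pipe -O2'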
If you have additional machines available too, you might check out distcc, which will farm compiles out to those as well.
There is no such flag, and having one runs against the Unix philosophy of having each tool perform just one function and perform it well. Spawning compiler processes is conceptually the job of the build system. What you are probably looking for is the -j (jobs) flag to GNU make, a la
make -j4
Or you can use pmake or similar parallel make systems.
People have mentioned make, but bjam also supports a similar concept. Using bjam -jx instructs bjam to build up to x concurrent commands.
We use the same build scripts on Windows and Linux and using this option halves our build times on both platforms. Nice.
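For example, assuming a project that already has a Jamfile/Jamroot set up, eight concurrent build commands would be:
bjam -j8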
make will do this for you. Investigate the -j and -l switches in the man page. I don't think g++ is parallelizable.
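For example (note that the -l value is a load-average cap, not a job count): run at most eight jobs, and hold off starting new ones while the system load average is above eight:
make -j8 -l8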
If using make, issue it with -j. From man make:
-j [jobs], --jobs[=jobs]
    Specifies the number of jobs (commands) to run simultaneously. If there is more than one -j option, the last one is effective. If the -j option is given without an argument, make will not limit the number of jobs that can run simultaneously.
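Concretely, the three forms look like this:
make -j         # no argument: job count is unlimited
make -j4        # at most 4 jobs at once
make --jobs=4   # long-option equivalent of -j4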
And most notably, if you want to script or identify the number of cores you have available (depending on your environment, and if you run in many environments, this can change a lot), you may use the ubiquitous Python function cpu_count():
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.cpu_count
Like this:
make -j $(python3 -c 'import multiprocessing as mp; print(int(mp.cpu_count() * 1.5))')
If you're asking why 1.5, I'll quote user artless-noise in a comment above:
The 1.5 number is because of the noted I/O bound problem. It is a rule of thumb. About 1/3 of the jobs will be waiting for I/O, so the remaining jobs will be using the available cores. A number greater than the cores is better and you could even go as high as 2x.
distcc can also be used to distribute compiles not only on the current machine, but also on other machines in a farm that have distcc installed.
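A minimal sketch, assuming distcc is installed on every machine and using hypothetical host addresses:
export DISTCC_HOSTS='localhost 192.168.0.10 192.168.0.11'
make -j12 CC='distcc gcc' CXX='distcc g++'
Since jobs now run on the remote hosts too, it makes sense to set -j higher than the local core count.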
I'm not sure about g++, but if you're using GNU Make then "make -j N" (where N is the number of jobs make may run simultaneously) will allow make to run multiple g++ jobs at the same time (so long as the files do not depend on each other).
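As a sketch of why independence matters (the file names are hypothetical): in the Makefile below, a.o, b.o, and c.o do not depend on each other, so make -j3 can compile all three at once, while the final link must wait for all of them.
# recipe lines must be indented with a tab
app: a.o b.o c.o
	g++ -o app a.o b.o c.o

%.o: %.cpp
	g++ -c $< -o $@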