Various Python packages such as NumPy, SciPy, and pandas can utilize OpenMP to run on multiple CPUs. As an example, let's run the Python script python_openmp.py, which calculates the multiplicative inverses of five symmetric matrices of size 2000x2000.
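The script itself is not shown in the text; a reconstruction of what it might look like (the matrix construction details are illustrative) is:

```python
# python_openmp.py -- a sketch of the benchmark described above.
# NumPy's linear-algebra routines call into a BLAS/LAPACK backend
# (e.g., OpenBLAS or MKL) that can use multiple threads via OpenMP;
# the thread count is typically controlled with OMP_NUM_THREADS.
import numpy as np

n = 2000
matrices, inverses = [], []
for _ in range(5):
    a = np.random.rand(n, n)
    m = a + a.T + n * np.eye(n)  # symmetric and well-conditioned
    matrices.append(m)
    inverses.append(np.linalg.inv(m))
```

Running it with OMP_NUM_THREADS=1 versus OMP_NUM_THREADS=4 shows whether your NumPy build actually uses the extra cores.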
OpenMP (Open Multiprocessing) is an API that supports multi-platform shared memory multiprocessing programming in C, C++, and Fortran, on most processor architectures and operating systems, including Solaris, AIX, HP-UX, GNU/Linux, Mac OS X, and Windows platforms.
OpenMP is typically used for loop-level parallelism, but it also supports function-level parallelism. This mechanism is called OpenMP sections. The structure of sections is straightforward and can be useful in many instances. Consider one of the most important algorithms in computer science, the quicksort.
What is OpenMP? OpenMP is an implementation of multithreading, a method of parallelizing in which a master thread (a series of instructions executed consecutively) forks a specified number of slave threads and the system divides a task among them.
Cython has OpenMP support: with Cython, OpenMP can be added by using the prange (parallel range) operator and adding the -fopenmp compiler directive to setup.py.
When working in a prange stanza, execution is performed in parallel because we disable the global interpreter lock (GIL) by using a with nogil: block to specify where the GIL is disabled.
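A minimal .pyx sketch of this pattern (the function name and the file name cython_np.pyx come from the surrounding text; the computation itself is illustrative) might look like:

```cython
# cython_np.pyx -- illustrative sketch of prange with the GIL released
cimport cython
from cython.parallel import prange

@cython.boundscheck(False)
@cython.wraparound(False)
def square(double[:] xs, double[:] out):
    cdef Py_ssize_t i
    with nogil:
        # Only C-level operations are allowed in this block;
        # no Python objects may be touched.
        for i in prange(xs.shape[0], schedule='static'):
            out[i] = xs[i] * xs[i]
```

This file must be compiled (with the setup.py shown below) before it can be imported from Python.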
To compile cython_np.pyx we have to modify the setup.py script as shown below. We tell it to pass -fopenmp to the C compiler during compilation, to enable OpenMP, and during linking, to link against the OpenMP libraries.
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

setup(
    cmdclass={"build_ext": build_ext},
    ext_modules=[
        Extension(
            "calculate",
            ["cython_np.pyx"],
            extra_compile_args=["-fopenmp"],
            extra_link_args=["-fopenmp"],
        )
    ],
)
With Cython's prange, we can choose among different scheduling approaches. With static, the workload is distributed evenly across the available CPUs. However, some of your calculation regions may be expensive in time while others are cheap; if we ask Cython to schedule the work chunks equally using static across the CPUs, then the results for some regions will complete faster than others, and those threads will then sit idle.
Both the dynamic and guided schedule options attempt to mitigate this problem by allocating work in smaller chunks dynamically at runtime, so that the work is spread more evenly across the CPUs when the workload's calculation time is variable. Thus, for your code, the correct choice will vary depending on the nature of your workload.
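In Cython the schedule is selected with a keyword argument to prange; a fragment illustrating this (expensive_step is a hypothetical per-element function, not from the original text):

```cython
# 'dynamic' and 'guided' hand out smaller chunks at runtime,
# which helps when per-iteration cost varies.
for i in prange(n, nogil=True, schedule='guided'):
    out[i] = expensive_step(i)
```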
Numba's premium version, NumbaPro, has experimental support for a prange parallelization operator for working with OpenMP.
Pythran (a Python-to-C++ compiler for a subset of Python) can take advantage of vectorization possibilities and of OpenMP-based parallelization possibilities, though it runs using Python 2.7 only. You specify parallel sections using pragma omp directives (very similar to Cython's OpenMP support described above), e.g.:
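A hedged sketch of what such a file can look like (the function name is illustrative; Pythran's export line and omp annotation are written as comments, so without Pythran the file still runs as ordinary Python):

```python
# pythran export pairwise_sum(float list, float list)

def pairwise_sum(a, b):
    n = len(a)
    res = [0.0] * n
    # omp parallel for
    for i in range(n):
        res[i] = a[i] + b[i]
    return res
```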
The JIT Python compiler PyPy supports the multiprocessing module (see following) and has a project called PyPy-STM "a special in-development version of PyPy which can run multiple independent CPU-hungry threads in the same process in parallel".
OpenMP is a low-level interface to multiple cores. You may want to look at multiprocessing.
The multiprocessing module works at a higher level, sharing Python data structures, while OpenMP works with C primitive objects (e.g., integers and floats) once you've compiled to C. It only makes sense to use OpenMP if you're compiling your code; if you're not compiling (e.g., if you're using efficient numpy code and you want to run on many cores), then sticking with multiprocessing is probably the right approach.
Because of the GIL, there is no point in using threads for CPU-intensive tasks in CPython. You need either multiprocessing (example) or C extensions that release the GIL during computations, e.g., some of NumPy's functions (example).
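For instance, a minimal multiprocessing sketch for a CPU-bound task (the function name is illustrative):

```python
# Each worker is a separate process with its own interpreter and GIL,
# so CPU-bound work genuinely runs in parallel.
from multiprocessing import Pool

def cpu_bound(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = pool.map(cpu_bound, [100_000] * 4)
    print(results)
```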
You could easily write C extensions that use multiple threads in Cython (example).
To the best of my knowledge, there is no OpenMP package for Python (and I don't know what it would do if there were one). If you want threads directly under your control, you will have to use one of the threading libraries. However, as pointed out by others, the GIL (Global Interpreter Lock) makes multi-threading in Python for performance a little... well, pointless*. The GIL means that only one thread can access the interpreter at a time.
I would suggest looking at NumPy/SciPy instead. NumPy lets you write Matlab-esque code where you are operating on arrays and matrices with single operations. It has some parallel processing capabilities as well, see the SciPy Wiki.
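For instance, a single whole-array expression replaces an explicit Python loop:

```python
import numpy as np

# One vectorized expression over a million elements; the loop runs
# in compiled C inside NumPy, not in Python bytecode.
x = np.arange(1_000_000, dtype=np.float64)
y = np.sqrt(x) + 2.0 * x
```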
Other places to start looking:
* OK, it isn't pointless, but unless the time is consumed outside of Python code (like by an external process invoked via popen or some such), the threads aren't going to buy you anything other than convenience.
If you want to release the GIL and use OpenMP, you can take a look at Cython. It offers simple parallelism for some common tasks. You can read more in the Cython documentation.
Maybe the answer for you is Cython:
"Cython supports native parallelism through the cython.parallel module. To use this kind of parallelism, the GIL must be released (see Releasing the GIL). It currently supports OpenMP, but later on more backends might be supported." Cython Documentation
There is a package called pymp, which its author describes as a package that brings OpenMP-like functionality to Python. I have tried it, though with a different use case: file processing. It worked, and I think it is quite simple to use. Below is a sample taken from the GitHub page:
import pymp

ex_array = pymp.shared.array((100,), dtype='uint8')
with pymp.Parallel(4) as p:
    for index in p.range(0, 100):
        ex_array[index] = 1
        # The parallel print function takes care of asynchronous output.
        p.print('Yay! {} done!'.format(index))