Parallelise python loop with numpy arrays and shared-memory

I am aware of several questions and answers on this topic, but haven't found a satisfactory answer to this particular problem:

What is the easiest way to do a simple shared-memory parallelisation of a python loop where numpy arrays are manipulated through numpy/scipy functions?

I am not looking for the most efficient way; I just want something simple to implement that doesn't require a significant rewrite when the loop is not run in parallel, just like OpenMP does in lower-level languages.

The best answer I've seen in this regard is this one, but it is rather clunky: it requires one to rewrite the loop body as a function that takes a single argument, adds several lines of shared-array conversion crud, seems to require that the parallel function be called from __main__, and it doesn't seem to work well from the interactive prompt (where I spend a lot of my time).

With all of Python's simplicity, is this really the best way to parallelise a loop? Really? This is something trivial to parallelise in OpenMP fashion.

I have painstakingly read through the opaque documentation of the multiprocessing module, only to find out that it is so general that it seems suited to everything but a simple loop parallelisation. I am not interested in setting up Managers, Proxies, Pipes, etc. I just have a simple, fully parallel loop that doesn't have any communication between tasks. Using MPI to parallelise such a simple situation seems like overkill, not to mention it would be memory-inefficient in this case.

I haven't had time to learn about the multitude of different shared-memory parallel packages for Python, but was wondering if someone has more experience in this and can show me a simpler way. Please do not suggest serial optimisation techniques such as Cython (I already use it), or using parallel numpy/scipy functions such as BLAS (my case is more general, and more parallel).

asked Oct 25 '12 by tiago



2 Answers

With Cython parallel support:

# asd.pyx
from cython.parallel cimport prange

import numpy as np

def foo():
    cdef int i, j, n

    x = np.zeros((200, 2000), float)

    n = x.shape[0]
    # prange distributes the outer loop over OpenMP threads with the GIL
    # released; each iteration re-acquires the GIL just long enough to call
    # into numpy (np.cos itself releases the GIL during the computation).
    for i in prange(n, nogil=True):
        with gil:
            for j in range(100):
                x[i,:] = np.cos(x[i,:])

    return x

On a 2-core machine:

$ cython asd.pyx
$ gcc -fPIC -fopenmp -shared -o asd.so asd.c -I/usr/include/python2.7
$ export OMP_NUM_THREADS=1
$ time python -c 'import asd; asd.foo()'
real    0m1.548s
user    0m1.442s
sys 0m0.061s

$ export OMP_NUM_THREADS=2
$ time python -c 'import asd; asd.foo()'
real    0m0.602s
user    0m0.826s
sys 0m0.075s

This runs fine in parallel, since np.cos (like other ufuncs) releases the GIL.

If you want to use this interactively:

# asd.pyxbld  -- pyximport looks for <modname>.pyxbld next to the .pyx file
def make_ext(modname, pyxfilename):
    from distutils.extension import Extension
    return Extension(name=modname,
                     sources=[pyxfilename],
                     extra_link_args=['-fopenmp'],
                     extra_compile_args=['-fopenmp'])

and (remove asd.so and asd.c first):

>>> import pyximport
>>> pyximport.install(reload_support=True)
>>> import asd
>>> q1 = asd.foo()
# Go to an editor and change asd.pyx
>>> reload(asd)
>>> q2 = asd.foo()

So yes, in some cases you can parallelize just by using threads. OpenMP is essentially a fancy wrapper around threading, so Cython is only needed here for the nicer syntax. Without Cython, you can use the threading module --- it works similarly to multiprocessing (and is probably more robust), but you don't need to do anything special to declare arrays as shared memory.

However, not all operations release the GIL, so YMMV performance-wise. A pure-threading sketch follows below.
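
For illustration, a minimal sketch of that threading variant (not from the original answer; foo_threaded and worker are hypothetical names). Worker threads update disjoint row blocks of one shared array, and any speed-up relies on np.cos releasing the GIL:

import threading
import numpy as np

def worker(x, rows):
    # Each thread owns a disjoint block of rows, so no locking is needed.
    for i in rows:
        for _ in range(100):
            x[i, :] = np.cos(x[i, :])

def foo_threaded(num_threads=2):
    x = np.zeros((200, 2000), float)
    # Split the row indices into one chunk per thread.
    chunks = np.array_split(np.arange(x.shape[0]), num_threads)
    threads = [threading.Thread(target=worker, args=(x, rows))
               for rows in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x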

***

And another possibly useful link scraped from other Stack Overflow answers --- joblib, another interface to multiprocessing: http://packages.python.org/joblib/parallel.html
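
A minimal sketch of how that joblib interface is typically used (assuming joblib is installed; process_row is a hypothetical helper, not from the linked page). By default the rows are dispatched to worker processes and the results copied back:

import numpy as np
from joblib import Parallel, delayed

def process_row(row):
    # Hypothetical per-row work, mirroring the Cython example above.
    for _ in range(100):
        row = np.cos(row)
    return row

x = np.zeros((200, 2000), float)
rows = Parallel(n_jobs=2)(delayed(process_row)(x[i, :])
                          for i in range(x.shape[0]))
x = np.vstack(rows)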

answered Nov 14 '22 by pv.


Using a mapping operation (in this case multiprocessing.Pool.map()) is more or less the canonical way to parallelize a loop on a single machine, unless and until the built-in map() is ever parallelized. A rough sketch of that pattern follows below.
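
A minimal sketch of the Pool.map() pattern (process_row and the array are placeholders, mirroring the example in the other answer). Note that with worker processes the rows are pickled to the workers and the results copied back rather than shared:

import numpy as np
from multiprocessing import Pool

def process_row(row):
    # Placeholder per-row work.
    for _ in range(100):
        row = np.cos(row)
    return row

if __name__ == '__main__':
    x = np.zeros((200, 2000), float)
    pool = Pool(processes=2)
    x = np.array(pool.map(process_row, x))  # map over the rows of x
    pool.close()
    pool.join()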

An overview of the different possibilities can be found here.

You can use OpenMP with Python (or rather Cython), but it doesn't look exactly easy.

IIRC, the point of only running multiprocessing stuff from __main__ is a necessity because of compatibility with Windows. Since Windows lacks fork(), it starts a new Python interpreter and has to import the code into it.

Edit

Numpy can parallelize some operations like dot(), vdot() and innerproduct() when configured with a good multithreaded BLAS library such as OpenBLAS. (See also this question.)

Since numpy array operations are mostly element-by-element, it seems possible to parallelize them. But this would involve setting up either a shared-memory segment for Python objects, or dividing the arrays into pieces and feeding them to different processes, not unlike what multiprocessing.Pool does. No matter which approach is taken, it would incur memory and processing overhead to manage all that. One would have to run extensive tests to see for which array sizes this would actually be worth the effort. The outcome of those tests would probably vary considerably per hardware architecture, operating system and amount of RAM. A sketch of the shared-memory variant follows below.
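
To make that shared-memory variant concrete, here is a minimal sketch (not from this answer) using multiprocessing.Array as the shared buffer; each worker process re-wraps the buffer as a numpy array and updates its own rows in place:

import numpy as np
from multiprocessing import Array, Process

def worker(shared, shape, rows):
    # Re-wrap the shared buffer as a numpy array inside the worker.
    x = np.frombuffer(shared.get_obj()).reshape(shape)
    for i in rows:
        for _ in range(100):
            x[i, :] = np.cos(x[i, :])

if __name__ == '__main__':
    shape = (200, 2000)
    shared = Array('d', shape[0] * shape[1])  # doubles, zero-initialised
    chunks = np.array_split(np.arange(shape[0]), 2)
    procs = [Process(target=worker, args=(shared, shape, rows))
             for rows in chunks]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    result = np.frombuffer(shared.get_obj()).reshape(shape)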

answered Nov 14 '22 by Roland Smith