What's the difference between python's multiprocessing and concurrent.futures?

A simple way of implementing multiprocessing in Python is:

from multiprocessing import Pool

def calculate(number):
    return number

if __name__ == '__main__':
    pool = Pool()
    result = pool.map(calculate, range(4))

An alternative implementation based on futures is:

from concurrent.futures import ProcessPoolExecutor

def calculate(number):
    return number

with ProcessPoolExecutor() as executor:
    result = executor.map(calculate, range(4))

Both alternatives do essentially the same thing, but one striking difference is that we don't have to guard the code with the usual if __name__ == '__main__' clause. Is this because the implementation of futures takes care of this, or is there a different reason?

More broadly, what are the differences between multiprocessing and concurrent.futures? When is one preferred over the other?

EDIT: My initial assumption that the guard if __name__ == '__main__' is only necessary for multiprocessing was wrong. Apparently, the guard is needed for both implementations on Windows, while it is not necessary on Unix systems.

asked Jul 22 '14 by David Zwicker


1 Answer

You actually should use the if __name__ == "__main__" guard with ProcessPoolExecutor, too: it's using multiprocessing.Process to populate its Pool under the covers, just like multiprocessing.Pool does, so all the same caveats regarding picklability (especially on Windows) and so on apply.
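
Here's a minimal sketch of the futures version from the question with the guard in place (same toy calculate function):

from concurrent.futures import ProcessPoolExecutor

def calculate(number):
    return number

if __name__ == '__main__':
    # The guard keeps worker processes (which re-import this module,
    # particularly under the Windows "spawn" start method) from
    # recursively creating executors of their own.
    with ProcessPoolExecutor() as executor:
        result = list(executor.map(calculate, range(4)))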

I believe that ProcessPoolExecutor is meant to eventually replace multiprocessing.Pool, according to this statement made by Jesse Noller (a Python core contributor), when asked why Python has both APIs:

Brian and I need to work on the consolidation we intend(ed) to occur as people got comfortable with the APIs. My eventual goal is to remove anything but the basic multiprocessing.Process/Queue stuff out of MP and into concurrent.* and support threading backends for it.

For now, ProcessPoolExecutor is mostly doing the exact same thing as multiprocessing.Pool with a simpler (and more limited) API. If you can get away with using ProcessPoolExecutor, use that, because I think it's more likely to get enhancements in the long term. Note that you can use all the helpers from multiprocessing with ProcessPoolExecutor, like Lock, Queue, Manager, etc., so needing those isn't a reason to use multiprocessing.Pool.
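
For example, a Manager proxy works with ProcessPoolExecutor just like it does with multiprocessing.Pool. This is only an illustrative sketch; the record function is made up:

from concurrent.futures import ProcessPoolExecutor
from multiprocessing import Manager

def record(shared, value):
    # Manager proxies are picklable, so they can be passed to workers.
    shared.append(value * 2)

if __name__ == '__main__':
    with Manager() as manager:
        shared = manager.list()
        with ProcessPoolExecutor() as executor:
            futures = [executor.submit(record, shared, i) for i in range(4)]
            for future in futures:
                future.result()  # re-raise any worker exceptions
        print(list(shared))  # e.g. [0, 2, 4, 6], order may vary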

There are some notable differences in their APIs and behavior though:

  1. If a Process in a ProcessPoolExecutor terminates abruptly, a BrokenProcessPool exception is raised, aborting any calls waiting for the pool to do work and preventing new work from being submitted. If the same thing happens to a multiprocessing.Pool, it will silently replace the process that terminated, but the work that was being done in that process will never be completed, which will likely cause the calling code to hang forever waiting for the work to finish. (A sketch of this failure mode follows the list.)

  2. If you are running Python 3.6 or lower, support for initializer/initargs is missing from ProcessPoolExecutor; it was only added in 3.7. (The second sketch after the list shows it in use.)

  3. There is no support in ProcessPoolExecutor for maxtasksperchild.

  4. concurrent.futures doesn't exist in Python 2.7, unless you manually install the backport.

  5. If you're running below Python 3.5, according to this question, multiprocessing.Pool.map outperforms ProcessPoolExecutor.map. Note that the performance difference is very small per work item, so you'll probably only notice a large difference if you're using map on a very large iterable. The reason is that multiprocessing.Pool batches the iterable passed to map into chunks, and then passes the chunks to the worker processes, which reduces the overhead of IPC between the parent and children. ProcessPoolExecutor always (or by default, starting in 3.5) passes one item from the iterable at a time to the children, which can lead to much slower performance with large iterables, due to the increased IPC overhead. The good news is that this issue is fixed in Python 3.5: the chunksize keyword argument was added to ProcessPoolExecutor.map, which can be used to specify a larger chunk size when you know you're dealing with large iterables. See this bug for more info. (The last sketch below shows chunksize in use.)
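
To make difference 1 concrete, here's a sketch of the failure mode; the os._exit call simulates a worker dying abruptly:

import os
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures.process import BrokenProcessPool

def die(_):
    os._exit(1)  # simulate a worker process being killed

if __name__ == '__main__':
    with ProcessPoolExecutor() as executor:
        try:
            list(executor.map(die, range(4)))
        except BrokenProcessPool:
            # multiprocessing.Pool would silently respawn the worker and
            # the caller would hang; here the failure is loud and immediate.
            print("pool is broken; no new work can be submitted")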
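For difference 2, the 3.7+ initializer/initargs hook looks like this (a minimal sketch; the function and variable names are made up):

from concurrent.futures import ProcessPoolExecutor

config = None

def init_worker(value):
    # Runs once in each worker process as it starts (Python 3.7+).
    global config
    config = value

def scaled(x):
    return x * config

if __name__ == '__main__':
    with ProcessPoolExecutor(initializer=init_worker, initargs=(10,)) as executor:
        print(list(executor.map(scaled, range(4))))  # [0, 10, 20, 30]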
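And for difference 5, the chunksize workaround on 3.5+; the batch size of 1000 is just illustrative, so tune it to your workload:

from concurrent.futures import ProcessPoolExecutor

def calculate(number):
    return number

if __name__ == '__main__':
    with ProcessPoolExecutor() as executor:
        # Send items to the workers in batches of 1000 rather than one
        # at a time, which cuts the per-item IPC overhead (3.5+ only).
        result = list(executor.map(calculate, range(1000000), chunksize=1000))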

answered Sep 29 '22 by dano