I'm wondering about the way that Python's multiprocessing.Pool class works with map, imap, and map_async. My particular problem is that I want to map over an iterator that creates memory-heavy objects, and I don't want all of these objects to be generated into memory at the same time. I wanted to see whether the various map() functions would wring my iterator dry, or intelligently call next() only as the child processes slowly advanced, so I hacked up some tests like this:
import time
from multiprocessing import Pool

def g():
    for el in xrange(100):
        print el
        yield el

def f(x):
    time.sleep(1)
    return x*x

if __name__ == '__main__':
    pool = Pool(processes=4)    # start 4 worker processes
    go = g()
    g2 = pool.imap(f, go)
    g2.next()
And so on with map, imap, and map_async. This is the most flagrant example, however: simply calling next() a single time on g2 prints out all the elements from my generator g(), whereas if imap were doing this 'lazily' I would expect it to call go.next() only once, and therefore print out only '0'.
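For contrast, here's what I mean by 'lazy' (a quick sketch using the same g and f as above): a plain single-process itertools.imap only advances the generator when I ask for the next result, so only '0' gets printed:

    import itertools

    lazy = itertools.imap(f, g())   # nothing is consumed yet
    lazy.next()                     # calls g()'s next() once, so only '0' is printed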
Can someone clear up what is happening, and if there is some way to have the process pool 'lazily' evaluate the iterator as needed?
Thanks,
Gabe
Let's look at the end of the program first.
The multiprocessing module uses atexit to call multiprocessing.util._exit_function when your program ends. If you remove g2.next(), your program ends quickly.

The _exit_function eventually calls Pool._terminate_pool. The main thread changes the state of pool._task_handler._state from RUN to TERMINATE. Meanwhile the pool._task_handler thread is looping in Pool._handle_tasks and bails out when it reaches the condition

    if thread._state:
        debug('task handler found thread._state != RUN')
        break

(See /usr/lib/python2.6/multiprocessing/pool.py)
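If you want to watch these internal threads at work, one simple way (a sketch, not something the explanation above requires) is to turn on multiprocessing's debug logging before creating the pool; the debug(...) messages quoted here then show up on stderr:

    import logging
    import multiprocessing

    # Send multiprocessing's internal debug() messages to stderr so the task
    # handler's activity (and its shutdown) is visible while the script runs.
    logger = multiprocessing.log_to_stderr()
    logger.setLevel(logging.DEBUG)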
This is what stops the task handler from fully consuming your generator, g(). If you look in Pool._handle_tasks you'll see

    for i, task in enumerate(taskseq):
        ...
        try:
            put(task)
        except IOError:
            debug('could not put task on queue')
            break

This is the code which consumes your generator. (taskseq is not exactly your generator, but as taskseq is consumed, so is your generator.)
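As a standalone illustration of that parenthetical (the names below are made up for the example, not the real pool.py internals): taskseq is essentially a generator expression wrapped around your generator, and draining the wrapper drains the original:

    def g():
        for el in xrange(3):
            print el
            yield el

    go = g()
    wrapped = ((i, x) for i, x in enumerate(go))   # a wrapper, like taskseq
    list(wrapped)   # consuming the wrapper prints 0, 1, 2 -- g() is drained too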
In contrast, when you call g2.next() the main thread calls IMapIterator.next, and waits when it reaches self._cond.wait(timeout).

That the main thread is waiting instead of calling _exit_function is what allows the task handler thread to run normally, which means fully consuming the generator as it puts tasks in the workers' inqueue in the Pool._handle_tasks function.
The bottom line is that all the Pool map functions consume the entire iterable they are given. If you'd like to consume the generator in chunks, you could do this instead:
import multiprocessing as mp
import itertools
import time

def g():
    for el in xrange(50):
        print el
        yield el

def f(x):
    time.sleep(1)
    return x * x

if __name__ == '__main__':
    pool = mp.Pool(processes=4)    # start 4 worker processes
    go = g()
    result = []
    N = 11
    while True:
        g2 = pool.map(f, itertools.islice(go, N))
        if g2:
            result.extend(g2)
            time.sleep(1)
        else:
            break
    print(result)
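If you want to reuse that pattern, here is a rough helper of my own devising (a sketch, not part of multiprocessing) that wraps the same itertools.islice idea in a generator, so at most chunksize items are pulled from the source at a time. Note that each pool.map call blocks until the whole chunk finishes, so one slow item delays the start of the next chunk:

    import itertools

    def map_in_chunks(pool, func, iterable, chunksize):
        # Pull at most `chunksize` items, map them with the pool, and yield
        # the results before pulling the next chunk from `iterable`.
        it = iter(iterable)
        while True:
            chunk = list(itertools.islice(it, chunksize))
            if not chunk:
                break
            for value in pool.map(func, chunk):
                yield value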
I had this problem too and was disappointed to learn that map consumes all its elements. I coded a function which consumes the iterator lazily using the Queue data type in multiprocessing. This is similar to what @unutbu describes in a comment to his answer, but, as he points out, it suffers from having no callback mechanism for re-loading the Queue. The Queue data type instead exposes a timeout parameter, and I've used 100 milliseconds to good effect.
from multiprocessing import Process, Queue, cpu_count
from Queue import Full as QueueFull
from Queue import Empty as QueueEmpty

def worker(recvq, sendq):
    for func, args in iter(recvq.get, None):
        result = func(*args)
        sendq.put(result)

def pool_imap_unordered(function, iterable, procs=cpu_count()):
    # Create queues for sending/receiving items from iterable.
    sendq = Queue(procs)
    recvq = Queue()

    # Start worker processes.
    for rpt in xrange(procs):
        Process(target=worker, args=(sendq, recvq)).start()

    # Iterate iterable and communicate with worker processes.
    send_len = 0
    recv_len = 0
    itr = iter(iterable)

    try:
        value = itr.next()
        while True:
            try:
                sendq.put((function, value), True, 0.1)
                send_len += 1
                value = itr.next()
            except QueueFull:
                while True:
                    try:
                        result = recvq.get(False)
                        recv_len += 1
                        yield result
                    except QueueEmpty:
                        break
    except StopIteration:
        pass

    # Collect all remaining results.
    while recv_len < send_len:
        result = recvq.get()
        recv_len += 1
        yield result

    # Terminate worker processes.
    for rpt in xrange(procs):
        sendq.put(None)
This solution has the advantage of not batching requests to Pool.map: one slow worker cannot block the others from making progress. YMMV. Note that you may want to use a different object to signal termination for the workers; in the example, I've used None.
Tested on "Python 2.7 (r27:82525, Jul 4 2010, 09:01:59) [MSC v.1500 32 bit (Intel)] on win32"