
Deadlock in concurrent.futures code

I've been trying to parallelise some code using concurrent.futures.ProcessPoolExecutor but keep hitting strange deadlocks that don't occur with ThreadPoolExecutor. A minimal example:

from concurrent import futures

def test():
    pass

with futures.ProcessPoolExecutor(4) as executor:
    for i in range(100):
        print('submitting {}'.format(i))
        executor.submit(test)

In Python 3.2.2 (on 64-bit Ubuntu), this hangs consistently after submitting all the jobs, and it seems to happen whenever the number of jobs submitted is greater than the number of workers. If I replace ProcessPoolExecutor with ThreadPoolExecutor it works flawlessly.
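For illustration, a minimal variation consistent with that observation (the variation is mine, not from the original post): keeping the number of jobs at or below the number of workers appears to avoid the hang.

from concurrent import futures

def test():
    pass

# 4 jobs for 4 workers: with no more jobs than workers,
# the script exits normally even on the affected 3.2.2 setup
with futures.ProcessPoolExecutor(4) as executor:
    for i in range(4):
        executor.submit(test)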

As an attempt to investigate, I gave each future a callback to print the value of i:

from concurrent import futures

def test():
    pass

with futures.ProcessPoolExecutor(4) as executor:
    for i in range(100):
        print('submitting {}'.format(i))
        future = executor.submit(test)

        def callback(f):
            print('callback {}'.format(i))
        future.add_done_callback(callback)

This just confused me even more: the value of i printed by the callback is the value at the time the callback is called, rather than at the time it was defined (so I never see callback 0 but I get lots of callback 99s). Again, ThreadPoolExecutor prints the expected values.

Wondering if this might be a bug, I tried a recent development version of Python. Now the code at least seems to terminate, but I still get the wrong value of i printed out.

So can anyone explain:

  • what happened to ProcessPoolExecutor between Python 3.2 and the current dev version that apparently fixed this deadlock

  • why the 'wrong' value of i is being printed

EDIT: as jukiewicz pointed out below, of course printing i will print the value at the time the callback is called; I don't know what I was thinking... If I pass a callable object with the value of i as one of its attributes, that works as expected.
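For reference, a minimal sketch of that callable-object workaround (the class name here is illustrative, not from the original post):

class Callback:
    def __init__(self, i):
        self.i = i  # capture the value of i at submission time

    def __call__(self, f):
        print('callback {}'.format(self.i))

# inside the submission loop:
#     future.add_done_callback(Callback(i))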

EDIT: a little more information: all of the callbacks are executed, so it looks like it is executor.shutdown (called by executor.__exit__) that is unable to tell that the processes have completed. This seems to be completely fixed in the current Python 3.3, but there have been a lot of changes to multiprocessing and concurrent.futures, so I don't know exactly what fixed it. Since I can't use 3.3 (it doesn't seem to be compatible with either the release or dev versions of numpy), I tried simply copying its multiprocessing and concurrent packages across to my 3.2 installation, which seems to work fine. Still, it seems a little weird that, as far as I can tell, ProcessPoolExecutor is completely broken in the latest release version yet nobody else seems to be affected.

asked Feb 09 '12 by James

1 Answer

I modified the code as follows, which solved both problems. The callback function was defined as a closure, so it would use the updated value of i every time it was called. As for the deadlock, that is likely caused by shutting down the executor while there are still pending tasks. Waiting for the futures to complete solves that, too.

from concurrent import futures

def test(i):
    return i

def callback(f):
    # read the result from the finished future itself,
    # not from a closed-over loop variable
    print('callback {}'.format(f.result()))


with futures.ProcessPoolExecutor(4) as executor:
    fs = []
    for i in range(100):
        print('submitting {}'.format(i))
        future = executor.submit(test, i)  # pass i to the task explicitly
        future.add_done_callback(callback)
        fs.append(future)

    # block until all tasks have finished before the executor shuts down
    for _ in futures.as_completed(fs): pass
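For what it's worth, futures.wait is an equivalent way to block until everything has finished; a sketch of what could replace the as_completed loop above:

# block until every submitted future has finished
# (ALL_COMPLETED is also the default)
done, not_done = futures.wait(fs, return_when=futures.ALL_COMPLETED)
assert not not_done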

UPDATE: oh, sorry, I hadn't read your updates; this seems to have been solved already.

answered Oct 31 '22 by bereal