Why doesn't pipe.close() cause EOFError during pipe.recv() in python multiprocessing?

I am sending simple objects between processes using pipes with Python's multiprocessing module. The documentation states that if a pipe has been closed, calling pipe.recv() should raise EOFError. Instead, my program is just blocking on recv() and never detects that the pipe has been closed.

Example:

import multiprocessing as m

def fn(pipe):
    print "recv:", pipe.recv()
    print "recv:", pipe.recv()

if __name__ == '__main__':
    p1, p2 = m.Pipe()
    pr = m.Process(target=fn, args=(p2,))
    pr.start()

    p1.send(1)
    p1.close()  ## should generate EOFError in remote process

And the output looks like:

recv: 1
<blocks here>

Can anyone tell me what I'm doing wrong? I see this problem on Linux and on Windows under Cygwin, but not with the native Windows Python.

asked Jul 03 '11 by Luke

2 Answers

The forked (child) process inherits a copy of its parent's file descriptors. So even though the parent calls close() on p1, the child still holds an open copy of that end, the underlying kernel object is never released, and recv() keeps blocking instead of raising EOFError.

To fix this, the child also needs to close its inherited copy of the parent's end of the pipe (p1), like so:

def fn(pipe):
    # Close the inherited copy of the parent's end; p1 is visible here
    # because the forked child shares the parent's module globals.
    p1.close()
    print "recv:", pipe.recv()
    print "recv:", pipe.recv()   # raises EOFError once the parent closes p1

answered Sep 17 '22 by Nemo

Building on this solution, I've observed that os.close(pipe.fileno()) breaks the pipe immediately, whereas pipe.close() does not take effect until every process/sub-process holding the descriptor has ended. You could try that instead. Warning: you cannot call pipe.close() afterwards, and pipe.closed still returns False. So to clean up you could do this:

import os

os.close(pipe.fileno())    # release the underlying descriptor immediately
pipe = open('/dev/null')   # rebind the name so the old Connection object is dropped
pipe.close()               # leaves pipe.closed == True
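
If you need this in more than one place, a small helper along these lines may be convenient (force_close is my own name, and os.devnull simply stands in for '/dev/null'); it just packages the snippet above:

import os

def force_close(conn):
    # Close the underlying descriptor right away, then hand back a closed
    # dummy file object so that a later .closed check reads True.
    os.close(conn.fileno())
    dummy = open(os.devnull)
    dummy.close()
    return dummy

# usage: p1 = force_close(p1)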

answered Sep 17 '22 by Le Droid