I am sending simple objects between processes using pipes with Python's multiprocessing module. The documentation states that if a pipe has been closed, calling pipe.recv() should raise EOFError. Instead, my program is just blocking on recv() and never detects that the pipe has been closed.
Example:
import multiprocessing as m

def fn(pipe):
    print "recv:", pipe.recv()
    print "recv:", pipe.recv()

if __name__ == '__main__':
    p1, p2 = m.Pipe()
    pr = m.Process(target=fn, args=(p2,))
    pr.start()
    p1.send(1)
    p1.close()  ## should generate EOFError in remote process
And the output looks like:
recv: 1
<blocks here>
Can anyone tell me what I'm doing wrong? I have this problem on Linux and on Windows/Cygwin, but not with the native Windows Python.
The forked (child) process inherits a copy of its parent's file descriptors. So even though the parent calls close() on p1, the child still holds an open copy of that end, the underlying kernel pipe object is never released, and the child's recv() therefore never sees end-of-file.
To fix this, you need to close the write end of the pipe (p1) in the child as well, like so:
def fn(pipe):
    p1.close()  # close the child's inherited copy of the parent's end
    print "recv:", pipe.recv()
    print "recv:", pipe.recv()
Building on this solution, I've observed that os.close(pipe.fileno()) breaks the pipe immediately, whereas pipe.close() doesn't take effect until all processes/sub-processes that inherited the descriptor have exited. You could try that instead.
Warning: you cannot call pipe.close() afterwards, and pipe.closed still returns False. So, to be cleaner, you could do this:

import os

os.close(pipe.fileno())   # release the descriptor immediately
pipe = open('/dev/null')  # rebind the name to a throwaway file object...
pipe.close()              # ...so that pipe.closed now returns True
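As a rough illustration of the difference described above (a sketch, assuming a Unix-like system with /dev/null available; p1 is just an example name):

import os
import multiprocessing as m

p1, p2 = m.Pipe()
os.close(p1.fileno())     # the underlying descriptor is released right away
print p1.closed           # still False -- the Connection object was never told
p1 = open('/dev/null')    # rebind the name to a throwaway file object
p1.close()
print p1.closed           # now True, so later checks of .closed behave sensibly

Note that the original Connection object still thinks it owns the descriptor, so this is a workaround rather than a clean close.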