Based on my experiments I'm guessing the answer to this is no, but perhaps it is possible with some changes to the futures module.

I would like to submit a worker that itself creates an executor and submits work, and then return that second future to the main process. I have this MWE, which does not work, most likely because the f2 object becomes disassociated from its parent executor when it is sent over via multiprocessing. (It does work if both executors are ThreadPoolExecutor, because then the f2 object is never copied.)
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
import time

def job1():
    try:
        ex2 = ThreadPoolExecutor()
        time.sleep(2)
        f2 = ex2.submit(job2)
    finally:
        ex2.shutdown(wait=False)
    return f2

def job2():
    time.sleep(2)
    return 'done'

try:
    ex1 = ProcessPoolExecutor()
    f1 = ex1.submit(job1)
finally:
    ex1.shutdown(wait=False)

print('f1 = {!r}'.format(f1))
f2 = f1.result()
print('f1 = {!r}'.format(f1))
print('f2 = {!r}'.format(f2))
My question is: is there any safe way to send a future object across a multiprocessing Pipe and receive the value when it is finished? It seems like I might need to set up another executor-like construct that listens for results over another Pipe.
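For what it's worth, here is a minimal sketch of that listener idea. Rather than shipping the Future itself, the child process sends the finished value through a multiprocessing.Pipe, and the parent hands back a fresh local Future that a background thread completes when the value arrives. `job2` mirrors the MWE; `worker` and `submit_via_pipe` are hypothetical names for this sketch, not standard-library API.

```python
import threading
import time
from concurrent.futures import Future
from multiprocessing import Pipe, Process


def job2():
    time.sleep(0.5)
    return 'done'


def worker(conn):
    # Runs in the child process: compute the value and push it through the pipe.
    conn.send(job2())
    conn.close()


def submit_via_pipe():
    parent_conn, child_conn = Pipe()
    proc = Process(target=worker, args=(child_conn,))
    proc.start()

    # A local Future, never pickled, so it stays attached to this process.
    local_future = Future()

    def listen():
        # Block on the pipe, then complete the parent-side Future.
        local_future.set_result(parent_conn.recv())
        proc.join()

    threading.Thread(target=listen, daemon=True).start()
    return local_future


if __name__ == '__main__':
    f = submit_via_pipe()
    print(f.result())  # prints 'done'
```

The key point is that only the picklable result crosses the process boundary; the Future on each side stays local to its own process.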
From the asyncio documentation:

class asyncio.Future(*, loop=None)
A Future represents an eventual result of an asynchronous operation. Not thread-safe. Future is an awaitable object. Coroutines can await on Future objects until they either have a result or an exception set, or until they are cancelled.
ThreadPoolExecutor is an Executor subclass that uses a pool of at most max_workers threads to execute calls asynchronously. All threads enqueued to ThreadPoolExecutor will be joined before the interpreter can exit.
My current setup is Ubuntu 16.04.5 LTS and Python 3.6.5. I received the following error when running the code posted above:
f2 = f1.result()
followed by
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
Looking through the Python documentation, I found under 17.4.3. ProcessPoolExecutor that
"Calling Executor or Future methods from a callable submitted to a ProcessPoolExecutor will result in deadlock."
Further, under
class concurrent.futures.ProcessPoolExecutor(max_workers=None)
I also found the following:
"Changed in version 3.3: When one of the worker processes terminates abruptly, a BrokenProcessPool error is now raised. Previously, behaviour was undefined but operations on the executor or its futures would often freeze or deadlock."
Further, ProcessPoolExecutor side-steps the Global Interpreter Lock by using separate processes rather than threads, which means that everything passed between processes, including the return value of job1, must be pickled.
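You can see why the Future itself cannot survive that trip: a concurrent.futures.Future holds a threading.Condition (and therefore thread locks) internally, and pickle refuses to serialize locks. A quick check:

```python
import pickle
from concurrent.futures import Future

try:
    pickle.dumps(Future())
except TypeError as exc:
    # pickle rejects the lock objects inside the Future's internal Condition
    print('cannot pickle a Future:', exc)
```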
To answer your question on whether there is any safe way to send a future object across a multiprocessing pipe and receive it on the other end: with the standard library alone, I would say no.
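The simplest standard-library workaround is to never let the Future cross the process boundary at all: the worker waits on its own inner future and returns the plain, picklable result. This is a sketch reshaping your MWE, not the only possible design; note it does not hit the documented deadlock, because the inner executor is a new ThreadPoolExecutor, not the outer ProcessPoolExecutor.

```python
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor


def job2():
    time.sleep(0.5)
    return 'done'


def job1():
    # Resolve the inner future here, inside the worker process,
    # so only the picklable string travels back to the parent.
    with ThreadPoolExecutor() as ex2:
        f2 = ex2.submit(job2)
        return f2.result()


if __name__ == '__main__':
    with ProcessPoolExecutor() as ex1:
        f1 = ex1.submit(job1)
        print(f1.result())  # prints 'done'
```

The trade-off is that job1 now blocks until job2 finishes, so you lose the ability to hand the caller an unresolved future; if you need that, something like the pipe-listener construct you describe is required.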
References:
https://docs.python.org/3.6/library/concurrent.futures.html#processpoolexecutor
https://docs.python.org/3.6/glossary.html#term-global-interpreter-lock