I want to write a library that manages child processes with asyncio. I don't want to force my callers to be asynchronous themselves, so I'd prefer to get a new_event_loop, do a run_until_complete, and then close it. Ideally I'd like to do this without conflicting with any other asyncio stuff the caller might be doing.
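Roughly, the pattern I have in mind is something like this (just a sketch):

import asyncio

def run_blocking(coro):
    # Create a private loop, run the coroutine to completion on it, and
    # close it again, so the caller never has to be asynchronous itself.
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(coro)
    finally:
        loop.close()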
My problem is that waiting on subprocesses doesn't work unless you call set_event_loop, which attaches the internal watcher. But of course if I do that, I might conflict with other event loops in the caller. A workaround is to cache the caller's current loop (if any), and then call set_event_loop one more time when I'm done to restore the caller's state. That almost works. But if the caller is not an asyncio user, a side effect of calling get_event_loop is that I've now created a global loop that didn't exist before, and Python will print a scary warning if the program exits without calling close on that loop.
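So the restore dance looks roughly like this (again just a sketch; the names are made up):

import asyncio

def run_with_private_loop(coro):
    # Remember whatever loop the caller had (this may create one as a
    # side effect if they had none), install a private loop so the child
    # watcher attaches to it, and restore the caller's loop afterwards.
    callers_loop = asyncio.get_event_loop()
    private_loop = asyncio.new_event_loop()
    asyncio.set_event_loop(private_loop)
    try:
        return private_loop.run_until_complete(coro)
    finally:
        private_loop.close()
        asyncio.set_event_loop(callers_loop)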
The only meta-workaround I can think of is to do an atexit.register callback that closes the global loop. That won't conflict with the caller, because close is safe to call more than once, unless the caller has done something crazy like trying to start the global loop during exit. So it's still not perfect.
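Something like:

import atexit
import asyncio

# Close the global loop we may have implicitly created, so the exit
# warning goes away; close() is safe to call more than once.
global_loop = asyncio.get_event_loop()
atexit.register(global_loop.close)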
Is there a perfect solution to this?
What you're trying to achieve looks very much like ProcessPoolExecutor (in concurrent.futures).
Asynchronous caller:
from asyncio import coroutine, get_event_loop
from concurrent.futures import ProcessPoolExecutor

@coroutine
def in_process(callback, *args, executor=ProcessPoolExecutor()):
    # Run the blocking callback in a worker process without blocking the loop.
    loop = get_event_loop()
    result = yield from loop.run_in_executor(executor, callback, *args)
    return result
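For example (some_cpu_bound_function is just a stand-in for whatever blocking work the caller wants done in a child process):

def some_cpu_bound_function(x, y):      # placeholder for real work
    return x + y

@coroutine
def caller():
    result = yield from in_process(some_cpu_bound_function, 2, 3)
    return result

if __name__ == '__main__':
    loop = get_event_loop()
    print(loop.run_until_complete(caller()))    # prints 5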
Synchronous caller:
with ProcessPoolExecutor() as executor:
    future = executor.submit(callback, *args)
    result = future.result()
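Here future.result() blocks until the worker process has finished, and callback must be picklable, which in practice means a module-level function. Fleshed out a little:

from concurrent.futures import ProcessPoolExecutor

def callback(x):                        # module-level, so it can be pickled
    return x * x

if __name__ == '__main__':
    with ProcessPoolExecutor() as executor:
        future = executor.submit(callback, 4)
        result = future.result()        # blocks until the worker returns 16

Either way the executor manages its own worker processes, so your library never has to touch the global event loop at all.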