This code runs fine under regular CPython 3.5:

```python
import concurrent.futures

def job(text):
    print(text)

with concurrent.futures.ProcessPoolExecutor(1) as pool:
    pool.submit(job, "hello")
```
But if you run it as `python -m doctest myfile.py`, it hangs. Changing `submit(job` to `submit(print` makes it not hang, as does using `ThreadPoolExecutor` instead of `ProcessPoolExecutor`.
Why does it hang when run under doctest?
I think the issue is your `with` statement. When you write

```python
with concurrent.futures.ProcessPoolExecutor(1) as pool:
    pool.submit(job, "hello")
```

the pool is shut down as soon as the block exits. When you run this as the main process, the worker gets a chance to execute the job before shutdown completes. But when the file is imported as a module, the background worker never gets a chance to run: the implicit `shutdown` on the pool waits for the submitted work to finish, and hence a deadlock.
So the workaround you can use is:

```python
import concurrent.futures

def job(text):
    print(text)

pool = concurrent.futures.ProcessPoolExecutor(1)
pool.submit(job, "hello")

if __name__ == "__main__":
    pool.shutdown(True)
```
This will prevent the deadlock and will let you run `doctest` as well as `import` the module if you want.
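Alternatively (a sketch of the common idiom, not taken from the answer above): guard the pool usage itself with the main-module check, so that any process importing the file, whether doctest or a worker, never creates a pool at all:

```python
import concurrent.futures

def job(text):
    print(text)

# Guarding the pool itself means an importing process (doctest, or a
# worker re-importing this file) skips this block entirely; only a
# direct `python myfile.py` run creates the pool and submits work.
if __name__ == "__main__":
    with concurrent.futures.ProcessPoolExecutor(1) as pool:
        pool.submit(job, "hello")
```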
The problem is that importing a module acquires a lock (which lock depends on your Python version); see the docs for `imp.lock_held`.

Locks are shared over multiprocessing, so your deadlock occurs because the main process, while it is importing your module, launches and waits for a subprocess that in turn attempts to import your module, but can't acquire the lock because the module is currently being imported by the main process.
In step form:

1. The main process acquires the import lock and starts importing `myfile.py`.
2. While importing `myfile.py`, the main process launches a subprocess and waits for it.
3. The subprocess tries to import `myfile.py` (it has to import `myfile.py` because that is where your `job()` function is defined, which is why it didn't deadlock for `print()`).
4. The subprocess blocks on the import lock for `myfile.py`, while the main process blocks on the subprocess => Deadlock.
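The re-import in step 3 can be observed with a small illustration (the filename `demo_reimport.py` is hypothetical, and the `spawn` start method is forced here so each worker really does import the file; under `fork` the worker would inherit `job()` from the parent's memory instead):

```python
# demo_reimport.py (hypothetical filename): run as a script to watch the
# worker process import this module again in order to find job().
import concurrent.futures
import multiprocessing
import os

# Runs at import time, so it fires once per process that imports this file.
print("importing in pid", os.getpid())

def job(text):
    print(text)

if __name__ == "__main__":
    # Force the "spawn" start method so the worker must re-import the module.
    ctx = multiprocessing.get_context("spawn")
    with concurrent.futures.ProcessPoolExecutor(1, mp_context=ctx) as pool:
        pool.submit(job, "hello")
```

The import-time print appears once for the main process and once for the worker, confirming that each worker has to import the file where `job()` is defined. With the `__main__` guard in place, that re-import finishes immediately; without it, the importing worker would try to create a pool of its own.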