This multiprocessing code works as expected: it creates a pool of 4 Python processes and uses them to print the numbers 0 through 39, with a one-second delay after each print.
import multiprocessing
import time

def job(num):
    print num
    time.sleep(1)

pool = multiprocessing.Pool(4)
lst = range(40)
for i in lst:
    pool.apply_async(job, [i])
pool.close()
pool.join()
However, when I try to use a multiprocessing.Lock to prevent multiple processes from printing to standard out, the program just exits immediately without any output.
import multiprocessing
import time

def job(lock, num):
    lock.acquire()
    print num
    lock.release()
    time.sleep(1)

pool = multiprocessing.Pool(4)
l = multiprocessing.Lock()
lst = range(40)
for i in lst:
    pool.apply_async(job, [l, i])
pool.close()
pool.join()
Why does the introduction of a multiprocessing.Lock make this code not work?
Update: It works when the lock is declared globally, as opposed to the code above, which passes the lock as an argument (Python's multiprocessing documentation shows locks being passed as arguments). I ran a few non-definitive tests to check that the global lock actually works. The code below declares the lock globally instead of passing it as an argument.
import multiprocessing
import time

l = multiprocessing.Lock()

def job(num):
    l.acquire()
    print num
    l.release()
    time.sleep(1)

pool = multiprocessing.Pool(4)
lst = range(40)
for i in lst:
    pool.apply_async(job, [i])
pool.close()
pool.join()
If you change pool.apply_async to pool.apply, you get this exception:
Traceback (most recent call last):
  File "p.py", line 15, in <module>
    pool.apply(job, [l, i])
  File "/usr/lib/python2.7/multiprocessing/pool.py", line 244, in apply
    return self.apply_async(func, args, kwds).get()
  File "/usr/lib/python2.7/multiprocessing/pool.py", line 558, in get
    raise self._value
RuntimeError: Lock objects should only be shared between processes through inheritance
pool.apply_async is just hiding it: the same RuntimeError is raised when the pool tries to pickle the lock, but it is stored in the AsyncResult and only re-raised when you call get() on it. Since your example never calls get(), the failure passes silently and the program exits with no output. I hate to say this, but using a global variable is probably the simplest way for your example. Let's just hope the velociraptors don't get you.