 

When to call .join() on a process?

I am reading various tutorials on the multiprocessing module in Python, and am having trouble understanding why/when to call process.join(). For instance, I stumbled across this example:

nums = range(100000)
nprocs = 4

def worker(nums, out_q):
    """ The worker function, invoked in a process. 'nums' is a
        list of numbers to factor. The results are placed in
        a dictionary that's pushed to a queue.
    """
    outdict = {}
    for n in nums:
        outdict[n] = factorize_naive(n)
    out_q.put(outdict)

# Each process will get 'chunksize' nums and a queue to put his out
# dict into
out_q = Queue()
chunksize = int(math.ceil(len(nums) / float(nprocs)))
procs = []

for i in range(nprocs):
    p = multiprocessing.Process(
            target=worker,
            args=(nums[chunksize * i:chunksize * (i + 1)],
                  out_q))
    procs.append(p)
    p.start()

# Collect all results into a single result dict. We know how many dicts
# with results to expect.
resultdict = {}
for i in range(nprocs):
    resultdict.update(out_q.get())

# Wait for all worker processes to finish
for p in procs:
    p.join()

print resultdict

From what I understand, process.join() will block the calling process until the process whose join method was called has completed execution. I also believe that the child processes which have been started in the above code example complete execution upon completing the target function, that is, after they have pushed their results to the out_q. Lastly, I believe that out_q.get() blocks the calling process until there are results to be pulled. Thus, if you consider the code:

resultdict = {}
for i in range(nprocs):
    resultdict.update(out_q.get())

# Wait for all worker processes to finish
for p in procs:
    p.join()

the main process is blocked by the out_q.get() calls until every single worker process has finished pushing its results to the queue. Thus, by the time the main process exits the for loop, each child process should have completed execution, correct?

If that is the case, is there any reason to call p.join() at that point? Haven't all the worker processes already finished by then, and if so, how do the join() calls make the main process "wait for all worker processes to finish"? I ask mainly because I have seen this pattern in several different examples, and I am curious whether I have failed to understand something.
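
To make sure I have the blocking behavior right, here is a minimal sketch of what I mean (the worker and its timings are my own invention, not from the tutorial; Python 2 syntax to match the code above):

import time
import multiprocessing
from multiprocessing import Queue

def slow_worker(out_q):
    time.sleep(2)       # simulate some work
    out_q.put('done')   # the child exits shortly after this put

if __name__ == '__main__':
    out_q = Queue()
    p = multiprocessing.Process(target=slow_worker, args=(out_q,))
    p.start()
    print out_q.get()   # blocks for ~2 seconds, until the child puts a result
    p.join()            # returns almost immediately: the work is already done
    print p.exitcode    # 0 -- the child terminated normally

In this toy case join() looks redundant as a synchronization point, which is exactly what I am asking about.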

asked Jan 20 '13 by Justin



1 Answer

Try to run this:

import math
import time
from multiprocessing import Queue
import multiprocessing

def factorize_naive(n):
    factors = []
    for div in range(2, int(n**.5)+1):
        while not n % div:
            factors.append(div)
            n //= div
    if n != 1:
        factors.append(n)
    return factors

nums = range(100000)
nprocs = 4

def worker(nums, out_q):
    """ The worker function, invoked in a process. 'nums' is a
        list of numbers to factor. The results are placed in
        a dictionary that's pushed to a queue.
    """
    outdict = {}
    for n in nums:
        outdict[n] = factorize_naive(n)
    out_q.put(outdict)

# Each process will get 'chunksize' nums and a queue to put his out
# dict into
out_q = Queue()
chunksize = int(math.ceil(len(nums) / float(nprocs)))
procs = []

for i in range(nprocs):
    p = multiprocessing.Process(
            target=worker,
            args=(nums[chunksize * i:chunksize * (i + 1)],
                  out_q))
    procs.append(p)
    p.start()

# Collect all results into a single result dict. We know how many dicts
# with results to expect.
resultdict = {}
for i in range(nprocs):
    resultdict.update(out_q.get())

time.sleep(5)

# Wait for all worker processes to finish
for p in procs:
    p.join()

print resultdict

time.sleep(15)

Then open the task manager. You should see the four subprocesses go into the zombie state for some seconds before being reaped (by the join() calls):

[screenshot: process list showing the four subprocesses in the zombie state]

In more complex situations the child processes could stay in the zombie state forever (like the situation you were asking about in another question), and if you create enough child processes you could fill the process table, causing trouble for the OS (which may kill your main process to avoid failures).
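
As a minimal sketch of the reaping behavior (a toy example with made-up names, in the same Python 2 syntax as the code above, not part of the original program): a child that has exited stays a zombie until its parent waits on it, which is what join() does.

import time
import multiprocessing

def quick_job():
    pass                # the child exits almost immediately

if __name__ == '__main__':
    p = multiprocessing.Process(target=quick_job)
    p.start()
    time.sleep(5)       # the child is dead by now but not yet reaped:
                        # 'ps' shows it as <defunct> (a zombie)
    p.join()            # waits for the child and reaps it
    print p.exitcode    # 0 -- the exit status is available after the join
    time.sleep(5)       # during this sleep the zombie entry is gone

If the parent never joins the child (and never polls it in some other way), the kernel keeps the dead child's process-table entry around, which is how the process table can fill up.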

answered Oct 17 '22 by Bakuriu