 

Proper way to use multiprocessing.Pool in a nested loop


I am using multiprocessing.Pool() to speed up an "embarrassingly parallel" loop. I actually have a nested loop, and am using multiprocessing.Pool to speed up the inner loop. For example, without parallelizing the loop, my code would be as follows:

import numpy as np

outer_array = range(N)     # placeholder: indices of the outer dimension
inner_array = range(M)     # placeholder: indices of the inner dimension
output = np.empty((M, N))  # e.g. a 2-D array to fill in

for i in outer_array:
    for j in inner_array:
        output[j][i] = full_func(j, i)

With parallelizing:

import multiprocessing
from functools import partial
import numpy as np

outer_array = range(N)
inner_array = range(M)
output = np.empty((M, N))

for i in outer_array:
    partial_func = partial(full_func, i=i)  # bind the outer index (full_func's second argument)
    pool = multiprocessing.Pool()
    output[:, i] = pool.map(partial_func, inner_array)
    pool.close()

My main question is whether this is correct: should I be creating the multiprocessing.Pool() inside the loop, or should I instead create the pool outside the loop, i.e.:

pool = multiprocessing.Pool()
for i in outer_array:
    partial_func = partial(full_func, i=i)
    output[:, i] = pool.map(partial_func, inner_array)

Also, I am not sure if I should include the line pool.close() at the end of each loop iteration in the first parallelized example above; what would be the benefits of doing so?

Thanks!

asked Dec 04 '13 by shadowprice


2 Answers

Ideally, you should call the Pool() constructor exactly once - not over & over again. There are substantial overheads when creating worker processes, and you pay those costs every time you invoke Pool(). The processes created by a single Pool() call stay around! When they finish the work you've given to them in one part of the program, they stick around, waiting for more work to do.

As to Pool.close(), you should call that when - and only when - you're never going to submit more work to the Pool instance. So Pool.close() is typically called when the parallelizable part of your main program is finished. Then the worker processes will terminate when all work already assigned has completed.

It's also excellent practice to call Pool.join() to wait for the worker processes to terminate. Among other reasons, there's often no good way to report exceptions in parallelized code (exceptions occur in a context only vaguely related to what your main program is doing), and Pool.join() provides a synchronization point that can report some exceptions that occurred in worker processes that you'd otherwise never see.
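Putting those three points together with the question's nested loop, a minimal sketch of the recommended pattern might look like this (full_func here is a stand-in for the real per-element computation):

```python
import multiprocessing
from functools import partial

def full_func(j, i):
    # Stand-in for the expensive per-element computation.
    return j * i

def main():
    outer_array = range(4)
    inner_array = range(5)
    output = [[None] * len(outer_array) for _ in inner_array]

    # Create the pool exactly once; its workers are reused for
    # every outer iteration instead of being recreated each time.
    pool = multiprocessing.Pool()
    for i in outer_array:
        column = pool.map(partial(full_func, i=i), inner_array)
        for j, value in enumerate(column):
            output[j][i] = value

    # No more work will be submitted: close, then wait for the
    # workers to finish and terminate.
    pool.close()
    pool.join()
    return output

if __name__ == '__main__':
    print(main())
```

The close()/join() pair at the very end is the synchronization point described above; everything before it reuses the same worker processes.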

Have fun :-)

answered Sep 28 '22 by Tim Peters

import itertools
import multiprocessing as mp

def job(params):
    a, b = params
    return a * b

def multicore():
    a = range(1000)
    b = range(2000)
    # Flatten the nested loop into a single list of (a, b) pairs,
    # so one pool.map call covers the whole 2-D iteration.
    paramlist = list(itertools.product(a, b))
    print(paramlist[0])
    pool = mp.Pool(processes=4)
    res = pool.map(job, paramlist)
    for i in res:
        print(i)

if __name__ == '__main__':
    multicore()

how about this?
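The key idea in this answer is that itertools.product turns the two nested loops into one flat iterable of index pairs, which a single pool.map can then consume. A tiny sketch of what product yields:

```python
import itertools

# product(range(2), range(3)) yields every (a, b) pair in
# row-major order, exactly like the nested loop "for a: for b:".
pairs = list(itertools.product(range(2), range(3)))
print(pairs)
# → [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
```

Because the pairs arrive in this fixed order, the flat result list from pool.map can be reshaped back into the original 2-D layout afterwards.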

answered Sep 28 '22 by user9500792