I'm trying to accelerate my code, so I use:
from multiprocessing import Pool, cpu_count
pool = Pool(cpu_count() - 1)
print('Start multi process')
res = pool.map(max_def_in_circle, range(len(dataT)), All_index_in_c)
where All_index_in_c is a list of lists; here is a preview:
[[2, 12], [11, 25, 26, 27, 28, 29], [0, 3, 12], [], [21, 22, 44, 45, 46], ...]
and max_def_in_circle is a very basic function which works fine on its own.
But when I run this I get this error message:
<ipython-input-18-3bd316855b1c> in <module>
6 # pool.join()
7 print('Start multi process')
----> 8 res = pool.map(max_def_in_circle, range(len(dataT)), All_index_in_c)
/usr/lib/python3.8/multiprocessing/pool.py in map(self, func, iterable, chunksize)
362 in a list that is returned.
363 '''
--> 364 return self._map_async(func, iterable, mapstar, chunksize).get()
365
366 def starmap(self, func, iterable, chunksize=None):
/usr/lib/python3.8/multiprocessing/pool.py in _map_async(self, func, iterable, mapper, chunksize, callback, error_callback)
483
484 task_batches = Pool._get_tasks(func, iterable, chunksize)
--> 485 result = MapResult(self, chunksize, len(iterable), callback,
486 error_callback=error_callback)
487 self._taskqueue.put(
/usr/lib/python3.8/multiprocessing/pool.py in __init__(self, pool, chunksize, length, callback, error_callback)
795 self._value = [None] * length
796 self._chunksize = chunksize
--> 797 if chunksize <= 0:
798 self._number_left = 0
799 self._event.set()
TypeError: '<=' not supported between instances of 'list' and 'int'
But when I searched for this problem, several people seemed to be able to pass lists to pool.map. I don't understand why; maybe it is a version problem? If someone can help me, thanks.
As @DarrylG mentioned, pool.map only supports functions of one argument; its third positional parameter is chunksize, which is why your All_index_in_c list ends up being compared to an int. Use pool.starmap to pass multiple arguments to max_def_in_circle, e.g.
res = pool.starmap(max_def_in_circle, zip(range(len(dataT)), All_index_in_c))
The zip function pairs the i-th entries of the two iterables, so each call receives one index together with its list of indices, and starmap unpacks that pair into the two arguments of max_def_in_circle.
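For a runnable sketch (the dataT values and the body of max_def_in_circle below are made up purely for illustration, since the original function isn't shown):
from multiprocessing import Pool, cpu_count

# Dummy data and function, only to illustrate the starmap call;
# replace them with your real dataT and max_def_in_circle.
dataT = [10, 20, 30, 40, 50]
All_index_in_c = [[2], [0, 3], [], [1, 4], [2, 3]]

def max_def_in_circle(i, index_in_c):
    # Return the max of dataT at the given indices, or None if the list is empty.
    return max((dataT[j] for j in index_in_c), default=None)

if __name__ == '__main__':
    with Pool(cpu_count() - 1) as pool:
        res = pool.starmap(max_def_in_circle, zip(range(len(dataT)), All_index_in_c))
    print(res)  # [30, 40, None, 50, 40]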