I've successfully implemented a multiprocessed script on Windows, but the same script raises a "RuntimeError: already started" on Linux and halts execution. The script consists of the following "main.py" (some parts omitted for readability):
from multiprocessing import freeze_support
import mod2

if __name__ == '__main__':
    # MULTIPROCESSING STUFF
    freeze_support()
    # DO SOME STUFF
    # Call my multiprocessing function in the other module
    mod2.func(tileTS[0], label, areaconst)
And the "mod2.py" module:
import numpy as np
from multiprocessing import Pool
from functools import partial
import os, time

def func(ts, label, areaconst):
    # SETTING UP/LOADING SOME VARIABLES
    for idx in totImgs:
        img_ = myList[idx]
        p = Pool(2)
        result = p.map(partial(_mp_avg, var1=var1_, img=img_), range(totObjs))
        p.close()
        p.join()
        # MANAGE RESULTING VARIABLES
    return None

def _mp_avg(idx, img, var1):
    num = idx + 1
    arr = img[var1 == num]
    if np.isnan(arr).any():
        return np.nan
    else:
        return np.sum(arr)
This error is raised when the script executes the Pool.map call. The same code works flawlessly on Windows.
I'm using Ubuntu 18.04 and launching the Python 3.6.7 script from Visual Studio Code.
EDIT: added a screenshot of the runtime error(s).
The multiprocessing package offers both local and remote concurrency, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads. Due to this, the multiprocessing module allows the programmer to fully leverage multiple processors on a given machine. It runs on both Unix and Windows.
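As a minimal illustration of that idea (the worker function and values are invented for the example), each Process below runs in its own interpreter, so the workers are not serialized by the parent's GIL:

from multiprocessing import Process

def work(n):
    # Executes in a child process with its own interpreter and GIL
    print(n * n)

if __name__ == '__main__':
    procs = [Process(target=work, args=(i,)) for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()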
We can check if a process is alive via the multiprocessing.Process.is_alive() method.
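A small sketch (the sleeping worker is made up) showing how the flag changes over a process's lifetime:

from multiprocessing import Process
import time

def worker():
    time.sleep(1)  # stand-in for real work

if __name__ == '__main__':
    p = Process(target=worker)
    print(p.is_alive())   # False: not started yet
    p.start()
    print(p.is_alive())   # True: still running
    p.join()
    print(p.is_alive())   # False: finished and joined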
Daemon processes in Python: the multiprocessing module allows us to have daemon processes through its daemon flag. Daemon processes, i.e. processes running in the background, follow a similar concept to daemon threads. To execute a process in the background, we set its daemon flag to True.
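For example, a minimal sketch (the background loop is invented): a daemonic child is terminated automatically when the main process exits, so it should never hold resources that need a clean shutdown:

from multiprocessing import Process
import time

def background():
    while True:
        time.sleep(0.1)  # stand-in for periodic background work

if __name__ == '__main__':
    p = Process(target=background, daemon=True)  # daemonic: no explicit join required
    p.start()
    time.sleep(0.5)
    # On exit, the daemon process is killed automatically with the parent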
Python provides a mutual exclusion lock for use with processes via the multiprocessing.Lock class. An instance of the lock can be created and then acquired by processes before accessing a critical section, and released after the critical section. Only one process can have the lock at any time.
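A minimal sketch (worker and message are made up) where the lock serializes access to stdout, the critical section here:

from multiprocessing import Process, Lock

def report(lock, n):
    with lock:  # acquire before the critical section, release on exit
        print('process', n, 'in critical section')

if __name__ == '__main__':
    lock = Lock()
    procs = [Process(target=report, args=(lock, i)) for i in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()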
As pointed out by @Darkonaut, Visual Studio Code uses ptvsd as its debugger, which isn't fork-safe (https://github.com/Microsoft/ptvsd/issues/1046#issuecomment-443339930). Since the default start method on Linux is fork (os.fork()), the script raises a RuntimeError when executed from within VS Code. This does not happen on Windows, where the default start method is spawn. Solutions on Linux are:
Change the start method by calling the following once, inside the if __name__ == '__main__': guard and before any Pool or Process is created (see the placement sketch at the end of this answer):
multiprocessing.set_start_method("spawn")
Edit the code in VS Code but launch the script from the terminal.
Change IDE.
Wait for a fork-safe debugger update, which is reportedly in the works.
Check the following link for further information about contexts and start methods: https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
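As a sketch of the first option, using the question's own placeholders from "main.py" (tileTS, label, areaconst are the asker's elided variables), the call goes once at the top of the main guard, before any Pool is created:

import multiprocessing
from multiprocessing import freeze_support
import mod2

if __name__ == '__main__':
    multiprocessing.set_start_method("spawn")  # must be called at most once, before any Pool/Process
    freeze_support()
    # DO SOME STUFF
    mod2.func(tileTS[0], label, areaconst)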