I have a few callback functions that I'd like to launch as separate processes, and have them all terminate via a signal from the parent process.
My current way of doing this is to create a shared c_bool with multiprocessing.Value, set it to True, and then distribute it to all of my processes when they are created. My processes all run a while loop that checks the shared bool, like so:
while myC_bool.value: ...keep running...
I can then just switch the bool to False from my parent process, and all child processes will complete their final loop and exit.
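To make that concrete, here's a minimal sketch of the setup I'm describing (the worker function, flag name, process count, and timings are just for illustration):

```python
import time
from ctypes import c_bool
import multiprocessing

def worker(run_flag):
    # Each child keeps looping until the parent flips the shared flag to False.
    while run_flag.value:
        # ...keep running...
        time.sleep(0.1)

if __name__ == '__main__':
    run_flag = multiprocessing.Value(c_bool, True)
    procs = [multiprocessing.Process(target=worker, args=(run_flag,))
             for _ in range(4)]
    for p in procs:
        p.start()

    time.sleep(1)            # let the children do some work
    run_flag.value = False   # all children finish their current iteration and exit
    for p in procs:
        p.join()
```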
I've been told by many people, and have read in the docs, that one should try to avoid using shared memory with multiprocessing. I was told the best way to avoid it is to daemonize the processes, give them a custom signal handler, and send them a SIGINT/SIGTERM/etc.
My question is: if I use the bool exclusively to keep the loops alive, only ever altering its value from my parent process and only reading it from the child processes, is that a suitable way to make all of my child processes terminate quickly and safely? I feel like there is less overhead in having all the children just check the one shared bool than in sending x number of SIGINTs to them.
Would daemonizing be a better solution? If so, I'd like some help understanding why.
There are a lot of good reasons to go with your solution:
… and so on.
Keep in mind that, unless you can prove to yourself that multiprocessing and the underlying OS primitives, on every platform you care about, are guaranteed to work without synchronization here, you need to put a Lock or something else around every access to the shared bool. That isn't exactly complicated, but… once you've done that, using, e.g., an Event without the shared bool might be even simpler.
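For instance, a rough sketch of the Event version might look like this (the worker function, process count, and timings here are purely illustrative):

```python
import time
import multiprocessing

def worker(stop_event):
    # Loop until the parent sets the event; no explicit Lock is needed.
    while not stop_event.is_set():
        # ...keep running...
        time.sleep(0.1)

if __name__ == '__main__':
    stop_event = multiprocessing.Event()
    procs = [multiprocessing.Process(target=worker, args=(stop_event,))
             for _ in range(4)]
    for p in procs:
        p.start()

    time.sleep(1)        # let the children do some work
    stop_event.set()     # tell every child to finish its current iteration
    for p in procs:
        p.join()
```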
At any rate, if any of those were your reason, I'd say great, do it that way. But according to your question, you actually chose this because of performance:
I feel like there is less overhead for all the children to just look at the one shared bool, than to send x number of sigints to them
If that's your reason, you're almost certainly wrong. The children have to look at the shared bool (and acquire the shared lock!) every time through some loop, while a signal only has to be sent to each child once. So, your overhead is almost certainly going to be much higher this way.
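For comparison, here's a minimal POSIX-style sketch of the signal approach: each child keeps a process-local flag that its own signal handler flips, so the parent only has to send one SIGTERM per child at shutdown. The names and timings are illustrative, and on Windows you'd need Process.terminate() or some other mechanism rather than os.kill with SIGTERM.

```python
import os
import time
import signal
import multiprocessing

def worker():
    # Process-local flag, flipped by this process's own signal handler;
    # no memory is shared between the parent and the children.
    running = True

    def handle_term(signum, frame):
        nonlocal running
        running = False

    signal.signal(signal.SIGTERM, handle_term)
    while running:
        # ...keep running...
        time.sleep(0.1)

if __name__ == '__main__':
    procs = [multiprocessing.Process(target=worker) for _ in range(4)]
    for p in procs:
        p.start()

    time.sleep(1)                        # let the children do some work
    for p in procs:
        os.kill(p.pid, signal.SIGTERM)   # one signal per child, sent once
    for p in procs:
        p.join()
```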
But really, I can't imagine that the overhead of sending one signal per child process, or even grabbing an interprocess lock once per loop iteration per process, is anywhere close to a bottleneck in any useful program, so… why does the overhead even matter here in the first place? Do what makes the most sense, in the simplest way.