
Python: Non-Blocking + Non defunct process

I would like to create a parent process that spawns many child processes. Since the parent is only responsible for creating the children, it does not care about their exit statuses.

Since subprocess.call is blocking, it doesn't fit, so I use subprocess.Popen instead. However, Popen leaves a zombie (defunct) process behind once the child terminates (Link).

Is there a way to solve this problem?

Thanks in advance

Winston asked May 29 '13

2 Answers

There are a lot of ways to deal with this. The key point is that zombie / "defunct" processes exist so that the parent process can collect their statuses.

  1. As the creator of the process, you can announce your intent to ignore the status. The POSIX method is to set the flag SA_NOCLDWAIT (using sigaction). This is a bit of a pain to do in Python; but most Unix-like systems allow you to simply ignore SIGCHLD / SIGCLD (the spelling varies from one Unix-like system to another), which is easy to do in Python:

    import signal

    signal.signal(signal.SIGCHLD, signal.SIG_IGN)

  2. Or, if this is not available for some reason or does not work on your system, you can use an old stand-by trick: don't just fork once, fork twice. In the first child, fork a second child; in the second child, use execve (or similar) to run the desired program; and then in the first child, exit (with _exit). In the original parent, use wait or waitpid or whatever the OS provides, and collect the status of the first child.

    The reason this works is that the second child has now become an "orphan" (its parent, the first child, died and was collected by your original process). As an orphan it is handed over to a proxy parent (specifically, to "init") which is always wait-ing and hence collects all the zombies right away.

  3. In addition to the double fork, you can make your sub-processes live in their own separate session and/or give up controlling terminal access ("daemonize", in Unix-y terms). (This is a bit messy and OS-dependent; I've coded it before but for some corporate code I don't have access to now.)

  4. Finally, you could simply collect those processes periodically. If you're using the subprocess module, simply call the .poll function on each process, whenever it seems convenient. This will return None if the process is still running, and the exit status (having collected it) if it has finished. If some are still running, your main program can exit anyway while they keep running; at that point, they become orphaned, as in method #2 above.
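    A minimal, Unix-only sketch of the double fork (method #2 above); the function name is just an illustration:

    ```python
    import os

    def spawn_detached(argv):
        """Run argv without ever leaving a zombie, via the double-fork trick.

        Returns the exit status of the short-lived first child (normally 0).
        """
        pid = os.fork()
        if pid == 0:
            # First child: fork the grandchild that runs the real program.
            if os.fork() == 0:
                os.execvp(argv[0], argv)  # grandchild: become the target program
            os._exit(0)  # first child exits at once; grandchild is orphaned
        # Original parent: reap the first child immediately -- no zombie left.
        _, status = os.waitpid(pid, 0)
        return status
    ```

    The grandchild is re-parented to init (or the nearest subreaper), which reaps it when it exits, so the original parent never has to wait for it.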

The "ignore SIGCHLD" method is simple and easy but has the drawback of interfering with library routines that create and wait-for sub-processes. There's a work-around in Python 2.7 and later (http://bugs.python.org/issue15756) but it means the library routines can't see any failures in those sub-processes.

[Edit: http://bugs.python.org/issue1731717 is for p.wait(), where p is a process from subprocess.Popen; 15756 is specifically for p.poll(); but in any case if you don't have the fixes, you have to resort to methods 2, 3, or 4.]
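Method #4 can be sketched like this (the sleep commands just stand in for real children):

    import subprocess
    import time

    def reap_finished(procs):
        """Return only the still-running processes; poll() reaps finished ones."""
        return [p for p in procs if p.poll() is None]

    children = [subprocess.Popen(["sleep", "0"]) for _ in range(3)]
    while children:
        time.sleep(0.05)          # do real work here instead of sleeping
        children = reap_finished(children)

Because each poll() collects the exit status of any child that has finished, none of them ever lingers as a zombie.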

torek answered Oct 12 '22

After a process is terminated or killed, the operating system keeps it around until the parent collects its exit status. You can use the process's communicate() method to collect that status:

p = subprocess.Popen( ... )
p.terminate()
p.communicate()

Note that terminate() sends a signal the process may intercept and handle however it likes, including ignoring it. This matters because p.communicate() blocks until the process actually exits.

If you do not want this behavior, use p.kill() instead of p.terminate(); that signal cannot be intercepted by the process.
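For example (Unix; the sleep command is just a stand-in for a long-running child):

p = subprocess.Popen(["sleep", "60"])
p.kill()         # SIGKILL cannot be caught or ignored by the child
p.communicate()  # reaps the child, so no zombie is left behind
print(p.returncode)  # negative signal number: -9 (SIGKILL)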

If you want to use p.terminate() and still be sure the process has actually exited, you can use the psutil module to check on the process status.
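A sketch of that check, assuming the third-party psutil package is installed:

import subprocess
import time

import psutil  # third-party: pip install psutil

p = subprocess.Popen(["sleep", "60"])
p.terminate()
time.sleep(0.2)  # give the child time to exit; it is a zombie until reaped
status_before = psutil.Process(p.pid).status()  # psutil.STATUS_ZOMBIE
p.communicate()  # collect the exit status; the zombie disappears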

Waschbaer answered Oct 12 '22