I have some python automation which spawns telnet sessions that I log with the linux script command; there are two script process IDs (a parent and a child) for each logging session.

I need to solve a problem where, if the python automation script dies, the script sessions never close on their own; for some reason this is much harder than it should be.

So far, I have implemented watchdog.py (see bottom of the question), which daemonizes itself and polls the python automation script's PID in a loop. When it sees the python automation PID disappear from the server's process table, it attempts to kill the script sessions.
My problem is:

- script sessions always spawn two separate processes; one of the script sessions is the parent of the other script session.
- watchdog.py will not kill the child script sessions if I start the script sessions from the automation script (see AUTOMATION EXAMPLE, below).

AUTOMATION EXAMPLE (reproduce_bug.py):
import pexpect as px
from subprocess import Popen
import code
import time
import sys
import os

def read_pid_and_telnet(_child, addr):
    time.sleep(0.1)  # Give the OS time to write the PIDFILE
    # Read the PID in the PIDFILE
    fh = open('PIDFILE', 'r')
    pid = int(''.join(fh.readlines()))
    fh.close()
    time.sleep(0.1)
    # Clean up the PIDFILE
    os.remove('PIDFILE')
    _child.expect(['#', '\$'], timeout=3)
    _child.sendline('telnet %s' % addr)
    return str(pid)

pidlist = list()

child1 = px.spawn("""bash -c 'echo $$ > PIDFILE """
    """&& exec /usr/bin/script -f LOGFILE1.txt'""")
pidlist.append(read_pid_and_telnet(child1, '10.1.1.1'))

child2 = px.spawn("""bash -c 'echo $$ > PIDFILE """
    """&& exec /usr/bin/script -f LOGFILE2.txt'""")
pidlist.append(read_pid_and_telnet(child2, '10.1.1.2'))

cmd = "python watchdog.py -o %s -k %s" % (os.getpid(), ','.join(pidlist))
Popen(cmd.split(' '))
print "I started the watchdog with:\n %s" % cmd

time.sleep(0.5)
raise RuntimeError, "Simulated script crash. Note that script child sessions are hung"
Now an example of what happens when I run the automation above... note that PID 30017 spawns 30018 and PID 30020 spawns 30021. All the aforementioned PIDs are script sessions.
[mpenning@Hotcoffee Network]$ python reproduce_bug.py
I started the watchdog with:
python watchdog.py -o 30016 -k 30017,30020
Traceback (most recent call last):
  File "reproduce_bug.py", line 35, in <module>
    raise RuntimeError, "Simulated script crash. Note that script child sessions are hung"
RuntimeError: Simulated script crash. Note that script child sessions are hung
[mpenning@Hotcoffee Network]$
After I run the automation above, all the child script sessions are still running.
[mpenning@Hotcoffee Models]$ ps auxw | grep script
mpenning 30018 0.0 0.0 15832 508 ? S 12:08 0:00 /usr/bin/script -f LOGFILE1.txt
mpenning 30021 0.0 0.0 15832 516 ? S 12:08 0:00 /usr/bin/script -f LOGFILE2.txt
mpenning 30050 0.0 0.0 7548 880 pts/8 S+ 12:08 0:00 grep script
[mpenning@Hotcoffee Models]$
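If you want to confirm the parent / child pairing shown in the ps output, a short psutil sketch does it (the PID is hypothetical, taken from the listing above; get_children() is the pre-2.0 psutil spelling used elsewhere in this question, renamed children() in psutil >= 2.0):

import psutil

# Inspect one of the 'script' parents; 30017 is hypothetical -- substitute
# a PID from your own process table.
parent = psutil.Process(30017)
for child in parent.get_children():   # children() in psutil >= 2.0
    print child.pid, child.name       # e.g. 30018 script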
I am running the automation under Python 2.6.6, on a Debian Squeeze linux system (uname -a: Linux Hotcoffee 2.6.32-5-amd64 #1 SMP Mon Jan 16 16:22:28 UTC 2012 x86_64 GNU/Linux).
It seems that the daemon doesn't survive the spawning-process crash. How can I fix watchdog.py to close all the script sessions if the automation dies (as shown in the example above)?
A watchdog.py log that illustrates the problem (sadly, the PIDs do not coincide with the ones in the original question)...
[mpenning@Hotcoffee ~]$ cat watchdog.log
2012-02-22,15:17:20.356313 Start watchdog.watch_process
2012-02-22,15:17:20.356541 observe pid = 31339
2012-02-22,15:17:20.356643 kill pids = 31352,31356
2012-02-22,15:17:20.356730 seconds = 2
[mpenning@Hotcoffee ~]$
The problem essentially was a race condition. When I tried to kill the "parent" script processes, they had already died, coincident with the automation event...

To solve the problem... first, the watchdog daemon needed to identify the entire list of children to be killed before polling the observed PID (my original script attempted to identify children after the observed PID crashed). Next, I had to modify my watchdog daemon to allow for the possibility that some script processes might die along with the observed PID.
watchdog.py:
#!/usr/bin/python
"""
Implement a cross-platform watchdog daemon, which observes a PID and kills
other PIDs if the observed PID dies.

Example:
--------

watchdog.py -o 29322 -k 29345,29346,29348 -s 2

The command checks PID 29322 every 2 seconds and kills PIDs 29345, 29346, 29348
and their children, if PID 29322 dies.

Requires:
----------

 * https://github.com/giampaolo/psutil
 * http://pypi.python.org/pypi/python-daemon
"""

from optparse import OptionParser
import datetime as dt
import signal
import daemon
import logging
import psutil
import time
import sys
import os

class MyFormatter(logging.Formatter):
    converter = dt.datetime.fromtimestamp
    def formatTime(self, record, datefmt=None):
        ct = self.converter(record.created)
        if datefmt:
            s = ct.strftime(datefmt)
        else:
            t = ct.strftime("%Y-%m-%d %H:%M:%S")
            s = "%s,%03d" % (t, record.msecs)
        return s

def check_pid(pid):
    """Check for the existence of a unix / windows pid."""
    try:
        os.kill(pid, 0)  # Kill 0 raises OSError, if pid isn't there...
    except OSError:
        return False
    else:
        return True

def kill_process(logger, pid):
    try:
        psu_proc = psutil.Process(pid)
    except Exception, e:
        logger.debug('Caught Exception ["%s"] while looking up PID %s' % (e, pid))
        return False

    logger.debug('Sending SIGTERM to %s' % repr(psu_proc))
    psu_proc.send_signal(signal.SIGTERM)
    psu_proc.wait(timeout=None)
    return True

def watch_process(observe, kill, seconds=2):
    """Kill the process IDs listed in 'kill', when 'observe' dies."""
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)
    logfile = logging.FileHandler('%s/watchdog.log' % os.getcwd())
    logger.addHandler(logfile)
    formatter = MyFormatter(fmt='%(asctime)s %(message)s',
        datefmt='%Y-%m-%d,%H:%M:%S.%f')
    logfile.setFormatter(formatter)

    logger.debug('Start watchdog.watch_process')
    logger.debug(' observe pid = %s' % observe)
    logger.debug(' kill pids = %s' % kill)
    logger.debug(' seconds = %s' % seconds)

    children = list()
    # Get PIDs of all child processes...
    for childpid in kill.split(','):
        children.append(childpid)
        p = psutil.Process(int(childpid))
        for subpsu in p.get_children():
            children.append(str(subpsu.pid))

    # Poll observed PID...
    while check_pid(int(observe)):
        logger.debug('Poll PID: %s is alive.' % observe)
        time.sleep(seconds)
    logger.debug('Poll PID: %s is *dead*, starting kills of %s' % (observe,
        ', '.join(children)))

    for pid in children:
        # kill all child processes...
        kill_process(logger, int(pid))

    sys.exit(0)  # Exit gracefully

def run(observe, kill, seconds):
    with daemon.DaemonContext(detach_process=True,
        stdout=sys.stdout,
        working_directory=os.getcwd()):
        watch_process(observe=observe, kill=kill, seconds=seconds)

if __name__ == '__main__':
    parser = OptionParser()
    parser.add_option("-o", "--observe", dest="observe", type="int",
        help="PID to be observed", metavar="INT")
    parser.add_option("-k", "--kill", dest="kill",
        help="Comma separated list of PIDs to be killed",
        metavar="TEXT")
    parser.add_option("-s", "--seconds", dest="seconds", default=2, type="int",
        help="Seconds to wait between observations (default = 2)",
        metavar="INT")
    (options, args) = parser.parse_args()
    run(options.observe, options.kill, options.seconds)
Your problem is that script is not detached from the automation script after spawning, so it runs as its child, and when the parent dies it is left unmanaged.

To handle the python script's exit you can use the atexit module. To monitor child process exits you can use os.wait, or handle the SIGCHLD signal.
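A minimal sketch of the atexit idea, assuming it is added to reproduce_bug.py after pidlist is populated (the cleanup helper and reap_child handler are illustrative names, not part of the question's code). Note that atexit handlers run on normal interpreter exit and after unhandled exceptions, but not on SIGKILL or a hard interpreter crash, which is why an external watchdog can still be worth having:

import atexit
import signal
import os

def cleanup(pids):
    """Best-effort SIGTERM for any logging PIDs still alive at exit."""
    for pid in pids:
        try:
            os.kill(pid, signal.SIGTERM)
        except OSError:
            pass  # PID already gone

# pidlist is the list built in reproduce_bug.py above
atexit.register(cleanup, [int(pid) for pid in pidlist])

# Alternatively, get notified when a child dies instead of polling:
def reap_child(signum, frame):
    try:
        os.waitpid(-1, os.WNOHANG)  # Reap without blocking
    except OSError:
        pass  # No children left

signal.signal(signal.SIGCHLD, reap_child)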