Code:
# callee.py
import signal
import sys
import time

def int_handler(*args):
    for i in range(10):
        print('INTERRUPT', args)
    sys.exit()

if __name__ == '__main__':
    signal.signal(signal.SIGINT, int_handler)
    signal.signal(signal.SIGTERM, int_handler)
    while 1:
        time.sleep(1)
# caller.py
import subprocess
import sys

def wait_and_communicate(p):
    out, err = p.communicate(timeout=1)
    print('========out==========')
    print(out.decode() if out else '')
    print('========err==========')
    print(err.decode() if err else '')
    print('=====================')

if __name__ == '__main__':
    p = subprocess.Popen(
        ['/usr/local/bin/python3', 'callee.py'],
        stdout=sys.stdout,
        stderr=subprocess.PIPE,
    )
    while 1:
        try:
            wait_and_communicate(p)
        except KeyboardInterrupt:
            p.terminate()
            wait_and_communicate(p)
            break
        except subprocess.TimeoutExpired:
            continue
Simply execute caller.py and then press Ctrl+C, and the program will randomly raise RuntimeError: reentrant call inside <_io.BufferedWriter name='<stdout>'>. From the documentation I learned that signal handlers are called asynchronously, and in this case two signals, SIGINT (the Ctrl+C action) and SIGTERM (from p.terminate()), are delivered at nearly the same time, causing a race condition.
However, from this post I learned that the signal module doesn't execute the Python signal handler inside the low-level (C) handler. Instead, it sets a flag, and the interpreter checks the flag between bytecode instructions and then invokes the Python handler. In other words, while signal handlers may disrupt the control flow of the main thread, a bytecode instruction is always atomic.
This seems to contradict the result of my example program. As far as I can tell, print and the implicit _io.BufferedWriter are both implemented in C, so calling the print function should consume only one bytecode instruction (CALL_FUNCTION). I am confused: within one uninterrupted instruction on one thread, how can a function call be reentrant?
I'm using Python 3.6.2.
You might prefer to inhibit delivery of SIGINT to the child, so there's no race, perhaps by putting it in a different process group, or by having it ignore the signal. Then only SIGTERM from the parent would matter.
To reveal where it was interrupted, use this:

import dis

def int_handler(*args):
    sig_num, frame = args
    dis.dis(frame.f_code)
    print(frame.f_lasti)

The bytecode offsets in the left margin of the disassembly correspond to f_lasti, the offset of the last instruction executed.
Other items of interest include frame.f_lineno, frame.f_code.co_filename, and frame.f_code.co_names.
This issue becomes moot in Python 3.7.3, which no longer exhibits the symptom.
Signals are processed between opcodes (see eval_frame_handle_pending() in CPython's opcode evaluation loop), but not only there. print is a perfect example. It is implemented on top of _io_BufferedWriter_write_impl(), which has a structure like:

ENTER_BUFFERED()      => locks the buffer
PyErr_CheckSignals()  => invokes pending Python signal handlers
LEAVE_BUFFERED()      => unlocks the buffer

Through PyErr_CheckSignals(), the first print invokes another signal handler, which in this case contains another print. That second print runs ENTER_BUFFERED() again; because the buffer is already locked by the previous print in the first signal handler, the reentrant exception is thrown, as the snippet below shows.
// snippet of ENTER_BUFFERED
static int
_enter_buffered_busy(buffered *self)
{
    int relax_locking;
    PyLockStatus st;
    if (self->owner == PyThread_get_thread_ident()) {
        PyErr_Format(PyExc_RuntimeError,
                     "reentrant call inside %R", self);
        return 0;
    }
}

#define ENTER_BUFFERED(self) \
    ( (PyThread_acquire_lock(self->lock, 0) ? \
       1 : _enter_buffered_busy(self)) \
      && (self->owner = PyThread_get_thread_ident(), 1) )
P.S.
Reentrant Functions from Advanced Programming in the Unix Environment.
The Single UNIX Specification specifies the functions that are guaranteed to be safe to call from within a signal handler. These functions are reentrant and are called async-signal safe. Most functions are not reentrant, and print in Python belongs to this category.