I'm working with the GNU Radio framework, driving flowgraphs I generate to send/receive signals. These flowgraphs initialize and start, but they don't return control flow to my application:
import time

while time.time() < endtime:
    # invoke GRC flowgraph for 1st sequence
    if not seq1_sent:
        tb = send_seq_2.top_block()
        tb.Run(True)
        seq1_sent = True
        if time.time() < endtime:
            break
    # invoke GRC flowgraph for 2nd sequence
    if not seq2_sent:
        tb = send_seq_2.top_block()
        tb.Run(True)
        seq2_sent = True
        if time.time() < endtime:
            break
The problem is that only the first if statement invokes the flowgraph (which interacts with the hardware), and I'm stuck there. I could use a Thread, but I'm inexperienced with timing out threads in Python, and I doubt it's even possible, since killing threads doesn't seem to be part of the API. This script only has to work on Linux...
How do you properly handle blocking functions in Python, without killing the whole program? Another, more concrete example of the problem:
import signal, os, time

def handler(signum, frame):
    # print 'Signal handler called with signal', signum
    # raise IOError("Couldn't open device!")
    print "wait"
    time.sleep(3)

def foo():
    # Set the signal handler and a 3-second alarm
    signal.signal(signal.SIGALRM, handler)
    signal.alarm(3)

    # This open() may hang indefinitely
    fd = os.open('/dev/ttys0', os.O_RDWR)

    signal.alarm(0)  # Disable the alarm

foo()
print "hallo"
How do I still get to the print "hallo" line? ;)
Thanks, Marius
First of all - the use of signals should be avoided at all costs:
1) It may lead to a deadlock. SIGALRM may reach the process BEFORE the blocking syscall is entered (imagine super-high load in the system!), in which case the syscall will never be interrupted. Deadlock.
2) Playing with signals may have nasty non-local consequences. For example, syscalls in other threads may be interrupted, which is usually not what you want. Normally syscalls are restarted when a (non-deadly) signal is received, but setting up a signal handler automatically turns off this behavior for the whole process, or thread group so to speak. Check 'man siginterrupt' on that; a short sketch follows below.
Believe me - I've run into both problems before, and they are not fun at all.
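For completeness, here is a minimal sketch of what the man page describes: signal.siginterrupt() lets you restore the restarting behavior for a given signal after a handler has been installed (the handler itself is just illustrative).

import signal

def handler(signum, frame):
    print('got SIGALRM')

# Installing a handler via signal.signal() makes slow syscalls
# interruptible by that signal in CPython.
signal.signal(signal.SIGALRM, handler)

# Restore BSD-style restarting of slow syscalls for SIGALRM, so that
# reads/writes elsewhere in the process are transparently restarted
# instead of failing with EINTR.
signal.siginterrupt(signal.SIGALRM, False)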
In some cases the blocking can be avoided explicitly - I strongly recommend using select() and friends (see the select module in Python) to handle blocking writes and reads. This will not solve a blocking open() call, though.
For that case I've tested the following solution, and it works well for named pipes: it opens the file in non-blocking mode, then switches the descriptor back to blocking and uses select() to time out if nothing becomes available.
import sys, os, select, fcntl

# Open without blocking - this succeeds immediately
# even if no writer has the pipe open yet
f = os.open(sys.argv[1], os.O_RDONLY | os.O_NONBLOCK)

# Switch the descriptor back to blocking mode
flags = fcntl.fcntl(f, fcntl.F_GETFL, 0)
fcntl.fcntl(f, fcntl.F_SETFL, flags & ~os.O_NONBLOCK)

# Wait up to 2 seconds for data to become readable
r, w, e = select.select([f], [], [], 2.0)
if r == [f]:
    print 'ready'
    print os.read(f, 100)
else:
    print 'unready'
os.close(f)
Test this with:
mkfifo /tmp/fifo
python <code_above.py> /tmp/fifo (1st terminal)
echo abcd > /tmp/fifo (2nd terminal)
With some additional effort, the select() call can serve as the main loop of the whole program, aggregating all events; you can also use libev or libevent, or one of the Python wrappers around them.
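As a rough illustration of that idea, here is a toy select()-based main loop that multiplexes a listening TCP socket and all of its client connections in a single thread; the address and port are made up for the example.

import select, socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('127.0.0.1', 9000))  # arbitrary local port for the demo
server.listen(5)

sockets = [server]
while True:
    # Block for at most 1 second waiting for any socket to become readable
    readable, _, _ = select.select(sockets, [], [], 1.0)
    for s in readable:
        if s is server:
            conn, addr = s.accept()      # new client connection
            sockets.append(conn)
        else:
            data = s.recv(1024)
            if data:
                print('received %r' % data)
            else:                        # empty read: peer closed
                sockets.remove(s)
                s.close()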
When you can't explicitly force non-blocking behavior, say because you're just calling into an external library, it gets much harder. Threads may do the job, but it's hardly an elegant solution and is easy to get wrong.
I'm afraid that in general you can't solve this in a robust way - it really depends on WHAT you block.
IIUC, each top_block has a stop method. So you actually can run the top_block in a thread, and issue a stop if the timeout has arrived. It would be better if the top_block's wait() also had a timeout, but alas, it doesn't.
In the main thread, you then need to wait for two cases: a) the top_block completes, and b) the timeout expires. Busy-waits are evil :-), so you should use the thread's join-with-timeout to wait for the thread. If the thread is still alive after the join, you need to stop the top_block.
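A minimal sketch of that approach, assuming the GRC-generated send_seq_2.top_block from the question (run_with_timeout is just an illustrative helper name):

import threading
import send_seq_2  # GRC-generated module from the question

def run_with_timeout(tb, timeout):
    # tb.run() blocks until the flowgraph finishes, so run it in a worker
    worker = threading.Thread(target=tb.run)
    worker.start()
    worker.join(timeout)       # wait for completion, at most `timeout` seconds
    if worker.is_alive():      # still running: the timeout expired
        tb.stop()              # ask the flowgraph to shut down...
        worker.join()          # ...and wait for the worker to finish
        return False
    return True

tb = send_seq_2.top_block()
if run_with_timeout(tb, 5.0):
    print('flowgraph completed')
else:
    print('flowgraph stopped after timeout')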