I am using urlopen from the Python 2.7.1 urllib2 package to do an HTTP POST from a Windows XP machine to a remote Apache webserver (for instance the built-in web sharing of Mac OS X). The sent data contains an identifier, the data itself and a checksum; when everything arrives, the server responds with an acknowledgement. The checksum can be used to verify that everything arrived in good order.
Usually this works great. Sometimes, however, the internet connection is bad, often because the client sending the data is on a wifi or 3G connection, which results in a loss of connectivity for some arbitrary amount of time. urlopen has a timeout option to make sure this does not block your program, so it can continue.
That is what I want, but the problem is that urlopen does not stop the socket from continuing to send whatever data it still had queued when the timeout occurred. I have tested this (with the code shown below) by sending a large chunk of data to my laptop: I could see the network activity on both machines, I then disabled the wireless on the laptop, waited until the function timed out, and reactivated the wireless, and the data transfer simply continued, although the program was no longer listening for a response. I even exited the Python interpreter and the data was still being sent, so control of that must be handed over to Windows somehow.
The timeout (as I understand it) works like this:
It checks for an 'idle response time'
( [Python-Dev] Adding socket timeout to urllib2 )
If you set the timeout to 3, it will open the connection, start a counter, then try to send the data and wait for a response; if at any point before receiving the response the timer runs out, a timeout exception is raised. Note that the sending of the data does not seem to count as 'activity' as far as the timeout timer is concerned.
( urllib2 times out but doesn't close socket connection )
( Close urllib2 connection )
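This 'idle response time' behaviour is easy to reproduce with plain sockets, independent of urllib2. Here is a minimal sketch (the stalling local server is made up purely for the demonstration): sending succeeds immediately, and only the wait for a reply trips the timeout.

```python
import socket
import threading

# A throwaway local server that accepts a connection but never replies,
# simulating a peer that has gone silent (e.g. a dropped wifi link).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 0))
server.listen(1)
port = server.getsockname()[1]

def accept_and_stall():
    conn, _ = server.accept()
    threading.Event().wait(1.5)  # hold the connection open, send nothing
    conn.close()

t = threading.Thread(target=accept_and_stall)
t.start()

client = socket.create_connection(('127.0.0.1', port), timeout=0.5)
client.sendall(b'some request data')  # the send itself succeeds immediately
try:
    client.recv(1024)                 # waiting for a reply hits the timeout
    timed_out = False
except socket.timeout:
    timed_out = True
client.close()
t.join()
server.close()
print(timed_out)
```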
Apparently it is stated somewhere that when a socket is closed/dereferenced/garbage collected, its 'close' function is called, which waits for all data to be sent before closing the socket. There is, however, also a shutdown function, which stops the socket immediately and prevents any more data from being sent.
( socket.shutdown vs socket.close )
( http://docs.python.org/library/socket.html#socket.socket.close )
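For reference, the difference between the two calls can be seen directly on a connected socket pair; this is just a sketch of the API behaviour, not of what urllib2 does internally:

```python
import socket

# A connected pair of sockets standing in for client and server.
client, server = socket.socketpair()

client.send(b'pending data')

# shutdown() tears down the conversation immediately: both directions
# are closed and any further send() fails right away ...
client.shutdown(socket.SHUT_RDWR)
try:
    client.send(b'more data')
    send_failed = False
except socket.error:
    send_failed = True

# ... while close() only releases our reference to the descriptor; the OS
# may keep trying to deliver already-buffered data in the background.
client.close()
server.close()
print(send_failed)
```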
I want the connection to be 'shut down' when a timeout occurs. Otherwise my client cannot tell whether the data was received properly, and it might try to send it again. I'd rather just kill the connection and try again later, knowing that the data was (probably) not sent successfully (the server can detect this when the checksum does not match).
Here is part of the code that I used to test this. The try..except parts do not yet work as I'd expect, so any help there is also appreciated. As said before, I want the program to shut down the socket as soon as the timeout (or any other) exception is raised.
from urllib import urlencode
from urllib2 import urlopen, HTTPError, URLError
import socket
import sys


class Uploader:
    def __init__(self):
        self.URL = "http://.../"
        self.data = urlencode({'fakerange': range(0, 2000000, 1)})
        print "Data Generated"

    def upload(self):
        try:
            f = urlopen(self.URL, self.data, timeout=10)
            returncode = f.read()
        except (URLError, HTTPError) as msg:
            returncode = str(msg)
        except socket.error:
            returncode = "Socket Timeout!"
        return returncode  # this was missing, so main() always received None


def main():
    upobj = Uploader()
    returncode = upobj.upload()
    if returncode == '100':
        print "Success!"
    else:
        print "Maybe a Fail"
        print returncode
    print "The End"


if __name__ == '__main__':
    main()
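For comparison, here is a sketch of the behaviour I am after, written with plain sockets because urllib2 does not expose its underlying socket cleanly; upload_raw and the silent local server are made-up names for the demonstration. On timeout, shutdown() is called so the OS stops trying to deliver the buffered payload:

```python
import socket
import threading

def upload_raw(host, port, payload, timeout):
    # Hypothetical uploader: on timeout, shutdown() the socket so the
    # OS abandons the in-flight transmission instead of retrying it.
    sock = socket.create_connection((host, port), timeout=timeout)
    try:
        sock.sendall(payload)
        return sock.recv(1024)  # wait for the acknowledgement
    except socket.timeout:
        try:
            sock.shutdown(socket.SHUT_RDWR)  # abort immediately
        except socket.error:
            pass
        return None
    finally:
        sock.close()

# Demo against a local server that reads the upload but never acknowledges.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 0))
server.listen(1)

def silent_accept():
    conn, _ = server.accept()
    conn.recv(4096)              # receive the upload, send no reply
    threading.Event().wait(1)
    conn.close()

t = threading.Thread(target=silent_accept)
t.start()
result = upload_raw('127.0.0.1', server.getsockname()[1], b'payload', 0.3)
t.join()
server.close()
print(result)  # None: the upload timed out and the socket was shut down
```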
I found some code that might help you on this thread:
from urllib2 import urlopen
from threading import Timer

url = "http://www.python.org"

def handler(fh):
    fh.close()

fh = urlopen(url)
t = Timer(20.0, handler, [fh])
t.start()
data = fh.read()
t.cancel()
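The same Timer trick can be exercised locally without hitting python.org; in this sketch the handler calls shutdown() on a plain socket that is blocked in recv(), which makes the read return immediately (the socket pair stands in for the urlopen handle):

```python
import socket
from threading import Timer

# A connected socket pair where the peer never sends, so recv() blocks.
client, server = socket.socketpair()

def handler(sock):
    # Force the blocked recv() to return by tearing the socket down.
    sock.shutdown(socket.SHUT_RDWR)

t = Timer(0.5, handler, [client])
t.start()
data = client.recv(1024)  # blocks until the timer shuts the socket down
t.cancel()                # no-op here; cancels the timer if read finished first
client.close()
server.close()
print(data == b'')        # an empty read signals the forced shutdown
```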