I'm writing a multi-threaded script to retrieve content from a website. The site is not very stable, so every now and then an HTTP request hangs and cannot even be timed out by socket.setdefaulttimeout(). Since I have no control over that website, the only thing I can do is improve my code, but I'm running out of ideas right now.
Sample code:
import socket
import mechanize
import urllib2

socket.setdefaulttimeout(150)
MechBrowser = mechanize.Browser()
Header = {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.8) Gecko/20100722 Firefox/3.6.8 GTB7.1 (.NET CLR 3.5.30729)'}
Url = "http://example.com"
Data = "Justatest=whatever&letstry=doit"
Request = urllib2.Request(Url, Data, Header)
Response = MechBrowser.open(Request)
Response.close()
What should I do to force the hanging requests to quit? Actually, I want to know why socket.setdefaulttimeout(150) is not working in the first place. Can anybody help me out?
Added: (and yes, the problem is still not solved)
OK, I've followed tomasz's suggestion and changed the code to MechBrowser.open(Request, timeout=60), but the same thing happens. I still get hanging requests randomly; sometimes it's several hours apart and other times it could be several days. What do I do now? Is there a way to force these hanging requests to quit?
While socket.setdefaulttimeout() will set the default timeout for new sockets, if you're not using the sockets directly the setting can easily be overwritten. In particular, if the library calls setblocking() on its socket, it will reset the timeout.
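You can watch the overwrite happen with a bare socket (a minimal sketch; the 150-second value is taken from the question):

import socket

socket.setdefaulttimeout(150)
s = socket.socket()
print s.gettimeout()    # 150.0 -- the default was applied to the new socket
s.setblocking(1)        # a library doing this on its socket...
print s.gettimeout()    # None -- ...silently puts it back into blocking mode, no timeout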
urllib2.urlopen() has a timeout argument; however, there is no timeout in urllib2.Request. As you're using mechanize, you should refer to its documentation:
Since Python 2.6, urllib2 uses a .timeout attribute on Request objects internally. However, urllib2.Request has no timeout constructor argument, and urllib2.urlopen() ignores this parameter. mechanize.Request has a timeout constructor argument which is used to set the attribute of the same name, and mechanize.urlopen() does not ignore the timeout attribute.
source: http://wwwsearch.sourceforge.net/mechanize/documentation.html
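In practice that means letting mechanize handle the timeout, either per Request or per open() call; a minimal sketch based on the documentation quoted above:

import mechanize

browser = mechanize.Browser()

# Option 1: timeout as a constructor argument of mechanize.Request
request = mechanize.Request("http://example.com", "Justatest=whatever&letstry=doit", timeout=60.0)
response = browser.open(request)
response.close()

# Option 2: timeout passed directly to open()
response = browser.open("http://example.com", timeout=60.0)
response.close()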
---EDIT---
If either socket.setdefaulttimeout() or passing timeout to mechanize works with small values but not with higher ones, the source of the problem might be completely different. One thing is that your library may open multiple connections (credit here to @Cédric Julien), so the timeout applies to every single attempt to open a socket; if the request doesn't stop at the first failure, it can take up to timeout * num_of_conn seconds. The other thing is socket.recv: if the connection is really slow and you're unlucky enough, the whole request can take up to timeout * incoming_bytes, since every socket.recv call could return just one byte and each such call could take timeout seconds. As you're unlikely to suffer from exactly this dark scenario (one byte per timeout seconds? you'd have to be very unlucky), it's very likely that a request will take ages on very slow connections with very high timeouts.
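To put numbers on that worst case (hypothetical figures, just to illustrate the arithmetic):

timeout = 150            # seconds per socket operation, as in the question
num_of_conn = 4          # hypothetical number of connection attempts
incoming_bytes = 2048    # hypothetical size of the response body

print timeout * num_of_conn      # 600 -- up to 10 minutes spent just connecting
print timeout * incoming_bytes   # 307200 -- over 3 days if recv() returns one byte at a time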
The only solution is to force a timeout on the whole request, but there's nothing sockets can do about that. If you're on Unix, you can use a simple solution based on the ALARM signal: you arrange for the signal to be raised in timeout seconds, and the request will be interrupted (don't forget to catch the exception). You might like to use the with statement to make it clean and easy to use, for example:
import signal, time

def request(arg):
    """Your http request"""
    time.sleep(2)
    return arg

class Timeout():
    """Timeout class using ALARM signal"""
    class Timeout(Exception):
        pass

    def __init__(self, sec):
        self.sec = sec

    def __enter__(self):
        signal.signal(signal.SIGALRM, self.raise_timeout)
        signal.alarm(self.sec)

    def __exit__(self, *args):
        signal.alarm(0)    # disable alarm

    def raise_timeout(self, *args):
        raise Timeout.Timeout()

# Run blocks of code with timeouts
try:
    with Timeout(3):
        print request("Request 1")
    with Timeout(1):
        print request("Request 2")
except Timeout.Timeout:
    print "Timeout"

# Prints "Request 1" and then "Timeout"
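Applied to the request from the question, it would look something like this (a sketch reusing the MechBrowser and Request objects defined above; note that signal.alarm only works in the main thread, which matters for a multi-threaded script):

try:
    with Timeout(60):    # abort the whole request after 60 seconds
        Response = MechBrowser.open(Request)
        Response.close()
except Timeout.Timeout:
    print "Request hung, aborted after 60 seconds"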
If you want to be more portable than this, you have to use bigger guns, for example multiprocessing: spawn a separate process to perform the request and terminate it if it's overdue. Since it runs in a separate process, you need something to transfer the result back to your application, such as a multiprocessing.Pipe. Here's an example:
from multiprocessing import Process, Pipe
import time

def request(sleep, result):
    """Your http request example"""
    time.sleep(sleep)
    return result

class TimeoutWrapper():
    """Timeout wrapper using a separate process"""
    def __init__(self, func, timeout):
        self.func = func
        self.timeout = timeout

    def __call__(self, *args, **kargs):
        """Run func with timeout"""
        def pmain(pipe, func, args, kargs):
            """Function to be called in a separate process"""
            result = func(*args, **kargs)  # call func with passed arguments
            pipe.send(result)              # send result back through the pipe

        parent_pipe, child_pipe = Pipe()   # Pipe for retrieving the result of func
        p = Process(target=pmain, args=(child_pipe, self.func, args, kargs))
        p.start()
        p.join(self.timeout)               # wait for the process to end
        if p.is_alive():
            p.terminate()                  # timeout, kill the process
            return None                    # or raise an exception if None is an acceptable result
        else:
            return parent_pipe.recv()      # OK, get the result

print TimeoutWrapper(request, 3)(1, "OK")       # prints "OK"
print TimeoutWrapper(request, 1)(2, "Timeout")  # prints "None"
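Wrapping the actual request from the question would then look roughly like this (a sketch: the request is moved into a module-level function so the child process can run it, and on Windows you would also need the usual if __name__ == '__main__' guard):

import mechanize, urllib2

def fetch(url, data, headers):
    """The request from the question, run inside the child process"""
    browser = mechanize.Browser()
    response = browser.open(urllib2.Request(url, data, headers))
    try:
        return response.read()
    finally:
        response.close()

body = TimeoutWrapper(fetch, 60)("http://example.com", "Justatest=whatever&letstry=doit", {})
if body is None:
    print "Request hung, child process terminated"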
You really don't have much choice if you want to force the request to terminate after a fixed number of seconds. socket.timeout provides a timeout for a single socket operation (connect/recv/send), but if a request involves several such operations it can still run for a very long time.
From the mechanize documentation:
Since Python 2.6, urllib2 uses a .timeout attribute on Request objects internally. However, urllib2.Request has no timeout constructor argument, and urllib2.urlopen() ignores this parameter. mechanize.Request has a timeout constructor argument which is used to set the attribute of the same name, and mechanize.urlopen() does not ignore the timeout attribute.
Perhaps you should try replacing urllib2.Request with mechanize.Request.