Requests lets you send HTTP/1.1 requests using Python. With it, you can add content like headers, form data, multipart files, and parameters via simple Python APIs. It also lets you access the response data in the same simple way.
This will not only change max_retries but also enable a backoff strategy which makes requests to all http:// addresses sleep for a period of time before retrying (up to a total of 5 times):
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

s = requests.Session()

retries = Retry(total=5,
                backoff_factor=0.1,
                status_forcelist=[500, 502, 503, 504])

s.mount('http://', HTTPAdapter(max_retries=retries))

s.get('http://httpstat.us/500')
As per the documentation for Retry: if the backoff_factor is 0.1, then sleep() will sleep for [0.05s, 0.1s, 0.2s, 0.4s, ...] between retries. It will also force a retry if the status code returned is 500, 502, 503 or 504.
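As a quick sanity check of that schedule (a minimal sketch; the exact delay formula has varied between urllib3 versions, so treat this as an illustration of the numbers quoted above rather than the library's canonical implementation):

backoff_factor = 0.1
# backoff_factor * (2 ** (retry_number - 1)) reproduces the documented delays
delays = [backoff_factor * (2 ** (n - 1)) for n in range(4)]
print(delays)  # [0.05, 0.1, 0.2, 0.4]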
Various other options to Retry allow for more granular control:

total – Total number of retries to allow.
connect – How many connection-related errors to retry on.
read – How many times to retry on read errors.
redirect – How many redirections to perform.
status_forcelist – A set of HTTP status codes that we should force a retry on.
backoff_factor – A backoff factor to apply between attempts after the second try (most errors are resolved immediately by a second try without a delay).
raise_on_redirect – Whether, if the number of redirects is exhausted, to raise a MaxRetryError, or to return a response with a response code in the 3xx range.
raise_on_status – Similar meaning to raise_on_redirect: whether to raise an exception, or to return a response, if the status falls in status_forcelist and retries have been exhausted.

NB: raise_on_status is relatively new, and has not made it into a release of urllib3 or requests yet. The raise_on_status keyword argument appears to have made it into the standard library at most in Python version 3.6.
To make requests retry on specific HTTP status codes, use status_forcelist. For example, status_forcelist=[503] will retry on status code 503 (service unavailable).
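For instance, a minimal sketch (reusing the session s and the imports from the snippet above):

retries = Retry(total=5, status_forcelist=[503])
s.mount('https://', HTTPAdapter(max_retries=retries))
# A 503 response is now retried up to 5 times; once retries are exhausted,
# requests raises requests.exceptions.RetryError (unless raise_on_status=False,
# in which case the final 503 response is returned instead).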
By default, the retry only fires for these conditions:

TimeoutError
HTTPException raised (from http.client in Python 3, else httplib). These seem to be low-level HTTP exceptions, like a URL or protocol not formed correctly.
SocketError
ProtocolError
Notice that these are all exceptions that prevent a regular HTTP response from being received. If any regular response is generated, no retry is done. Without using the status_forcelist, even a response with status 500 will not be retried.
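To see this concretely (a sketch reusing httpstat.us and the imports from above): without a status_forcelist, a plain Retry hands the 500 response straight back:

s = requests.Session()
s.mount('http://', HTTPAdapter(max_retries=Retry(total=5)))
r = s.get('http://httpstat.us/500')
# No retry happens: a well-formed response was received, so r is
# returned immediately with r.status_code == 500.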
To make it behave in a manner which is more intuitive for working with a remote API or web server, I would use the above code snippet, which forces retries on statuses 500, 502, 503 and 504, all of which are not uncommon on the web and (possibly) recoverable given a big enough backoff period.
EDITED: Import the Retry class directly from urllib3.
It is the underlying urllib3 library that does the retrying. To set a different maximum retry count, use alternative transport adapters:
from requests.adapters import HTTPAdapter
s = requests.Session()
s.mount('http://stackoverflow.com', HTTPAdapter(max_retries=5))
The max_retries argument takes an integer or a Retry() object; the latter gives you fine-grained control over what kinds of failures are retried (an integer value is turned into a Retry() instance which only handles connection failures; errors after a connection is made are by default not handled, as these could lead to side effects).
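A minimal sketch of the two forms (Retry imported as shown earlier):

# Integer: retries connection failures only.
s.mount('https://', HTTPAdapter(max_retries=3))

# Retry() object: fine-grained control, e.g. also retry on read errors
# and on selected status codes.
s.mount('https://', HTTPAdapter(
    max_retries=Retry(total=3, read=3, status_forcelist=[500, 502])))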
Old answer, predating the release of requests 1.2.1:
The requests library doesn't really make this configurable, nor does it intend to (see this pull request). Currently (requests 1.1), the retries count is set to 0. If you really want to set it to a higher value, you'll have to set this globally:
import requests
requests.adapters.DEFAULT_RETRIES = 5
This constant is not documented; use it at your own peril as future releases could change how this is handled.
Update: and this did change; in version 1.2.1 the option to set the max_retries parameter on the HTTPAdapter() class was added, so that now you have to use alternative transport adapters, see above. The monkey-patch approach no longer works, unless you also patch the HTTPAdapter.__init__() defaults (very much not recommended).
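For completeness, that discouraged patch would look roughly like this (a sketch only; it assumes max_retries is always passed by keyword or left at its default, as requests itself does):

from requests.adapters import HTTPAdapter

_old_init = HTTPAdapter.__init__

def _patched_init(self, *args, **kwargs):
    # Force a default of 5 retries on every adapter created from here on.
    kwargs.setdefault('max_retries', 5)
    _old_init(self, *args, **kwargs)

HTTPAdapter.__init__ = _patched_init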
Be careful, Martijn Pieters's answer isn't suitable for version 1.2.1+. You can't set it globally without patching the library.
You can do this instead:
import requests
from requests.adapters import HTTPAdapter
s = requests.Session()
s.mount('http://www.github.com', HTTPAdapter(max_retries=5))
s.mount('https://www.github.com', HTTPAdapter(max_retries=5))
After struggling a bit with some of the answers here, I found a library called backoff that worked better for my situation. A basic example:
import backoff
import requests

URL = "https://example.com/publish"  # illustrative endpoint, not from the original

@backoff.on_exception(
    backoff.expo,
    requests.exceptions.RequestException,
    max_tries=5,
    # Give up on 4xx responses; keep retrying when there was no response
    # at all or the server answered with a 5xx status.
    giveup=lambda e: e.response is not None and e.response.status_code < 500,
)
def publish(data):
    r = requests.post(URL, timeout=10, json=data)
    r.raise_for_status()
I'd still recommend giving the library's native functionality a shot, but if you run into any problems or need broader control, backoff is an option.
A cleaner way to gain more control might be to package the retry logic into a function and make that function retryable using a decorator, whitelisting the exceptions that trigger a retry.
I have created one here: http://www.praddy.in/retry-decorator-whitelisted-exceptions/
Reproducing the code from that link:
import functools
import time

from urllib3.exceptions import ConnectTimeoutError

def retry(exceptions, delay=0, times=2):
    """
    A decorator for retrying a function call with a specified delay in case of a set of exceptions

    Parameter List
    --------------
    :param exceptions: A tuple of all exceptions that need to be caught for retry
                       e.g. retry(exceptions=(Timeout, ReadTimeout))
    :param delay: Amount of delay (seconds) needed between successive retries.
    :param times: Number of times the function should be retried.
    """
    def outer_wrapper(function):
        @functools.wraps(function)
        def inner_wrapper(*args, **kwargs):
            final_excep = None
            for counter in range(times):
                if counter > 0:
                    time.sleep(delay)
                try:
                    return function(*args, **kwargs)
                except exceptions as e:
                    final_excep = e  # or log it
            if final_excep is not None:
                raise final_excep
        return inner_wrapper
    return outer_wrapper

@retry(exceptions=(TimeoutError, ConnectTimeoutError), delay=0, times=3)
def call_api():
    pass  # the body was omitted in the original; place the requests call here