I have just started using urllib3, and I am running into a problem straight away. Following the documentation, I started with the simple example:
Python 2.7.1+ (r271:86832, Apr 11 2011, 18:13:53)
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import urllib3
>>>
>>> http = urllib3.PoolManager()
>>> r = http.request('GET', 'http://google.com/')
Instead, I get the following error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/urllib3/request.py", line 65, in request
    **urlopen_kw)
  File "/usr/local/lib/python2.7/dist-packages/urllib3/request.py", line 78, in request_encode_url
    return self.urlopen(method, url, **urlopen_kw)
  File "/usr/local/lib/python2.7/dist-packages/urllib3/poolmanager.py", line 113, in urlopen
    return self.urlopen(method, e.new_url, **kw)
  File "/usr/local/lib/python2.7/dist-packages/urllib3/poolmanager.py", line 113, in urlopen
    return self.urlopen(method, e.new_url, **kw)
  File "/usr/local/lib/python2.7/dist-packages/urllib3/poolmanager.py", line 113, in urlopen
    return self.urlopen(method, e.new_url, **kw)
  File "/usr/local/lib/python2.7/dist-packages/urllib3/poolmanager.py", line 113, in urlopen
    return self.urlopen(method, e.new_url, **kw)
  File "/usr/local/lib/python2.7/dist-packages/urllib3/poolmanager.py", line 109, in urlopen
    return conn.urlopen(method, url, **kw)
  File "/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 309, in urlopen
    raise MaxRetryError(url)
urllib3.exceptions.MaxRetryError: Max retries exceeded for url: http://google.com/
Any clues as to why this happens? Many thanks.
The urllib3 module is a powerful, sanity-friendly HTTP client for Python. It provides thread safety, connection pooling, client-side SSL/TLS verification, file uploads with multipart encoding, helpers for retrying requests and handling HTTP redirects, gzip and deflate decoding, and proxy support for HTTP and SOCKS.
The PoolManager class automatically handles creating ConnectionPool instances for each host as needed. By default, it will keep a maximum of 10 ConnectionPool instances. If you're making requests to many different hosts, it might improve performance to increase this number:
>>> import urllib3
>>> http = urllib3.PoolManager(num_pools=50)
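For illustration, here is a minimal sketch of reusing one PoolManager across several hosts (example.com and example.org are just placeholder hosts); each distinct host gets its own ConnectionPool behind the scenes:
>>> import urllib3
>>> http = urllib3.PoolManager(num_pools=50)
>>> # connections to each host are pooled and reused across requests
>>> r1 = http.request('GET', 'http://example.com/')
>>> r2 = http.request('GET', 'http://example.org/')
>>> r1.status, r2.status  # e.g. (200, 200)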
This is a known bug that has been fixed in the master branch.
I really should have published a bugfix release last weekend with this fix, but I ran out of time. The next release this coming weekend should include this fix (and a bunch of other cool improvements). Sorry for the troubles!
Update: urllib3 v1.2 is now on PyPI, which includes this fix and more. :)
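Once you are on a fixed release (for example via pip install --upgrade urllib3), the original snippet should work; a rough sketch of what to expect, assuming google.com still answers with a redirect to www.google.com:
>>> import urllib3
>>> http = urllib3.PoolManager()
>>> r = http.request('GET', 'http://google.com/')  # the redirect is followed instead of exhausting retries
>>> r.status  # should be 200 rather than raising MaxRetryError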