Still working my way around Python whenever work permits...
I'm querying a number of internal web UIs using a script that calls urllib2.urlopen. I'm wondering how I can get the size of the page content from each request. I can't seem to figure this one out.
Thanks in advance,
MHibbin
The key here is that urlopen returns a file-like object from which you read the HTML. Please note that the urllib.urlopen function has been marked as deprecated since Python 2.6; it's recommended to use urllib2 instead.
urllib2 is a Python module for fetching URLs (Uniform Resource Locators). It offers a very simple interface in the form of the urlopen function, which can fetch URLs using a variety of different protocols, and it also defines functions and classes to help with URL actions (basic and digest authentication, redirections, cookies, etc.).
from urllib2 import urlopen
print len(urlopen(url).read())
or
>>> result = urllib2.urlopen('http://www.spiegel.de')
>>> result.headers['content-length']
'181291'
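Note that the Content-Length header is optional (a server using chunked transfer encoding may omit it), so len(response.read()) is the more reliable measure. For anyone on Python 3, where urllib2 was folded into urllib.request, here is a minimal self-contained sketch of both approaches; it runs against a throwaway local test server rather than a real internal web UI:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

BODY = b"<html><body>hello</body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve a fixed body with an explicit Content-Length header.
        self.send_response(200)
        self.send_header("Content-Length", str(len(BODY)))
        self.end_headers()
        self.wfile.write(BODY)

    def log_message(self, *args):
        pass  # keep the test server quiet

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_address[1]
response = urlopen(url)
data = response.read()

# Size as reported by the server vs. bytes actually read.
print(response.headers["Content-Length"])  # prints 31
print(len(data))                           # prints 31

server.shutdown()
```

If the header is present, the two numbers agree; when it is missing, only len(data) is available.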