Why is the content-length different when using requests compared with urlopen(url).info()?
>>> url = 'http://pymotw.com/2/urllib/index.html'
>>> requests.head(url).headers.get('content-length', None)
'8176'
>>> urllib.urlopen(url).info()['content-length']
'38227'
>>> len(requests.get(url).content)
38274
I was going to check the file size in bytes so I could split the download across multiple threads using Range requests in urllib2, but without the actual size of the file in bytes that won't work. Only len(requests.get(url).content) gives 38274, which is the closest but still not correct, and it also downloads the whole body, which is exactly what I wanted to avoid.
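Roughly what I had in mind is something like the sketch below (just an illustration; the chunk count is arbitrary and it assumes the server honours Range requests, which I haven't verified):

import urllib2

url = 'http://pymotw.com/2/urllib/index.html'
num_threads = 4

# This is the number I can't get reliably without downloading the body;
# 38227 is just the value urllib reported above, used here as a placeholder.
total_size = 38227

chunk = total_size // num_threads
parts = []
for i in range(num_threads):
    start = i * chunk
    end = total_size - 1 if i == num_threads - 1 else start + chunk - 1
    req = urllib2.Request(url)
    req.add_header('Range', 'bytes=%d-%d' % (start, end))
    # each of these reads would run in its own thread in the real version
    parts.append(urllib2.urlopen(req).read())

data = ''.join(parts)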
By default, requests will send 'Accept-Encoding': 'gzip'
as part of the request headers, and the server will respond with the compressed content:
>>> r = requests.head('http://pymotw.com/2/urllib/index.html')
>>> r.headers['content-encoding'], r.headers['content-length']
('gzip', '8201')
But if you explicitly ask for the identity encoding in the request headers, the reported content-length is the uncompressed size:
>>> r = requests.head('http://pymotw.com/2/urllib/index.html', headers={'Accept-Encoding': 'identity'})
>>> r.headers['content-length']
'38227'
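Putting the two together, one way to get the real size and then do the Range-based split is sketched below (a minimal sketch; the part count is arbitrary and it assumes the server honours Range requests):

import requests

url = 'http://pymotw.com/2/urllib/index.html'

# Ask for the identity encoding so content-length is the uncompressed size
head = requests.head(url, headers={'Accept-Encoding': 'identity'})
total_size = int(head.headers['content-length'])  # 38227 for this page

num_parts = 4  # assumed; match it to your thread count
chunk = total_size // num_parts

parts = []
for i in range(num_parts):
    start = i * chunk
    end = total_size - 1 if i == num_parts - 1 else start + chunk - 1
    r = requests.get(url, headers={'Range': 'bytes=%d-%d' % (start, end),
                                   'Accept-Encoding': 'identity'})
    # in the threaded version each range request runs concurrently
    parts.append(r.content)

data = b''.join(parts)
# len(data) == total_size only if the server actually honoured the Range header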