URL works fine from browser or wget, but comes up empty from Python or cURL

When I try to open http://www.comicbookdb.com/browse.php (which works fine in my browser) from Python, I get an empty response:

>>> import urllib.request
>>> content = urllib.request.urlopen('http://www.comicbookdb.com/browse.php')
>>> print(content.read())
b''

The same thing happens when I set a User-Agent header:

>>> opener = urllib.request.build_opener()
>>> opener.addheaders = [('User-agent', 'Mozilla/5.0')]
>>> content = opener.open('http://www.comicbookdb.com/browse.php')
>>> print(content.read())
b''

Or when I use httplib2 instead:

>>> import httplib2
>>> h = httplib2.Http('.cache')
>>> response, content = h.request('http://www.comicbookdb.com/browse.php')
>>> print(content)
b''
>>> print(response)
{'cache-control': 'no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'content-location': 'http://www.comicbookdb.com/browse.php', 'expires': 'Thu, 19 Nov 1981 08:52:00 GMT', 'content-length': '0', 'set-cookie': 'PHPSESSID=590f5997a91712b7134c2cb3291304a8; path=/', 'date': 'Wed, 25 Dec 2013 15:12:30 GMT', 'server': 'Apache', 'pragma': 'no-cache', 'content-type': 'text/html', 'status': '200'}

Or when I try to download it using cURL:

C:\>curl -v http://www.comicbookdb.com/browse.php
* About to connect() to www.comicbookdb.com port 80
*   Trying 208.76.81.137... * connected
* Connected to www.comicbookdb.com (208.76.81.137) port 80
> GET /browse.php HTTP/1.1
User-Agent: curl/7.13.1 (i586-pc-mingw32msvc) libcurl/7.13.1 zlib/1.2.2
Host: www.comicbookdb.com
Pragma: no-cache
Accept: */*

< HTTP/1.1 200 OK
< Date: Wed, 25 Dec 2013 15:20:06 GMT
< Server: Apache
< Expires: Thu, 19 Nov 1981 08:52:00 GMT
< Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
< Pragma: no-cache
< Set-Cookie: PHPSESSID=0a46f2d390639da7eb223ad47380b394; path=/
< Content-Length: 0
< Content-Type: text/html
* Connection #0 to host www.comicbookdb.com left intact
* Closing connection #0

Opening the URL in a browser or downloading it with Wget seems to work fine, though:

C:\>wget http://www.comicbookdb.com/browse.php
--16:16:26--  http://www.comicbookdb.com/browse.php
           => `browse.php'
Resolving www.comicbookdb.com... 208.76.81.137
Connecting to www.comicbookdb.com[208.76.81.137]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]

    [    <=>                              ] 40,687        48.75K/s

16:16:27 (48.75 KB/s) - `browse.php' saved [40687]

As does downloading a different file from the same server:

>>> content = urllib.request.urlopen('http://www.comicbookdb.com/index.php')
>>> print(content.read(100))
b'<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"\n\t\t"http://www.w3.org/TR/1999/REC-html'

So why doesn't the other URL work?

asked Dec 25 '13 by Anym

1 Answer

It seems the server expects a Connection: keep-alive header, which curl does not add by default (and I expect the same is true of the other failing clients).

With curl you can use this command, which will display a non-empty response:

curl -v -H 'Connection: keep-alive' http://www.comicbookdb.com/browse.php
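
The third-party requests library sends a Connection: keep-alive header by default, so a plain GET should also work here. A minimal sketch, assuming requests is installed:

import requests

# requests' default headers already include 'Connection': 'keep-alive',
# so no extra configuration should be needed for this server.
response = requests.get('http://www.comicbookdb.com/browse.php')
print(response.status_code)
print(len(response.text))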

With httplib2, as used in the question, you can pass the header explicitly:

import httplib2
h = httplib2.Http('.cache')
response, content = h.request('http://www.comicbookdb.com/browse.php', headers={'Connection':'keep-alive'})
print(content)
print(response)
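
Note that simply adding the header to the question's urllib.request attempt would likely not help: CPython's urllib.request overwrites any user-supplied Connection header with Connection: close before sending the request. If you want to stay in the standard library, http.client lets you control the header yourself. A minimal sketch, using the host and path from the question:

import http.client

# Unlike urllib.request, http.client passes user-supplied headers
# through unchanged, so the keep-alive hint actually reaches the server.
conn = http.client.HTTPConnection('www.comicbookdb.com')
conn.request('GET', '/browse.php', headers={'Connection': 'keep-alive'})
resp = conn.getresponse()
print(resp.status)
print(len(resp.read()))
conn.close()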
answered Oct 12 '22 by Nicolas Cortot