When you download a file with urllib or urllib2, you can find out whether a charset parameter was sent in the Content-Type header:
import urllib2

fp = urllib2.urlopen(request)
# getparam() pulls the charset parameter out of the Content-Type header, if one was sent
charset = fp.headers.getparam('charset')
You can use BeautifulSoup to locate a meta element in the HTML:
import BeautifulSoup

soup = BeautifulSoup.BeautifulSoup(data)
# match <meta http-equiv="Content-Type"> case-insensitively; guard against tags that lack the attribute
meta = soup.findAll('meta', {'http-equiv': lambda v: v and v.lower() == 'content-type'})
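If a matching tag is found, the charset still has to be pulled out of its content attribute. A minimal regex-based sketch, assuming the BeautifulSoup result above:

import re

if meta:
    # e.g. content="text/html; charset=utf-8"
    content = meta[0].get('content', '')
    m = re.search(r'charset=([\w-]+)', content, re.I)
    if m:
        charset = m.group(1)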
If neither is available, browsers typically fall back to user configuration, combined with auto-detection. As rajax proposes, you could use the chardet module. If you have user configuration available telling you that the page should be Chinese (say), you may be able to do better.
Use the Universal Encoding Detector:
>>> import urllib2, chardet
>>> chardet.detect(urllib2.urlopen("http://google.cn/").read())
{'encoding': 'GB2312', 'confidence': 0.99}
The other option would be to just use wget:
import os

# shell out to wget to fetch the page into a local file, then read the raw bytes back
h = os.popen('wget -q -O foo1.txt http://foo.html')
h.close()
s = open('foo1.txt').read()
It seems like you need a hybrid of the answers presented: check the HTTP headers, then look for <meta> tags using Beautiful Soup or another method, and only then fall back to content-based detection (see the sketch after this answer). I honestly don't believe you're going to find anything better than that.
In fact, if you read further into the FAQ you linked to in the comments on the other answer, that's what the author of the detector library advocates.
If you believe the FAQ, this is what browsers do (as requested in your original question), since the detector is a port of the Firefox sniffing code.
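A minimal sketch of that hybrid order, assuming urllib2, BeautifulSoup 3 and chardet are all installed (the helper name fetch_charset is made up for illustration):

import re
import urllib2
import chardet
import BeautifulSoup

def fetch_charset(url):
    # hypothetical helper: headers first, then <meta>, then content sniffing
    fp = urllib2.urlopen(url)
    data = fp.read()

    # 1. charset parameter of the HTTP Content-Type header
    charset = fp.headers.getparam('charset')
    if charset:
        return charset

    # 2. <meta http-equiv="Content-Type"> inside the document itself
    soup = BeautifulSoup.BeautifulSoup(data)
    meta = soup.findAll('meta', {'http-equiv': lambda v: v and v.lower() == 'content-type'})
    if meta:
        m = re.search(r'charset=([\w-]+)', meta[0].get('content', ''), re.I)
        if m:
            return m.group(1)

    # 3. last resort: guess from the bytes
    return chardet.detect(data).get('encoding')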
I would use html5lib for this.
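For example, a sketch that assumes html5lib exposes the encoding it settled on via the parser's documentEncoding attribute, and that raw_bytes is the undecoded page body:

import html5lib

# parse raw bytes so html5lib can sniff the encoding (BOM, <meta>, fallback) itself
parser = html5lib.HTMLParser()
doc = parser.parse(raw_bytes)
print parser.documentEncoding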
Scrapy downloads a page and detects the correct encoding for it, unlike requests.get(url).text or urlopen. To do so it tries to follow browser-like rules; this is the best one can do, because website owners have an incentive to make their websites work in a browser. Scrapy needs to take HTTP headers, <meta> tags, BOM marks and differences in encoding names into account.
Content-based guessing (chardet, UnicodeDammit) on its own is not a correct solution, as it may fail; it should only be used as a last resort, when headers, <meta> tags or BOM marks are not available or provide no information.
You don't have to use Scrapy to get its encoding detection functions; they are released (along with some other stuff) in a separate library called w3lib: https://github.com/scrapy/w3lib.
To get the page encoding and a Unicode body, use the w3lib.encoding.html_to_unicode function with content-based guessing as a fallback:
import chardet
from w3lib.encoding import html_to_unicode

def _guess_encoding(data):
    # content-based fallback, used only when headers/<meta>/BOM give no answer
    return chardet.detect(data).get('encoding')

detected_encoding, html_content_unicode = html_to_unicode(
    content_type_header,    # value of the Content-Type response header
    html_content_bytes,     # raw, undecoded response body
    default_encoding='utf8',
    auto_detect_fun=_guess_encoding,
)
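The two inputs above are not defined in the snippet; a usage sketch with urllib2 might look like this (URL and variable names are just illustrative):

import urllib2

response = urllib2.urlopen('http://example.com/')
content_type_header = response.headers.get('content-type')
html_content_bytes = response.read()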
Instead of trying to fetch a page and then figure out which charset the browser would use, why not just use a browser to fetch the page and check what charset it uses?
import threading
import pythoncom
from win32com.client import DispatchWithEvents

stopEvent = threading.Event()

class EventHandler(object):
    def OnDownloadBegin(self):
        pass

def waitUntilReady(ie):
    """
    copypasted from
    http://mail.python.org/pipermail/python-win32/2004-June/002040.html
    """
    if ie.ReadyState != 4:
        while 1:
            print "waiting"
            pythoncom.PumpWaitingMessages()
            stopEvent.wait(.2)
            if stopEvent.isSet() or ie.ReadyState == 4:
                stopEvent.clear()
                break

# drive Internet Explorer via COM, let it load the page, then ask it which charset it chose
ie = DispatchWithEvents("InternetExplorer.Application", EventHandler)
ie.Visible = 0
ie.Navigate('http://kskky.info')
waitUntilReady(ie)
d = ie.Document
print d.CharSet
BeautifulSoup does this with UnicodeDammit: Unicode, Dammit
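A minimal sketch with the bs4 version of UnicodeDammit, where raw_bytes stands for the undecoded page body (in BeautifulSoup 3 the attribute names differ):

from bs4 import UnicodeDammit

dammit = UnicodeDammit(raw_bytes)
print dammit.original_encoding      # the encoding UnicodeDammit settled on
print dammit.unicode_markup[:100]   # the document decoded to unicode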