This is somewhat related to my question here.
I process tons of text (mainly HTML and XML) fetched via HTTP. I'm looking for a Python library that can do smart encoding detection based on different strategies and convert text to Unicode using the best possible character-encoding guess.
I found that chardet does auto-detection extremely well. However, auto-detecting everything is a problem, because it is SLOW and very much against the standards; as the chardet FAQ points out, I don't want to ignore the standards.
From the same FAQ, here is the list of places where I want to look for the encoding:

- the Content-Type HTTP header
- the <meta http-equiv="content-type"> element in the <head> of a web page, for HTML documents

Basically I want to be able to look in all those places and also deal with conflicting information automatically.
Is there such a library out there, or do I need to write something like the sketch below myself?
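For context, this is roughly the plumbing I'd rather not maintain by hand. A minimal sketch of checking both sources (the URL is hypothetical and the meta-tag regex is deliberately naive):

```python
import re
from urllib.request import urlopen

resp = urlopen("http://example.com/page.html")  # hypothetical URL
body = resp.read()

# 1. charset parameter of the Content-Type header (None if absent)
header_charset = resp.headers.get_content_charset()

# 2. <meta http-equiv="content-type" ...> inside the document itself
meta_charset = None
m = re.search(rb'charset=["\']?([\w-]+)', body[:2048], re.IGNORECASE)
if m:
    meta_charset = m.group(1).decode("ascii")

# Naive precedence: header wins, then meta, then a UTF-8 fallback
encoding = header_charset or meta_charset or "utf-8"
text = body.decode(encoding, errors="replace")
```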
Use BeautifulSoup's UnicodeDammit, which in turn uses chardet.
chardet by itself is quite useful for the general case (determining a text's encoding) but, as you say, slow. UnicodeDammit adds extra features on top of chardet; in particular, it can look up the encoding explicitly specified in an XML encoding declaration.
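A minimal illustration of UnicodeDammit in action, using the current bs4 import path (BeautifulSoup 3 imported it from the BeautifulSoup module instead):

```python
from bs4 import UnicodeDammit

# Bytes whose encoding we don't know up front
dammit = UnicodeDammit(b"Sacr\xc3\xa9 bleu!")
print(dammit.unicode_markup)     # 'Sacré bleu!'
print(dammit.original_encoding)  # 'utf-8' -- the best guess it settled on
```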
As for the HTTP Content-Type header, I think you need to read that yourself to extract the charset parameter, and then pass it to UnicodeDammit in the fromEncoding parameter.
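Putting those together might look like the sketch below. Note that the parameter name has changed across BeautifulSoup versions; this uses bs4's documented override_encodings list rather than fromEncoding, and the URL is hypothetical:

```python
from urllib.request import urlopen
from bs4 import UnicodeDammit

resp = urlopen("http://example.com/")          # hypothetical URL
charset = resp.headers.get_content_charset()   # charset= from Content-Type, or None

# Feed the header-declared encoding in as the first candidate;
# UnicodeDammit falls back to document-level declarations and detection.
dammit = UnicodeDammit(resp.read(), [charset] if charset else [], is_html=True)
text = dammit.unicode_markup
```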
As for resolving conflicts, UnicodeDammit gives precedence to an explicitly stated encoding (provided that encoding doesn't produce decoding errors). See the docs for full details.
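For example, an encoding declared in the document wins over statistical detection when it decodes cleanly (a sketch; the snippet is made up):

```python
from bs4 import UnicodeDammit

# The meta tag claims ISO-8859-1; since these bytes decode cleanly in it,
# the declared encoding takes precedence over any statistical guess.
snippet = b'<html><head><meta charset="iso-8859-1"></head><body>caf\xe9</body></html>'
dammit = UnicodeDammit(snippet, is_html=True)
print(dammit.original_encoding)  # 'iso-8859-1'
print(dammit.unicode_markup)     # ...'café'...
```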
BeautifulSoup (the HTML parser) incorporates a class called UnicodeDammit that does just that. Have a look and see if you like it.