According to this answer: urllib2 read to Unicode
I have to get the content-type in order to convert the page to Unicode. However, some websites don't send a "charset". For example, the ['content-type'] for this page is just "text/html", so I can't convert it to Unicode:
    encoding = urlResponse.headers['content-type'].split('charset=')[-1]
    htmlSource = unicode(htmlSource, encoding)

which fails with:

    TypeError: 'int' object is not callable
Is there a default "encoding" (English, of course)...so that if nothing is found, I can just use that?
No, there isn't. You must guess.
Trivial approach: try to decode as UTF-8. If it works, great, it's probably UTF-8. If it doesn't, choose the most likely encoding for the kinds of pages you're browsing. For English pages that's cp1252, the Windows Western European encoding. (It is like ISO-8859-1; in fact most browsers will use cp1252 instead of iso-8859-1 even if you specify that charset, so it's worth duplicating that behaviour.)
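A minimal sketch of that fallback, assuming htmlSource holds the raw bytes returned by urlResponse.read() (Python 2, to match the question):

    try:
        text = unicode(htmlSource, 'utf-8')
    except UnicodeDecodeError:
        # Not valid UTF-8; assume the Windows Western European encoding.
        # 'replace' substitutes the few bytes cp1252 cannot map, so this
        # branch never raises.
        text = unicode(htmlSource, 'cp1252', 'replace')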
If you need to guess other languages, it gets very hairy. There are existing modules to help you guess in these situations; see e.g. chardet.
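A sketch with chardet, again assuming htmlSource holds the raw bytes; chardet.detect() returns a dictionary with its best 'encoding' guess and a 'confidence' score:

    import chardet

    guess = chardet.detect(htmlSource)
    # detect() may return None for the encoding on inputs it cannot
    # classify, so keep cp1252 as a last-resort default.
    text = unicode(htmlSource, guess['encoding'] or 'cp1252', 'replace')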