I have this string that has been decoded from Quoted-printable to ISO-8859-1 with the email module. This gives me strings like "\xC4pple" which would correspond to "Äpple" (Apple in Swedish). However, I can't convert those strings to UTF-8.
>>> apple = "\xC4pple"
>>> apple
'\xc4pple'
>>> apple.encode("UTF-8")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc4 in position 0: ordinal not in range(128)
What should I do?
What is the difference between utf8 and latin1? They are different encodings (though the ASCII characters map to the same bytes in both). UTF-8 is an encoding of Unicode that covers all of its code points; Latin-1 can represent only 256 characters. Does this answer your question?
UTF-8 is a multibyte encoding that can represent any Unicode character. ISO 8859-1 is a single-byte encoding that can represent the first 256 Unicode characters. Both encode ASCII exactly the same way.
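To make that concrete, here is a quick Python 2 illustration (nothing assumed beyond the built-in codecs):

>>> u'\xc4pple'.encode('utf-8')       # U+00C4 becomes two bytes in UTF-8
'\xc3\x84pple'
>>> u'\xc4pple'.encode('iso-8859-1')  # ...but a single byte, 0xC4, in ISO 8859-1
'\xc4pple'
>>> u'Apple'.encode('utf-8') == u'Apple'.encode('iso-8859-1')
True                                  # pure ASCII text is byte-for-byte identical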
The latin-1 encoding in Python implements ISO_8859-1:1987 which maps all possible byte values to the first 256 Unicode code points, and thus ensures decoding errors will never occur regardless of the configured error handler.
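A small illustration of that guarantee (Python 2.7):

>>> '\xc4\xff\x00'.decode('latin-1')   # every byte value maps to a code point, so this never fails
u'\xc4\xff\x00'
>>> '\xc4\xff\x00'.decode('utf-8')     # the same bytes are not valid UTF-8
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'utf8' codec can't decode byte 0xc4 in position 0: invalid continuation byte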
This is a common problem, so here's a relatively thorough illustration.
For non-unicode strings (i.e. those without the u prefix, like u'\xc4pple'), one must decode from the native encoding (iso8859-1/latin1, unless modified with the enigmatic sys.setdefaultencoding function) to unicode, then encode to a character set that can display the characters you wish; in this case I'd recommend UTF-8.
First, here is a handy utility function that will help illuminate the patterns of Python 2.7 str and unicode:
>>> def tell_me_about(s): return (type(s), s)
>>> v = "\xC4pple"   # iso-8859-1 aka latin1 encoded string
>>> tell_me_about(v)
(<type 'str'>, '\xc4pple')
>>> v
'\xc4pple'           # representation in memory
>>> print v
?pple                # the raw bytes are sent straight to the terminal;
                     # the lone '\xc4' byte is not valid in the terminal's
                     # (UTF-8) encoding, so it shows up as "?"
>>> uv = v.decode("iso-8859-1")
>>> uv
u'\xc4pple'          # decoding iso-8859-1 gives unicode, in memory
>>> tell_me_about(uv)
(<type 'unicode'>, u'\xc4pple')
>>> print v.decode("iso-8859-1")
Äpple                # print encodes the unicode to the terminal's character set
                     # (utf-8 here, based on sys.stdout.encoding)
>>> v.decode('iso-8859-1') == u'\xc4pple'
True                 # one could have just used a unicode literal from the start
>>> u"Ä" == u"\xc4" True # the native unicode char and escaped versions are the same >>> "Ä" == u"\xc4" False # the native unicode char is '\xc3\x84' in latin1 >>> "Ä".decode('utf8') == u"\xc4" True # one can decode the string to get unicode >>> "Ä" == "\xc4" False # the native character and the escaped string are # of course not equal ('\xc3\x84' != '\xc4').
>>> u8 = v.decode("iso-8859-1").encode("utf-8")
>>> u8
'\xc3\x84pple'       # iso-8859-1 -> unicode -> utf-8
>>> tell_me_about(u8)
(<type 'str'>, '\xc3\x84pple')
>>> u16 = v.decode('iso-8859-1').encode('utf-16')
>>> tell_me_about(u16)
(<type 'str'>, '\xff\xfe\xc4\x00p\x00p\x00l\x00e\x00')
>>> tell_me_about(u8.decode('utf8'))
(<type 'unicode'>, u'\xc4pple')
>>> tell_me_about(u16.decode('utf16'))
(<type 'unicode'>, u'\xc4pple')
>>> print u8
Äpple                # the utf-8 bytes match the terminal's encoding,
                     # so the characters display correctly
>>> print u8.decode('utf-8')   # printing unicode
Äpple
>>> print u16        # printing the raw 'bytes' of u16
���pple
>>> print u16.decode('utf16')
Äpple                # printing unicode
>>> v == u8
False                # v is an iso8859-1 byte string; u8 is a utf-8 byte string
>>> v.decode('iso8859-1') == u8
False                # v.decode(...) returns unicode; u8 is still a byte string
>>> u8.decode('utf-8') == v.decode('latin1') == u16.decode('utf-16')
True                 # all decode to the same unicode in-memory representation
                     # (latin1 is iso-8859-1)
>>> u8.encode('iso8859-1')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128)
>>> u16.encode('iso8859-1')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128)
>>> v.encode('iso8859-1')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc4 in position 0: ordinal not in range(128)
Calling .encode() on a byte string makes Python 2 first decode it implicitly with the ascii codec, which is what raises these errors. One gets around this by explicitly decoding from the actual encoding (latin-1, utf8, utf16) to unicode first, e.g. u8.decode('utf8').encode('latin1').
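Continuing the same session, the explicit decode-then-encode round trip succeeds where the implicit ascii step failed:

>>> u8.decode('utf8').encode('latin1')     # unicode first, then the target encoding
'\xc4pple'
>>> u16.decode('utf16').encode('latin1')
'\xc4pple'
>>> v.decode('iso8859-1').encode('utf-8')
'\xc3\x84pple'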
So perhaps one could draw the following principles and generalizations:
- str is a set of bytes, which may hold text in one of a number of encodings such as Latin-1, UTF-8, or UTF-16
- unicode is a sequence of code points that can be encoded to any number of encodings, most commonly UTF-8 and latin-1 (iso8859-1)
- the print command has its own logic for encoding, set to sys.stdout.encoding and defaulting to UTF-8
- one must decode a str to unicode before converting it to another encoding

Of course, all of this changes in Python 3.x (a rough sketch follows).
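As a sketch of how the same operations look in Python 3, where str is always Unicode text, bytes is the raw type, and there is no implicit ascii decoding (variable names here are just illustrative):

>>> apple_bytes = b"\xC4pple"                # raw bytes, e.g. from a latin-1 source
>>> apple_text = apple_bytes.decode("iso-8859-1")
>>> apple_text
'Äpple'
>>> apple_text.encode("utf-8")               # explicit encode back to bytes
b'\xc3\x84pple'
>>> apple_bytes.encode("utf-8")              # bytes objects cannot be encoded at all
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'bytes' object has no attribute 'encode'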
Hope that is illuminating.
See also the very illustrative rants by Armin Ronacher on Unicode in Python.
Try decoding it first, then encoding:
apple.decode('iso-8859-1').encode('utf8')
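For example, with the string from the question (Python 2):

>>> apple = "\xC4pple"
>>> apple.decode('iso-8859-1')
u'\xc4pple'
>>> apple.decode('iso-8859-1').encode('utf8')
'\xc3\x84pple'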