Why does unicode() use str() on my object only when no encoding is given?

I start by creating a string variable containing some non-ASCII, UTF-8 encoded data:

>>> text = 'á'
>>> text
'\xc3\xa1'
>>> text.decode('utf-8')
u'\xe1'

Using unicode() on it raises an error...

>>> unicode(text)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: 
                    ordinal not in range(128)

...but if I know the encoding, I can pass it as the second parameter:

>>> unicode(text, 'utf-8')
u'\xe1'
>>> unicode(text, 'utf-8') == text.decode('utf-8')
True

Now if I have a class that returns this text in the __str__() method:

>>> class ReturnsEncoded(object):
...     def __str__(self):
...         return text
... 
>>> r = ReturnsEncoded()
>>> str(r)
'\xc3\xa1'

unicode(r) seems to use str() on it, since it raises the same error as unicode(text) above:

>>> unicode(r)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: 
                    ordinal not in range(128)

Up to this point, everything is as expected!

But, unexpectedly, unicode(r, 'utf-8') won't even try:

>>> unicode(r, 'utf-8')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: coercing to Unicode: need string or buffer, ReturnsEncoded found

Why? Why this inconsistent behavior? Is it a bug? Is it intended? It is very awkward.

asked Sep 20 '08 by nosklo


1 Answer

The behaviour does seem confusing, but it is intentional. I reproduce here the entirety of the unicode() documentation from the Python Built-in Functions documentation (for version 2.5.2, as I write this):

unicode([object[, encoding[, errors]]])

Return the Unicode string version of object using one of the following modes:

If encoding and/or errors are given, unicode() will decode the object which can either be an 8-bit string or a character buffer using the codec for encoding. The encoding parameter is a string giving the name of an encoding; if the encoding is not known, LookupError is raised. Error handling is done according to errors; this specifies the treatment of characters which are invalid in the input encoding. If errors is 'strict' (the default), a ValueError is raised on errors, while a value of 'ignore' causes errors to be silently ignored, and a value of 'replace' causes the official Unicode replacement character, U+FFFD, to be used to replace input characters which cannot be decoded. See also the codecs module.

If no optional parameters are given, unicode() will mimic the behaviour of str() except that it returns Unicode strings instead of 8-bit strings. More precisely, if object is a Unicode string or subclass it will return that Unicode string without any additional decoding applied.

For objects which provide a __unicode__() method, it will call this method without arguments to create a Unicode string. For all other objects, the 8-bit string version or representation is requested and then converted to a Unicode string using the codec for the default encoding in 'strict' mode.

New in version 2.0. Changed in version 2.2: Support for __unicode__() added.
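
As a quick aside, here is an interpreter session of my own (not part of the original documentation or answer) showing the errors modes described above, applied to the same two UTF-8 bytes from the question:

>>> unicode('\xc3\xa1', 'ascii', 'ignore')    # undecodable bytes are silently dropped
u''
>>> unicode('\xc3\xa1', 'ascii', 'replace')   # each bad byte becomes U+FFFD
u'\ufffd\ufffd'
>>> unicode('\xc3\xa1', 'ascii', 'strict')    # the default: raises UnicodeDecodeError
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: 
                    ordinal not in range(128)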

So, when you call unicode(r, 'utf-8'), the first argument must already be an 8-bit string or a character buffer; your ReturnsEncoded instance is neither, so unicode() raises the TypeError you saw instead of ever calling __str__(). Without the 'utf-8' argument, the unicode() function looks for a __unicode__() method on your object and, not finding one, falls back to the __str__() method, as you suggested, then attempts to decode the result using the default (ASCII) codec.
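
To make that lookup order concrete, here is a small sketch of my own (the class name ReturnsUnicode is invented for this illustration, not from the question): giving the class a __unicode__() method satisfies the one-argument form, while the two-argument form still refuses anything that is not an actual string or buffer.

>>> class ReturnsUnicode(object):
...     def __str__(self):
...         return '\xc3\xa1'                   # UTF-8 encoded bytes, as before
...     def __unicode__(self):
...         return '\xc3\xa1'.decode('utf-8')   # already-decoded text
... 
>>> u = ReturnsUnicode()
>>> unicode(u)            # one-argument form: __unicode__() is consulted first
u'\xe1'
>>> unicode(u, 'utf-8')   # two-argument form: still demands a real string or buffer
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: coercing to Unicode: need string or buffer, ReturnsUnicode found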

answered Sep 23 '22 by Blair Conrad