UTF-8 is one of the most commonly used encodings, and Python often defaults to using it.
As per the documentation: "This allows you to switch from the default ASCII to other encodings such as UTF-8, which the Python runtime will use whenever it has to decode a string buffer to unicode."
This function is only available at Python start-up time, when Python scans the environment. It has to be called in a system-wide module, sitecustomize.py. After this module has been evaluated, the setdefaultencoding() function is removed from the sys module. The only way to actually use it later is with a reload hack that brings the attribute back.
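For context, here is a minimal sketch of that reload hack (Python 2 only, shown purely to illustrate what the rest of this page argues against):

import sys
reload(sys)                      # brings setdefaultencoding back into the sys module
sys.setdefaultencoding("utf-8")  # the hack everyone copies; the answers below explain why it is dangerous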
Also, the use of sys.setdefaultencoding() has always been discouraged, and it has become a no-op in py3k. The encoding of py3k is hard-wired to "utf-8" and changing it raises an error.
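A quick check under Python 3 confirms this (a small sketch):

import sys
print(sys.getdefaultencoding())            # 'utf-8', hard-wired
print(hasattr(sys, "setdefaultencoding"))  # False: the function is gone entirely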
The answer is NEVER! (unless you really know what you're doing)
9/10 times the problem can be resolved with a proper understanding of encoding/decoding.
1/10 people have an incorrectly defined locale or environment and need to set:
PYTHONIOENCODING="UTF-8"
in their environment to fix console printing problems.
sys.setdefaultencoding("utf-8") (struck through to avoid re-use) changes the default encoding/decoding used whenever Python 2.x needs to convert a unicode() to a str() (and vice versa) and the encoding is not given, e.g.:
str(u"\u20AC")
unicode("€")
"{}".format(u"\u20AC")
In Python 2.x, the default encoding is set to ASCII and the above examples will fail with:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 0: ordinal not in range(128)
(My console is configured as UTF-8, so "€" = '\xe2\x82\xac', hence the exception on \xe2)
or
UnicodeEncodeError: 'ascii' codec can't encode character u'\u20ac' in position 0: ordinal not in range(128)
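For comparison, here is a sketch of the explicit conversions that make the above examples work without touching the default encoding:

# Explicit conversions under Python 2; no reliance on the default encoding:
u"\u20AC".encode("utf-8")        # unicode -> bytes: '\xe2\x82\xac'
"\xe2\x82\xac".decode("utf-8")   # bytes -> unicode: u'\u20ac'
u"{}".format(u"\u20AC")          # keep both sides unicode so no implicit coercion happens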
sys.setdefaultencoding("utf-8") will allow these to work for me, but won't necessarily work for people who don't use UTF-8. The default of ASCII ensures that assumptions of encoding are not baked into code.
sys.setdefaultencoding("utf-8") also has a side effect of appearing to fix sys.stdout.encoding, used when printing characters to the console. Python uses the user's locale (Linux/OS X/Un*x) or codepage (Windows) to set this. Occasionally, a user's locale is broken and just requires PYTHONIOENCODING to fix the console encoding.
Example:
$ export LANG=en_GB.gibberish
$ python
>>> import sys
>>> sys.stdout.encoding
'ANSI_X3.4-1968'
>>> print u"\u20AC"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode character u'\u20ac' in position 0: ordinal not in range(128)
>>> exit()
$ PYTHONIOENCODING=UTF-8 python
>>> import sys
>>> sys.stdout.encoding
'UTF-8'
>>> print u"\u20AC"
€
People have been developing against Python 2.x for 16 years on the understanding that the default encoding is ASCII. UnicodeError exception-handling methods have been written to handle string-to-Unicode conversions on strings that are found to contain non-ASCII.
From https://anonbadger.wordpress.com/2015/06/16/why-sys-setdefaultencoding-will-break-code/
def welcome_message(byte_string):
    try:
        # Works as long as byte_string decodes under the default (ASCII) encoding
        return u"%s runs your business" % byte_string
    except UnicodeError:
        # Otherwise guess the encoding and decode explicitly
        # (detect_encoding() is a helper assumed to exist, e.g. wrapping chardet)
        return u"%s runs your business" % unicode(byte_string,
                encoding=detect_encoding(byte_string))

print(welcome_message(u"Angstrom (Å®)".encode("latin-1")))
Prior to setting defaultencoding, this code would be unable to decode the "Å" in the ASCII encoding and would then enter the exception handler to guess the encoding and properly turn it into unicode, printing: Angstrom (Å®) runs your business. Once you've set the defaultencoding to utf-8, the code will find that the byte_string can be interpreted as utf-8, so it will mangle the data and return this instead: Angstrom (Ů) runs your business.
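To see why the mangling is silent: the Latin-1 bytes for "Å®" also happen to form one valid UTF-8 sequence, so the utf-8 decode "succeeds" with the wrong result. A small sketch (Python 2, illustrative values only):

# -*- coding: utf-8 -*-
byte_string = u"Angstrom (Å®)".encode("latin-1")  # contains the bytes '\xc5\xae'
print(repr(byte_string.decode("latin-1")))        # what was meant: u'Angstrom (Å®)'
print(repr(byte_string.decode("utf-8")))          # '\xc5\xae' decodes as one char: u'Angstrom (Ů)'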
Changing what should be a constant will have dramatic effects on modules you depend upon. It's better to just fix the data coming in and out of your code.
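What "fixing the data at the boundaries" looks like in practice, as a sketch (the function names here are hypothetical):

def read_field(raw_bytes, source_encoding):
    # boundary in: bytes -> unicode, with an explicit, known encoding
    return raw_bytes.decode(source_encoding)

def write_field(text, out_file, target_encoding="utf-8"):
    # boundary out: unicode -> bytes, again with an explicit encoding
    out_file.write(text.encode(target_encoding))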
While the setting of defaultencoding to UTF-8 isn't the root cause in the following example, it shows how problems are masked and how, when the input encoding changes, the code breaks in an unobvious way: UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 3131: invalid start byte
#!/usr/bin/env python
#-*- coding: utf-8 -*-
u = u'moçambique'
print u.encode("utf-8")
print u
chmod +x test.py
./test.py
moçambique
moçambique
./test.py > output.txt
Traceback (most recent call last):
File "./test.py", line 5, in <module>
print u
UnicodeEncodeError: 'ascii' codec can't encode character
u'\xe7' in position 2: ordinal not in range(128)
It works in the shell, but fails when stdout is redirected to a file, so encoding explicitly before printing (as in the first print line) is one workaround.
I took another approach: the script refuses to run if sys.stdout.encoding is not defined; in other words, PYTHONIOENCODING=UTF-8 must be exported first before writing to stdout.
import sys

if sys.stdout.encoding is None:
    print >> sys.stderr, "please set PYTHONIOENCODING=UTF-8, for example: export PYTHONIOENCODING=UTF-8, when writing to stdout."
    sys.exit(1)
So, using the same example:
export PYTHONIOENCODING=UTF-8
./test.py > output.txt
will work
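Alternatively, the script can be made independent of sys.stdout.encoding altogether by always emitting UTF-8 bytes itself; a sketch:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys

u = u'moçambique'
sys.stdout.write(u.encode("utf-8") + "\n")   # explicit bytes out; works in a tty and when redirected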
The first danger lies in reload(sys).
When you reload a module, you actually get two copies of the module in your runtime. The old module is a Python object like everything else, and it stays alive as long as there are references to it. So half of the objects end up pointing to the old module, and half to the new one. When you then make some change, you will never see it coming when some random object, still holding a reference to the old module, doesn't see the change:
(This is IPython shell)
In [1]: import sys
In [2]: sys.stdout
Out[2]: <colorama.ansitowin32.StreamWrapper at 0x3a2aac8>
In [3]: reload(sys)
Out[3]: <module 'sys' (built-in)>
In [4]: sys.stdout
Out[4]: <open file '<stdout>', mode 'w' at 0x00000000022E20C0>
In [11]: import IPython.terminal
In [14]: IPython.terminal.interactiveshell.sys.stdout
Out[14]: <colorama.ansitowin32.StreamWrapper at 0x3a9aac8>
Now, sys.setdefaultencoding() proper. All that it affects is the implicit conversion between str and unicode. Now, utf-8 is the sanest encoding on the planet (backward-compatible with ASCII and all), so the conversion "just works"; what could possibly go wrong?
Well, anything. And that is the danger.
Code may rely on a UnicodeError being thrown for non-ASCII input, or it may do the transcoding with an error handler, which now produces an unexpected result. And since all code is tested with the default setting, you're strictly in "unsupported" territory here, and no-one gives you guarantees about how their code will behave.
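One small sketch of how behaviour can silently flip (illustrative values):

# -*- coding: utf-8 -*-
u = u"caf\xe9"        # u"café"
b = "caf\xc3\xa9"     # the same text as UTF-8 bytes
# Default ASCII: the implicit decode fails, Python emits a UnicodeWarning and
# the comparison is False -- noisy, but visibly wrong.
# After sys.setdefaultencoding("utf-8"): the bytes decode silently and the
# comparison becomes True, so any code path keyed off this test now flips.
print(u == b)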