 

Unicode file with python and fileinput

Tags: python, utf-8

I am becoming more and more convinced that the business of file encodings is made as confusing as possible on purpose. I have a problem reading a UTF-8 encoded file that contains just one line:

“blabla this is some text”

(note that the quotation marks are some fancy version of the standard quotation marks).

Now, I run this piece of Python code on it:

import fileinput
def charinput(paths):
    with open(paths) as fi:
        for line in fi:
            for char in line:
                yield char
i = charinput('path/to/file.txt')
for item in i:
    print(item)

I get two different results. If I run my Python code from the command prompt, the output is some strange characters, followed by an error message:

ď
»
ż
â
Traceback (most recent call last):
  File "krneki.py", line 11, in <module>
    print(item)
  File "C:\Python34\lib\encodings\cp852.py", line 19, in encode
    return codecs.charmap_encode(input,self.errors,encoding_map)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u20ac' in position
0: character maps to <undefined>

I get the idea that the problem comes from the fact that Python tries to read a "wrongly" encoded document, but is there a way to tell fileinput.input to read UTF-8?


EDIT: Some really weird stuff is happening and I have NO idea how any of it works. After saving the same file again in Notepad++, the Python code now runs within IDLE and produces the following output (newlines removed):

“blabla this is some text”

while I can get the command prompt not to crash if I first run chcp 65001. Running the file then results in

Ä»żâ€śblabla this is some text ”

Any ideas? This is a horrible mess, if you ask me, but it is vital I understand it...

5xum asked Jul 15 '14 09:07


1 Answer

Encoding

Every file is encoded. The byte 0x4C is interpreted as latin capital letter L according to the ASCII encoding, but as less-than sign ('<') according to the EBCDIC encoding. There Ain't No Such Thing As Plain Text.
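
As a quick illustration, here is a minimal sketch using Python's built-in 'ascii' and 'cp500' codecs (cp500 being one of several EBCDIC variants):

# The same byte decodes to different characters under different encodings.
raw = b"\x4C"
print(raw.decode("ascii"))   # L  (ASCII)
print(raw.decode("cp500"))   # <  (EBCDIC, code page 500)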

There are single byte character sets like ASCII that use a single byte to encode each symbol, there are double byte character sets like KS X 1001 that use two bytes to encode each symbol, and there are encodings like the popular UTF-8 that use a variable number of bytes per symbol.

UTF-8 has become the most popular encoding for new applications, so I'll give some examples: The Latin Capital Letter A is stored as a single byte: 0x41. The Left Double Quotation Mark (“) is stored as three bytes: 0xE2 0x80 0x9C. The emoji Pile of Poo is stored as four bytes: 0xF0 0x9F 0x92 0xA9.
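
You can check these byte sequences yourself; a minimal Python 3 sketch:

# Encoding the same symbols to UTF-8 shows the variable number of bytes per symbol.
print('A'.encode('utf-8'))           # b'A'                -> 1 byte
print('\u201c'.encode('utf-8'))      # b'\xe2\x80\x9c'     -> 3 bytes (left double quotation mark)
print('\U0001F4A9'.encode('utf-8'))  # b'\xf0\x9f\x92\xa9' -> 4 bytes (pile of poo)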

Any program that reads a file and has to interpret the bytes as symbols has to know (or to guess) which encoding was used.

If you are not familiar with Unicode or UTF-8 you might want to read http://www.joelonsoftware.com/articles/unicode.html

Reading Files in Python 3

Python 3's built-in function open() has an optional keyword argument encoding to support different encodings. To open a UTF-8 encoded file you can write open(filename, encoding="utf-8") and Python will take care of the decoding.
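
A minimal sketch applied to the generator from the question (the path is just a placeholder):

def charinput(path):
    # Tell open() explicitly that the file is UTF-8 encoded.
    with open(path, encoding="utf-8") as fi:
        for line in fi:
            for char in line:
                yield char

for item in charinput('path/to/file.txt'):
    print(item)

Note that printing the characters can still fail on a Windows console whose code page cannot represent them (as in the cp852 traceback above); the encoding argument only fixes the reading side.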

Also, the fileinput module supports encodings via the openhook keyword argument: fileinput.input(filename, openhook=fileinput.hook_encoded("utf-8")).
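
For example, a minimal sketch using the same placeholder path:

import fileinput

# hook_encoded makes fileinput decode the file as UTF-8.
for line in fileinput.input('path/to/file.txt',
                            openhook=fileinput.hook_encoded("utf-8")):
    for char in line:
        print(char)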

If you are not familiar with Python and Unicode or UTF-8 you should read http://docs.python.org/3/howto/unicode.html. I also found some nice tricks in http://www.chirayuk.com/snippets/python/unicode

Reading Strings in Python 2

In Python 2 open() does not know about encodings. Instead you can use the codecs module to specify which encoding should be used: codecs.open(filename, encoding="utf-8")
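
A minimal Python 2 sketch of the same generator (again with a placeholder path):

# -*- coding: utf-8 -*-
import codecs

def charinput(path):
    # codecs.open() yields unicode objects decoded from UTF-8.
    with codecs.open(path, encoding="utf-8") as fi:
        for line in fi:
            for char in line:
                yield char

for char in charinput('path/to/file.txt'):
    print(repr(char))   # printing the repr avoids console-encoding errors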

The best source for Python 2/Unicode enlightenment is http://docs.python.org/2/howto/unicode.html

Yurim answered Nov 18 '22 08:11