
how to write a unicode csv in Python 2.7

I want to write data to files where a row from a CSV should look like this list (directly from the Python console):

row = ['\xef\xbb\xbft_11651497', 'http://kozbeszerzes.ceu.hu/entity/t/11651497.xml', "Szabolcs Mag '98 Kft.", 'ny\xc3\xadregyh\xc3\xa1za', 'ny\xc3\xadregyh\xc3\xa1za', '4400', 't\xc3\xbcnde utca 20.', 47.935175, 21.744975, u'Ny\xedregyh\xe1za', u'Borb\xe1nya', u'Szabolcs-Szatm\xe1r-Bereg', u'Ny\xedregyh\xe1zai', u'20', u'T\xfcnde utca', u'Magyarorsz\xe1g', u'4405']

Python 2's csv module does not handle Unicode directly, so I use this UnicodeWriter wrapper (the recipe from the csv module documentation):

import csv
import cStringIO
import codecs
class UnicodeWriter:
    """
    A CSV writer which will write rows to CSV file "f",
    which is encoded in the given encoding.
    """

    def __init__(self, f, dialect=csv.excel, encoding="utf-8", **kwds):
        # Redirect output to a queue
        self.queue = cStringIO.StringIO()
        self.writer = csv.writer(self.queue, dialect=dialect, **kwds)
        self.stream = f
        self.encoder = codecs.getincrementalencoder(encoding)()

    def writerow(self, row):
        self.writer.writerow([unicode(s).encode("utf-8") for s in row])
        # Fetch UTF-8 output from the queue ...
        data = self.queue.getvalue()
        data = data.decode("utf-8")
        # ... and reencode it into the target encoding
        data = self.encoder.encode(data)
        # write to the target stream
        self.stream.write(data)
        # empty queue
        self.queue.truncate(0)

    def writerows(self, rows):
        for row in rows:
            self.writerow(row)

However, these lines still produce the dreaded encoding error message below:

f.write(codecs.BOM_UTF8)
writer = UnicodeWriter(f)
writer.writerow(row)

UnicodeEncodeError: 'ascii' codec can't encode character u'\xfc' in position 9: ordinal not in range(128)

What is there to do? Thanks!

Asked Mar 29 '14 by László
1 Answer

You are passing in bytestrings containing non-ASCII data, and these are being decoded to Unicode using the default ASCII codec on this line:

self.writer.writerow([unicode(s).encode("utf-8") for s in row])

unicode(bytestring) with data that cannot be decoded as ASCII fails:

>>> unicode('\xef\xbb\xbft_11651497')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xef in position 0: ordinal not in range(128)
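By contrast, an explicit decode with the right codec succeeds. A minimal check (written with a bytes literal so it behaves the same on Python 2 and 3):

```python
# Decoding the same bytestring explicitly as UTF-8 works; the first
# three bytes are the UTF-8 BOM, which decodes to the single code
# point U+FEFF rather than raising an error.
raw = b'\xef\xbb\xbft_11651497'
text = raw.decode('utf-8')
```

The implicit `unicode(s)` call fails only because it assumes ASCII; the bytes themselves were always valid UTF-8.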

Decode the data to Unicode before passing it to the writer:

row = [v.decode('utf8') if isinstance(v, str) else v for v in row]

This assumes that your bytestring values are encoded as UTF-8. If you have a mix of encodings, decode to Unicode at the point of origin, where your program first reads the data. You really want to do that anyway, regardless of where the data came from, even if it happens to be UTF-8 already.
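Applied to a row like the one in the question, the fix is that one line. A sketch (using bytes literals and an `isinstance` check against `bytes`, which is what `str` means on Python 2, so the example also runs on Python 3):

```python
# -*- coding: utf-8 -*-
# Normalize a mixed row: decode bytestrings as UTF-8, pass unicode
# values through unchanged.
row = [b'ny\xc3\xadregyh\xc3\xa1za', u'Ny\xedregyh\xe1za', u'4405']
decoded = [v.decode('utf-8') if isinstance(v, bytes) else v for v in row]
# Every value is now a unicode string, safe to hand to UnicodeWriter.
```

Once every value is unicode, the `unicode(s).encode("utf-8")` step inside `writerow` can no longer trigger the implicit ASCII decode.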

Answered Sep 22 '22 by Martijn Pieters