I'm trying to read and write a dataframe to a pipe-delimited file. Some of the characters are non-Roman letters (´, ç, ñ, etc.), but it breaks when I try to write out the accents as ASCII.
df = pd.read_csv('filename.txt', sep='|', encoding='utf-8')
<do stuff>
newdf.to_csv('output.txt', sep='|', index=False, encoding='ascii')
-------
File "<ipython-input-63-ae528ab37b8f>", line 21, in <module>
newdf.to_csv(filename,sep='|',index=False, encoding='ascii')
File "C:\Users\aliceell\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\frame.py", line 1344, in to_csv
formatter.save()
File "C:\Users\aliceell\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\formats\format.py", line 1551, in save
self._save()
File "C:\Users\aliceell\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\formats\format.py", line 1652, in _save
self._save_chunk(start_i, end_i)
File "C:\Users\aliceell\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\formats\format.py", line 1678, in _save_chunk
lib.write_csv_rows(self.data, ix, self.nlevels, self.cols, self.writer)
File "pandas\lib.pyx", line 1075, in pandas.lib.write_csv_rows (pandas\lib.c:19767)
UnicodeEncodeError: 'ascii' codec can't encode character '\xb4' in position 7: ordinal not in range(128)
If I change to_csv to use utf-8 encoding, then I can't read the file back in properly:
newdf.to_csv('output.txt', sep='|', index=False, encoding='utf-8')
pd.read_csv('output.txt', sep='|')
> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb4 in position 2: invalid start byte
My goal is to have a pipe-delimited file that retains the accents and special characters.
Also, is there an easy way to figure out which line read_csv is breaking on? Right now I don't know how to get it to show me the bad character(s).
Check the answer here; it's a much simpler solution:
newdf.to_csv('filename.csv', encoding='utf-8')
You have some characters that are not ASCII and therefore cannot be encoded as you are trying to do. I would just use `utf-8`, as suggested in a comment.
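As a rough sketch (the file name and column below are only illustrative, not from the question), writing and reading back with the same explicit encoding keeps the accents intact:

import pandas as pd

# Illustrative data containing accented characters
newdf = pd.DataFrame({'name': ['François', 'Peña', 'Ana´']})

# Write and read back with the same encoding so the special characters survive
newdf.to_csv('output.txt', sep='|', index=False, encoding='utf-8')
roundtrip = pd.read_csv('output.txt', sep='|', encoding='utf-8')
print(roundtrip)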
To check which lines are causing the issue, you can try something like this:
def is_not_ascii(string):
    # True when the value is a string containing any character outside the ASCII range;
    # non-string values (e.g. NaN) are skipped
    return isinstance(string, str) and any(ord(ch) >= 128 for ch in string)

df[df[col].apply(is_not_ascii)]
You'll need to specify the column `col` you are testing.
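If you're not sure which column holds the bad values, a small loop over the text columns (an illustrative sketch building on the function above, not part of the original answer) can print the offending rows:

# Check every object (string) column for rows containing non-ASCII characters
for col in df.select_dtypes(include='object').columns:
    bad_rows = df[df[col].apply(is_not_ascii)]
    if not bad_rows.empty:
        print(f'Non-ASCII values in column {col!r}:')
        print(bad_rows)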