I'm currently developing a CSV export with XSLT, and in my case the CSV file will be opened with Excel 99% of the time, so I have to take Excel's behavior into account.
My first problem was the German special characters in the CSV. Even though the CSV is encoded as UTF-8, Excel cannot open a UTF-8 CSV file properly: the special characters turn into weird symbols. I found a solution for this problem: I simply prepend 3 additional bytes (EF BB BF, a.k.a. the UTF-8 BOM header) to the content bytes. The UTF-8 BOM is a way of telling Excel 'hey dude, it is UTF-8, open it properly'. Problem solved!
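For illustration, a minimal sketch of that in Java (the class and method names are placeholders, not my actual code):

import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Minimal sketch: the UTF-8 BOM is just the three bytes EF BB BF
// written before the actual UTF-8 payload.
public final class Utf8BomWriter {

    private static final byte[] UTF8_BOM = {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF};

    public static void writeCsv(OutputStream out, String csvContent) throws IOException {
        out.write(UTF8_BOM);                                    // hint to Excel: this is UTF-8
        out.write(csvContent.getBytes(StandardCharsets.UTF_8)); // the CSV data itself
    }
}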
And my second problem was the separator. The default separator can be a comma or a semicolon depending on the region; I believe it is a semicolon in Germany and a comma in the UK. So, to prevent this problem, I had to add one of the lines below:
<xsl:text>sep=;</xsl:text>
or
<xsl:text>sep=,</xsl:text>
(The separator is not hard-coded in my implementation; see the sketch below.)
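To illustrate how the separator stays configurable, here is a rough sketch of passing it into the stylesheet as a parameter; the names export.xsl, data.xml and separator are placeholders, not my real code. The stylesheet would declare <xsl:param name="separator"/> and emit it with <xsl:value-of select="$separator"/>.

import java.io.OutputStream;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Rough sketch: the separator is chosen at runtime and handed to the
// stylesheet as a parameter instead of being hard-coded in the XSL.
public final class CsvTransform {

    public static void run(OutputStream out, String separator) throws TransformerException {
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("export.xsl")); // placeholder file names
        transformer.setParameter("separator", separator);        // ";" or "," per locale
        transformer.transform(new StreamSource("data.xml"), new StreamResult(out));
    }
}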
But the problem I cannot find any solution for is this: if you add "sep=;" or "sep=," at the beginning of the file while the CSV is generated with a UTF-8 BOM, the BOM no longer helps the special characters display properly! And I'm sure the BOM bytes are always at the very beginning of the byte array. This screenshot is from MS Excel on Mac OS X:
The first 3 symbols belong to the BOM header.
Have you ever had a problem like this, or do you have any suggestions? Thank you.
Edit:
Here are the screenshots:
a. With BOM and <xsl:text>sep=;</xsl:text>
b. Just with BOM
The Java code:
// Write the bytes
ServletOutputStream out = resp.getOutputStream();
if (contentType.toString().equals("CSV")) {
    // The additional bytes below are the prefix indicating that the content is in UTF-8.
    out.write(239);
    out.write(187);
    out.write(191);
}
out.write(bytes); // Content bytes, in this case the XSL output
The XSL code:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:output method="text" version="1.0" encoding="UTF-8" indent="yes" />
    <xsl:template match="/">
        <xsl:text>sep=;</xsl:text>
        <table>
            ...
        </table>
    </xsl:template>
</xsl:stylesheet>
Open your CSV file with any text editor that supports both BOM and non-BOM files, and save it again without the BOM (for example, in Notepad++, select Encoding | Encode in UTF-8 and save the file).
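If you would rather do the same thing programmatically, here is a minimal Java sketch that strips a leading UTF-8 BOM from a file; it assumes the file fits in memory, and the path is a placeholder:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;

// Minimal sketch: drop a leading UTF-8 BOM (EF BB BF) from a CSV file in place.
public final class BomStripper {

    public static void stripUtf8Bom(Path csv) throws IOException {
        byte[] bytes = Files.readAllBytes(csv);
        if (bytes.length >= 3
                && bytes[0] == (byte) 0xEF && bytes[1] == (byte) 0xBB && bytes[2] == (byte) 0xBF) {
            Files.write(csv, Arrays.copyOfRange(bytes, 3, bytes.length));
        }
    }

    public static void main(String[] args) throws IOException {
        stripUtf8Bom(Paths.get("export.csv")); // placeholder path
    }
}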
The sep= instruction is used in all the countries where the comma is used as the decimal delimiter, such as Italy. Since Excel exports CSV using semicolons instead of commas in those countries, if you want to be sure that the file can also be read in the USA or other countries, you need sep=; at the top of the file.
The ÿþ characters are the byte order mark (BOM) and are commonly found at the very start of a CSV file. They cannot be seen when the CSV is opened with Notepad or Excel; for that you need an editor that can display the BOM.
The detected encoding of the open file is displayed on the bottom bar, far right side. The supported encodings can be seen by going to Settings -> Preferences -> New Document/Default Directory and looking in the drop-down.
You are right: there is no way in Excel 2007 to get it to load both the encoding and the separator correctly across different locales when someone double-clicks a CSV file.
It seems that when you specify sep= after the BOM, Excel forgets that the BOM has told it the file is UTF-8.
You have to specify the BOM because in certain locales Excel does not detect the separator. For instance, in Danish the default separator is ;. If you output tab- or comma-separated text, Excel does not detect the separator, and in other locales, if you separate with semicolons, it doesn't load. You can test this by changing the locale format in the Windows settings; Excel then picks this up.
From this question: Is it possible to force Excel recognize UTF-8 CSV files automatically?
and its answers, it seems the only way is to use UTF-16 LE encoding with a BOM.
Note also that, as per http://wiki.scn.sap.com/wiki/display/ABAP/CSV+tests+of+encoding+and+column+separator?original_fqdn=wiki.sdn.sap.com , it seems that if you use UTF-16 LE with tab separators then it works.
I've wondered whether Excel reads sep=; and then re-calls the method that fetches the CSV text, losing the BOM. I've tried giving it incorrect text, and I can't find any workaround that tells Excel to honour both the sep and the encoding.
This is the result of my testing with Excel 2013.
If you're stuck with UTF-8, there is a workaround which consists of BOM + data + sep=;
Input (written with UTF-8 encoding)
\ufeffSome;Header;Columns
Wîth;Fàncÿ;Stûff
sep=;
Output
|Some|Header|Columns|
|Wîth|Fàncÿ |Stûff  |
|sep=|      |       |
The issue with this solution is that while Excel interprets sep=; properly, it displays sep= (yes, it swallows the ;) in the first column of the last row.
However, if you can write the file as UTF-16 LE, then there is an actual solution: use the \t delimiter without specifying sep, and Excel will play ball.
Input (written with UTF-16 LE encoding)
\ufeffSome;Header;Columns
Wîth;Fàncÿ;Stûff
Output
|Some|Header|Columns|
|Wîth|Fàncÿ |Stûff  |
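For completeness, a minimal Java sketch of writing such a file: UTF-16 LE with an explicit BOM and tab-separated values, which, per the answers above, Excel opens with both the encoding and the columns intact. The file name and data are placeholders.

import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

// Minimal sketch: UTF-16 LE output with an explicit BOM and tab delimiters.
public final class Utf16CsvWriter {

    public static void main(String[] args) throws IOException {
        try (Writer w = new OutputStreamWriter(
                Files.newOutputStream(Paths.get("export.csv")),   // placeholder path
                StandardCharsets.UTF_16LE)) {
            w.write('\uFEFF');                                    // BOM, written explicitly
            w.write("Some\tHeader\tColumns\n");
            w.write("Wîth\tFàncÿ\tStûff\n");
        }
    }
}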