I'm using the FileUpload server control to upload an HTML document previously saved (as Web Page, Filtered) from MS Word. The charset is windows-1252. The document has smart (curly) quotation marks as well as regular quotes. It also has some apparently blank spaces that, on closer inspection, are characters other than the normal TAB or SPACE.
When capturing the file content in a StreamReader, those special characters are translated to question marks. I assume it's because the default encoding is UTF-8 and the file is Unicode.
I went ahead and created the StreamReader using Unicode encoding, then replaced all the unwanted characters with the correct ones (code that I actually found on Stack Overflow). This seems to work... except that I can't convert the string back to UTF-8 to display it in an asp:Literal. The code is there and it's supposed to work... but the output of ConvertToASCII is unreadable.
Please see the code below:
protected void btnUpload_Click(object sender, EventArgs e)
{
    StreamReader sreader;
    if (uplSOWDoc.HasFile)
    {
        try
        {
            if (uplSOWDoc.PostedFile.ContentType == "text/html" || uplSOWDoc.PostedFile.ContentType == "text/plain")
            {
                sreader = new StreamReader(uplSOWDoc.FileContent, Encoding.Unicode);
                string sowText = sreader.ReadToEnd();
                sowLiteral.Text = ConvertToASCII(sowText);
                lblUploadResults.Text = "File loaded successfully.";
            }
            else
                lblUploadResults.Text = "Upload failed. Just text or html files are allowed.";
        }
        catch (Exception ex)
        {
            lblUploadResults.Text = ex.Message;
        }
    }
}
private string ConvertToASCII(string source)
{
    if (source.IndexOf('\u2013') > -1) source = source.Replace('\u2013', '-');
    if (source.IndexOf('\u2014') > -1) source = source.Replace('\u2014', '-');
    if (source.IndexOf('\u2015') > -1) source = source.Replace('\u2015', '-');
    if (source.IndexOf('\u2017') > -1) source = source.Replace('\u2017', '_');
    if (source.IndexOf('\u2018') > -1) source = source.Replace('\u2018', '\'');
    if (source.IndexOf('\u2019') > -1) source = source.Replace('\u2019', '\'');
    if (source.IndexOf('\u201a') > -1) source = source.Replace('\u201a', ',');
    if (source.IndexOf('\u201b') > -1) source = source.Replace('\u201b', '\'');
    if (source.IndexOf('\u201c') > -1) source = source.Replace('\u201c', '\"');
    if (source.IndexOf('\u201d') > -1) source = source.Replace('\u201d', '\"');
    if (source.IndexOf('\u201e') > -1) source = source.Replace('\u201e', '\"');
    if (source.IndexOf('\u2026') > -1) source = source.Replace("\u2026", "...");
    if (source.IndexOf('\u2032') > -1) source = source.Replace('\u2032', '\'');
    if (source.IndexOf('\u2033') > -1) source = source.Replace('\u2033', '\"');
    byte[] sourceBytes = Encoding.Unicode.GetBytes(source);
    byte[] targetBytes = Encoding.Convert(Encoding.Unicode, Encoding.ASCII, sourceBytes);
    char[] asciiChars = new char[Encoding.ASCII.GetCharCount(targetBytes, 0, targetBytes.Length)];
    Encoding.ASCII.GetChars(targetBytes, 0, targetBytes.Length, asciiChars, 0);
    string result = new string(asciiChars);
    return result;
}
Also, as I said before, there are some more "transparent" characters that seem to correspond to where the Word doc has numbering indentation, and I have no idea how to capture their Unicode values in order to replace them... so if you have any tips, please let me know.
Thanks a lot in advance!!
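To track down those "transparent" characters, one option is to log the code point of everything outside the ASCII range; a minimal diagnostic sketch (the DumpNonAscii name is just illustrative):
using System.Diagnostics;
private static void DumpNonAscii(string text)
{
    foreach (char c in text)
    {
        if (c > 127) // anything beyond plain ASCII
        {
            // Prints e.g. "U+00A0 (160)"; Word exports often use
            // U+00A0 (no-break space) where indentation appears.
            Debug.WriteLine(string.Format("U+{0:X4} ({1})", (int)c, (int)c));
        }
    }
}
Run the uploaded text through this and the output will tell you exactly which characters to deal with, instead of guessing at them one by one.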
According to the StreamReader documentation on MSDN:
The StreamReader object attempts to detect the encoding by looking at the first three bytes of the stream. It automatically recognizes UTF-8, little-endian Unicode, and big-endian Unicode text if the file starts with the appropriate byte order marks. Otherwise, the user-provided encoding is used.
Therefore, if your uploaded file's charset is windows-1252, then your line:
sreader = new StreamReader(uplSOWDoc.FileContent, Encoding.Unicode);
is incorrect, as the file content is not Unicode encoded. Instead, use:
sreader = new StreamReader(uplSOWDoc.FileContent,
Encoding.GetEncoding("Windows-1252"), true);
where the final Boolean parameter tells the reader to detect a byte order mark at the start of the stream, if one is present.
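Putting it together with your upload handler, the read would look something like this (a sketch based on your posted code):
if (uplSOWDoc.PostedFile.ContentType == "text/html" ||
    uplSOWDoc.PostedFile.ContentType == "text/plain")
{
    // Fall back to Windows-1252 unless a BOM says otherwise.
    using (StreamReader reader = new StreamReader(
        uplSOWDoc.FileContent,
        Encoding.GetEncoding("Windows-1252"),
        true)) // detectEncodingFromByteOrderMarks
    {
        sowLiteral.Text = reader.ReadToEnd();
        lblUploadResults.Text = "File loaded successfully.";
    }
}
If Word ever exports the file with a BOM, the reader will honor it; otherwise the Windows-1252 fallback applies.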
sreader = new StreamReader(uplSOWDoc.FileContent, Encoding.Unicode);
Congratulations, you are the one millionth coder to get bitten by “Encoding.Unicode”.
There is no such thing as the “Unicode encoding”. Unicode is the character set, which has many different encodings.
Encoding.Unicode is actually the specific encoding UTF-16LE, in which characters are encoded as UTF-16 “code units” and then each 16-bit code unit is written to bytes in a little-endian order. This is the native in-memory Unicode string format for Windows NT, but you almost never want to use it for reading or writing files. Being a 2-byte-per-unit encoding, it isn't ASCII-compatible, and it's not very efficient for storage or on the wire.
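You can see the difference at the byte level with a quick illustration (resulting byte values shown in the comments):
// "A" (U+0041) and a right single quote (U+2019) in each encoding:
byte[] a16 = Encoding.Unicode.GetBytes("A");      // { 0x41, 0x00 }  - two bytes, little-endian
byte[] a8  = Encoding.UTF8.GetBytes("A");         // { 0x41 }        - single ASCII byte
byte[] q16 = Encoding.Unicode.GetBytes("\u2019"); // { 0x19, 0x20 }
byte[] q8  = Encoding.UTF8.GetBytes("\u2019");    // { 0xE2, 0x80, 0x99 }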
These days UTF-8 is a much more common encoding used for Unicode text. But Microsoft's misnaming of UTF-16LE as “Unicode” continues to confuse and fool users who just want to “support Unicode”. As Encoding.Unicode is a non-ASCII-compatible encoding, trying to read files in an ASCII-superset encoding (such as UTF-8 or a Windows default code page like 1252 Western European) will make an enormous illegible mess of everything, not just the non-ASCII characters.
In this case the encoding your file is stored in is Windows code page 1252. So read it with:
sreader = new StreamReader(uplSOWDoc.FileContent, Encoding.GetEncoding(1252));
I'd leave it at that. Don't bother trying to “convert to ASCII”. Those smart quotes are perfectly good characters and should be supported like any other Unicode character; if you are having problems displaying smart quotes you are probably mangling all other non-ASCII characters too. Best fix the problem that's causing that to happen, rather than try to avoid it for just a few common cases.
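To see why the round-trip through ASCII mangles things: Encoding.ASCII silently replaces every character above U+007F with a question mark, so the conversion is lossy by design. A quick illustration:
// The curly quote survives as long as you stay in .NET strings,
// but any trip through ASCII replaces it with '?':
string s = "It\u2019s";                        // "It's" with a smart quote
byte[] bytes = Encoding.ASCII.GetBytes(s);     // unmappable chars become 0x3F
string lossy = Encoding.ASCII.GetString(bytes);
// lossy == "It?s" - the original character is gone for good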