Consider:
public static void ConvertFileToUnicode1252(string filePath, Encoding srcEncoding)
{
    try
    {
        StreamReader fileStream = new StreamReader(filePath);
        Encoding targetEncoding = Encoding.GetEncoding(1252);
        string fileContent = fileStream.ReadToEnd();
        fileStream.Close();

        // Saving file as ANSI 1252
        Byte[] srcBytes = srcEncoding.GetBytes(fileContent);
        Byte[] ansiBytes = Encoding.Convert(srcEncoding, targetEncoding, srcBytes);
        string ansiContent = targetEncoding.GetString(ansiBytes);

        // Now writes contents to file again
        StreamWriter ansiWriter = new StreamWriter(filePath, false);
        ansiWriter.Write(ansiContent);
        ansiWriter.Close();
        //TODO -- log success details
    }
    catch (Exception e)
    {
        throw e;
        // TODO -- log failure details
    }
}
The above code throws an OutOfMemoryException for large files; it only works for small ones.
I think the most elegant solution is to keep using a StreamReader and a StreamWriter, but to read blocks of characters instead of everything at once or line by line. This approach doesn't arbitrarily assume the file consists of lines of manageable length, and it also doesn't break with multi-byte character encodings.
public static void ConvertFileEncoding(string srcFile, Encoding srcEncoding, string destFile, Encoding destEncoding)
{
    using (var reader = new StreamReader(srcFile, srcEncoding))
    using (var writer = new StreamWriter(destFile, false, destEncoding))
    {
        // Read and write in fixed-size blocks of characters, so only one
        // buffer's worth of the file is ever held in memory at a time.
        char[] buf = new char[4096];
        while (true)
        {
            int count = reader.Read(buf, 0, buf.Length);
            if (count == 0)
                break;
            writer.Write(buf, 0, count);
        }
    }
}
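A call for the original UTF-8-to-1252 scenario would look something like this (the file names and UTF-8 source encoding are just illustrative; note that on .NET Core / .NET 5+, code page 1252 additionally requires the System.Text.Encoding.CodePages package):

Encoding.RegisterProvider(CodePagesEncodingProvider.Instance); // .NET Core / .NET 5+ only
ConvertFileEncoding("data.txt", Encoding.UTF8, "data-1252.txt", Encoding.GetEncoding(1252));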
(I wish StreamReader had a CopyTo method like Stream does; if it did, this would essentially be a one-liner!)
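For what it's worth, here is a minimal sketch of such an extension method; CopyTo on TextReader is not a BCL API, so the name here is hypothetical:

using System.IO;

public static class TextReaderExtensions
{
    // Hypothetical CopyTo for TextReader: copies characters in blocks
    // until the reader is exhausted.
    public static void CopyTo(this TextReader reader, TextWriter writer, int bufferSize = 4096)
    {
        char[] buf = new char[bufferSize];
        int count;
        while ((count = reader.Read(buf, 0, buf.Length)) > 0)
            writer.Write(buf, 0, count);
    }
}

With that in place, the body of ConvertFileEncoding collapses to reader.CopyTo(writer);.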
Don't use ReadToEnd; read the file line by line or X characters at a time instead. If you read to the end, you put the whole file into memory at once.
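A minimal sketch of the line-by-line variant (this assumes individual lines fit comfortably in memory, and WriteLine normalizes line endings to the writer's NewLine):

using System.IO;
using System.Text;

public static void ConvertFileLineByLine(string srcFile, Encoding srcEncoding, string destFile, Encoding destEncoding)
{
    using (var reader = new StreamReader(srcFile, srcEncoding))
    using (var writer = new StreamWriter(destFile, false, destEncoding))
    {
        // Only one line is held in memory at a time.
        string line;
        while ((line = reader.ReadLine()) != null)
            writer.WriteLine(line);
    }
}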