Writing StringBuilder to a file asynchronously

This code takes control of a file, writes a stream to it, and releases it. It handles requests from asynchronous operations, which may arrive at any time.
The Filepath is set per class instance (so the lock object is per instance), but there is potential for conflict, since these instances may share a Filepath. That sort of conflict, as well as any conflict coming from outside the class instance, would be dealt with by retries.
Is this code suitable for its purpose? Is there a better way to handle this that means less (or no) reliance on the catch and retry mechanic?
Also, how do I avoid catching exceptions that have occurred for other reasons?
public string Filepath { get; set; }

private Object locker = new Object();

public async Task WriteToFile(StringBuilder text)
{
    int timeOut = 100;
    Stopwatch stopwatch = new Stopwatch();
    stopwatch.Start();

    while (true)
    {
        try
        {
            // Wait for resource to be free
            lock (locker)
            {
                using (FileStream file = new FileStream(Filepath, FileMode.Append, FileAccess.Write, FileShare.Read))
                using (StreamWriter writer = new StreamWriter(file, Encoding.Unicode))
                {
                    writer.Write(text.ToString());
                }
            }
            break;
        }
        catch
        {
            // File not available, conflict with other class instances or application
        }

        if (stopwatch.ElapsedMilliseconds > timeOut)
        {
            // Give up.
            break;
        }

        // Wait and retry
        await Task.Delay(5);
    }
    stopwatch.Stop();
}
You can write to the file in a thread-safe manner by taking a mutual-exclusion (mutex) lock around the write, using the threading primitives.
Thread safety is a computer programming concept applicable to multi-threaded code. Thread-safe code only manipulates shared data structures in a manner that ensures that all threads behave properly and fulfill their design specifications without unintended interaction.
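As a minimal sketch of that idea, here is one way a named System.Threading.Mutex could be used so that the exclusion also holds across processes (the mutex name, class name, and file path below are hypothetical, not anything from the question):

using System.IO;
using System.Text;
using System.Threading;

class MutexFileWriter
{
    // Every process that opens a mutex with the same name shares the same OS object.
    private static readonly Mutex _mutex = new Mutex(false, "MyAppLogFileMutex");

    public static void Append(string path, StringBuilder text)
    {
        _mutex.WaitOne();   // block until no other thread or process holds the mutex
        try
        {
            File.AppendAllText(path, text.ToString(), Encoding.Unicode);
        }
        finally
        {
            _mutex.ReleaseMutex();   // always release, even if the write throws
        }
    }
}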
It's fine, assuming nothing else is writing to the file at the same time; if something is, you may not be able to open the file (or might see partial writes). As per the documentation of File: "Any public static (Shared in Visual Basic) members of this type are thread safe."
How you approach this is going to depend a lot on how frequently you're writing. If you're writing a relatively small amount of text fairly infrequently, then just use a static lock and be done with it. That might be your best bet in any case because the disk drive can only satisfy one request at a time. Assuming that all of your output files are on the same drive (perhaps not a fair assumption, but bear with me), there's not going to be much difference between locking at the application level and the lock that's done at the OS level.
So if you declare locker as:
static object locker = new object();
You'll be assured that there are no conflicts with other threads in your program.
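As a rough sketch of what the write might then look like (this drops the posted retry loop and async signature just to show the locking; error handling is discussed below):

private static readonly object locker = new object();

public void WriteToFile(StringBuilder text)
{
    // Only one thread in the whole process can be inside this block at a time.
    lock (locker)
    {
        File.AppendAllText(Filepath, text.ToString(), Encoding.Unicode);
    }
}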
If you want this thing to be bulletproof (or at least reasonably so), you can't get away from catching exceptions. Bad things can happen. You must handle exceptions in some way; what you do in the face of an error is something else entirely. You'll probably want to retry a few times if the file is locked. If you get a bad path or filename, a full disk, or any of a number of other errors, you probably want to kill the program. Again, that's up to you. But you can't avoid exception handling unless you're okay with the program crashing on error.
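One way to avoid swallowing unrelated errors is to catch only IOException and check whether it is actually a sharing or lock violation before retrying; anything else propagates to the caller. The sketch below assumes .NET 4.5+ (where Exception.HResult is publicly readable) and C# 6 exception filters, and the HResult check is a common convention rather than anything from the posted code:

private static bool IsFileLocked(IOException ex)
{
    // Win32 codes 32 (ERROR_SHARING_VIOLATION) and 33 (ERROR_LOCK_VIOLATION)
    // sit in the low 16 bits of HResult.
    int code = ex.HResult & 0xFFFF;
    return code == 32 || code == 33;
}

public async Task WriteToFile(StringBuilder text)
{
    const int timeOut = 100;
    Stopwatch stopwatch = Stopwatch.StartNew();
    while (true)
    {
        try
        {
            File.AppendAllText(Filepath, text.ToString(), Encoding.Unicode);
            return;
        }
        catch (IOException ex) when (IsFileLocked(ex))
        {
            // Only sharing/lock violations are retried; bad paths, disk full,
            // and everything else propagate to the caller.
            if (stopwatch.ElapsedMilliseconds > timeOut)
                throw;
        }
        // Wait and retry.
        await Task.Delay(5);
    }
}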
By the way, you can replace all of this code:
using (FileStream file = new FileStream(Filepath, FileMode.Append, FileAccess.Write, FileShare.Read))
using (StreamWriter writer = new StreamWriter(file, Encoding.Unicode))
{
    writer.Write(text.ToString());
}
With a single call:
File.AppendAllText(Filepath, text.ToString());
Assuming you're using .NET 4.0 or later. See File.AppendAllText.
One other way you could handle this is to have the threads write their messages to a queue, and have a dedicated thread that services that queue. You'd have a BlockingCollection of messages and associated file paths. For example:
class LogMessage
{
    public string Filepath { get; set; }
    public string Text { get; set; }
}

BlockingCollection<LogMessage> _logMessages = new BlockingCollection<LogMessage>();
Your threads write data to that queue:
_logMessages.Add(new LogMessage { Filepath = "foo.log", Text = "this is a test" });
You start a long-running background task that does nothing but service that queue:
foreach (var msg in _logMessages.GetConsumingEnumerable())
{
    // of course you'll want your exception handling in here
    File.AppendAllText(msg.Filepath, msg.Text);
}
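Starting and stopping that loop might look roughly like this (a sketch; the _consumer field name is made up, and calling CompleteAdding is what lets GetConsumingEnumerable finish once the queue drains):

// Start the consumer once, e.g. at application startup.
Task _consumer = Task.Factory.StartNew(
    () =>
    {
        foreach (var msg in _logMessages.GetConsumingEnumerable())
        {
            File.AppendAllText(msg.Filepath, msg.Text);
        }
    },
    CancellationToken.None,
    TaskCreationOptions.LongRunning,
    TaskScheduler.Default);

// On shutdown: signal that no more messages will be added,
// then let the consumer drain the queue and exit.
_logMessages.CompleteAdding();
_consumer.Wait();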
Your potential risk here is that threads create messages too fast, causing the queue to grow without bound because the consumer can't keep up. Whether that's a real risk in your application is something only you can say. If you think it might be a risk, you can put a maximum size (number of entries) on the queue so that if the queue size exceeds that value, producers will wait until there is room in the queue before they can add.
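BlockingCollection supports this directly: pass a bounded capacity to the constructor and Add will block whenever the queue is full, applying back-pressure to the producers (the capacity value below is just an illustration):

// Producers calling Add will block once 10,000 messages are queued,
// instead of letting the queue grow without bound.
BlockingCollection<LogMessage> _logMessages =
    new BlockingCollection<LogMessage>(boundedCapacity: 10000);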