Lock(file) for write between pages?

Tags:

c#

asp.net

I have an application that uses an XML file to store certain data that does not need to be in the DB. This file might, though rarely, be written to by two pages at the same time. This creates an issue, most likely causing one of the pages to crash. I was wondering if there is a way to have a page render wait until the other is done writing to the file.

I am not sure if I can assume that every page render runs in its own thread; if it does, can I halt that thread, like a lock() statement, to wait for the resource to be available? If so, how can I implement a lock() on a file (since it's not a memory segment)?

If it's not threaded or I cannot use lock(), what other ways are there to make sure the file is written to, but if someone else is using that file, wait for it to become available?

There are a lot of methods for managing write order and rights, but for this it is simple: whoever comes second writes after the first is done. I am not so concerned with write collisions at this point, but concurrent writing is something that needs to be addressed.

Any help is greatly appreciated!

Serguei Fedorov asked Feb 05 '26 13:02


2 Answers

EDIT: Running some more tests (on mono), I'm not so sure this is the best approach.

I didn't manage to crash anything, but execution just pauses for a few seconds after Run(), prior to yielding back to the caller. So I suspect that's not a good sign in favor of this code snippet.

That being said, the tested scenario is far from realistic.


Another option is to open the file in FileShare.ReadWrite mode and let the OS take care of locking.

    public void Run()
    {
        var file = @"/home/me/test.txt";
        var threads = new Thread[3];

        for (int i = 0; i < threads.Length; i++)
        {
            int j = i; // capture the loop variable so each closure sees its own value
            threads[j] = new Thread(s =>
            {
                Console.WriteLine("thread " + j + " is running...");
                using (var stream = File.Open(file, FileMode.OpenOrCreate,
                       FileAccess.ReadWrite, FileShare.ReadWrite))
                {
                    Console.WriteLine("thread " + j + " is holding the file!");
                    var data = Encoding.ASCII.GetBytes("hello, I'm stream " + j);
                    stream.Write(data, 0, data.Length);
                    Thread.Sleep(1000);
                }
            });
        }

        foreach (var t in threads)
            t.Start();

        foreach (var t in threads)
            t.Join();

        Console.WriteLine(File.ReadAllText(file));
    }

Output:

thread 1 is running...
thread 0 is running...
thread 2 is running...
thread 2 is holding the file!
thread 1 is holding the file!
thread 0 is holding the file!
hello, I'm stream 0
avishayp answered Feb 08 '26 04:02


If it's all on a single machine, you could use a named Semaphore (http://msdn.microsoft.com/en-us/library/system.threading.semaphore.aspx):

private static Semaphore _fileSemaphore = new Semaphore(1, 1, "myFileGlobalSemaphore");

// later...

_fileSemaphore.WaitOne(); // acquire before the try, so Release only runs if the wait succeeded
try
{
    // do stuff..
}
finally
{
    _fileSemaphore.Release();
}

That's effectively a system-wide lock. It's not great, though; I would suggest a DB instead. Also, other things might cause file access to fail. You could implement something that creates a FileStream with the appropriate locking mode, which will fail if two sources try to open the file for writing at once, but it won't give you a lock-like wait (the lock statement uses the Monitor class internally).
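A minimal sketch of that FileStream idea (the helper name, retry count, and back-off delay are my own, not from the answer): open the file with FileShare.None so any concurrent open throws IOException, and retry until exclusive access is obtained.

```csharp
using System;
using System.IO;
using System.Threading;

static class ExclusiveFileWriter
{
    // Hypothetical helper: keeps retrying until the file can be opened
    // exclusively, then writes the text. Timings are illustrative.
    public static void Write(string path, string text, int maxAttempts = 10)
    {
        for (int attempt = 0; attempt < maxAttempts; attempt++)
        {
            try
            {
                // FileShare.None: any other process or thread that tries to
                // open this file while we hold it gets an IOException.
                using (var stream = new FileStream(path, FileMode.OpenOrCreate,
                       FileAccess.Write, FileShare.None))
                using (var writer = new StreamWriter(stream))
                {
                    stream.SetLength(0); // truncate any previous contents
                    writer.Write(text);
                    return;
                }
            }
            catch (IOException)
            {
                // Someone else holds the file; back off briefly and retry.
                Thread.Sleep(100);
            }
        }
        throw new TimeoutException("Could not get exclusive access to " + path);
    }
}
```

This turns the OS-level open failure into a polling wait, which is cruder than Monitor-based blocking but works across processes, not just threads.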

Btw: "does not need to be stored in the DB" — the fact that you're asking this question suggests it really should be. If that's a pain to implement, you might want to rethink your data access strategy. Tools like Entity Framework with Code First make it very easy to "chuck" stuff into the DB.

Kieren Johnstone answered Feb 08 '26 02:02

