When is ReaderWriterLockSlim better than a simple lock?

I'm doing a very silly benchmark of ReaderWriterLockSlim with the code below, where reads happen 4x more often than writes:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;

class Program
{
    static void Main()
    {
        ISynchro[] test = { new Locked(), new RWLocked() };

        Stopwatch sw = new Stopwatch();

        foreach ( var isynchro in test )
        {
            sw.Reset();
            sw.Start();
            Thread w1 = new Thread( new ParameterizedThreadStart( WriteThread ) );
            w1.Start( isynchro );

            Thread w2 = new Thread( new ParameterizedThreadStart( WriteThread ) );
            w2.Start( isynchro );

            Thread r1 = new Thread( new ParameterizedThreadStart( ReadThread ) );
            r1.Start( isynchro );

            Thread r2 = new Thread( new ParameterizedThreadStart( ReadThread ) );
            r2.Start( isynchro );

            w1.Join();
            w2.Join();
            r1.Join();
            r2.Join();
            sw.Stop();

            Console.WriteLine( isynchro.ToString() + ": " + sw.ElapsedMilliseconds.ToString() + "ms." );
        }

        Console.WriteLine( "End" );
        Console.ReadKey( true );
    }

    static void ReadThread(Object o)
    {
        ISynchro synchro = (ISynchro)o;

        for ( int i = 0; i < 500; i++ )
        {
            Int32? value = synchro.Get( i );
            Thread.Sleep( 50 );
        }
    }

    static void WriteThread( Object o )
    {
        ISynchro synchro = (ISynchro)o;

        for ( int i = 0; i < 125; i++ )
        {
            synchro.Add( i );
            Thread.Sleep( 200 );
        }
    }

}

interface ISynchro
{
    void Add( Int32 value );
    Int32? Get( Int32 index );
}

class Locked:List<Int32>, ISynchro
{
    readonly Object locker = new object();

    #region ISynchro Members

    public new void Add( int value )
    {
        lock ( locker ) 
            base.Add( value );
    }

    public int? Get( int index )
    {
        lock ( locker )
        {
            if ( this.Count <= index )
                return null;
            return this[ index ];
        }
    }

    #endregion
    public override string ToString()
    {
        return "Locked";
    }
}

class RWLocked : List<Int32>, ISynchro
{
    readonly ReaderWriterLockSlim locker = new ReaderWriterLockSlim();

    #region ISynchro Members

    public new void Add( int value )
    {
        // Acquire the lock before the try block so the finally
        // only runs once the lock is actually held.
        locker.EnterWriteLock();
        try
        {
            base.Add( value );
        }
        finally
        {
            locker.ExitWriteLock();
        }
    }

    public int? Get( int index )
    {
        locker.EnterReadLock();
        try
        {
            if ( this.Count <= index )
                return null;
            return this[ index ];
        }
        finally
        {
            locker.ExitReadLock();
        }
    }

    #endregion

    public override string ToString()
    {
        return "RW Locked";
    }
}

But both perform in more or less the same way:

Locked: 25003ms.
RW Locked: 25002ms.
End

Even when reads happen 20 times more often than writes, the performance is still (almost) the same.

Am I doing something wrong here?

Kind regards.

— asked by vtortola, Nov 18 '10

4 Answers

In your example, the sleeps mean that there is generally no contention. An uncontended lock is very fast. For this to matter you would need a contended lock; if there are writes in that contention, the two should perform about the same (lock may even be quicker) - but if it is mostly reads (with writes contending only rarely), I would expect ReaderWriterLockSlim to out-perform lock.

Personally, I prefer another strategy here, using reference-swapping - so reads can always read without ever checking / locking / etc. Writes make their change to a cloned copy, then use Interlocked.CompareExchange to swap the reference (re-applying their change if another thread mutated the reference in the interim).
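
A minimal sketch of what that reference-swapping strategy could look like against the question's ISynchro interface; the class name, field name and retry loop here are my own illustration (not Marc's code), and memory-model details such as marking the field volatile are glossed over:

class CopyOnWriteSynchro : ISynchro
{
    // Readers only ever see a fully built list, so no read lock is needed.
    List<Int32> items = new List<Int32>();

    public void Add( Int32 value )
    {
        while ( true )
        {
            List<Int32> snapshot = items;
            List<Int32> copy = new List<Int32>( snapshot );
            copy.Add( value );

            // Publish the new list only if no other writer swapped it in the
            // meantime; otherwise loop and re-apply the change to the fresh list.
            if ( Interlocked.CompareExchange( ref items, copy, snapshot ) == snapshot )
                return;
        }
    }

    public Int32? Get( Int32 index )
    {
        List<Int32> snapshot = items;   // one read of the reference, no locking
        if ( snapshot.Count <= index )
            return null;
        return snapshot[ index ];
    }
}

Writers pay for a full copy on every Add, so this trades write cost for lock-free reads; it pays off when reads dominate.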

— Marc Gravell

My own tests indicate that ReaderWriterLockSlim has about 5x the overhead of a plain lock. That means that for the RWLS to outperform a plain old lock, the following conditions generally need to hold:

  • The readers significantly outnumber the writers.
  • The lock would have to be held long enough to overcome the additional overhead.

In most real applications these two conditions are not met, so the additional overhead is never recovered. In your code specifically, the locks are held for such a short period of time that the lock overhead will probably be the dominating factor. If you were to move those Thread.Sleep calls inside the lock, you would probably get a different result.
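
If you want a rough feel for that per-acquisition overhead on your own machine, one crude approach is to acquire and release each lock many times with nothing inside. MeasureOverhead below is just an illustrative helper, and the absolute numbers will vary by hardware and runtime; only the ratio is interesting:

static void MeasureOverhead()
{
    const int iterations = 10000000;
    Object monitor = new Object();
    ReaderWriterLockSlim rwls = new ReaderWriterLockSlim();

    Stopwatch sw = new Stopwatch();

    sw.Start();
    for ( int i = 0; i < iterations; i++ )
    {
        lock ( monitor ) { }    // take and release the monitor, no work inside
    }
    sw.Stop();
    Console.WriteLine( "lock:             " + sw.ElapsedMilliseconds + "ms" );

    sw.Reset();
    sw.Start();
    for ( int i = 0; i < iterations; i++ )
    {
        rwls.EnterReadLock();   // take and release the read lock, no work inside
        rwls.ExitReadLock();
    }
    sw.Stop();
    Console.WriteLine( "RWLS (read side): " + sw.ElapsedMilliseconds + "ms" );
}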

— Brian Gideon

There's no contention in this program. The Get and Add methods execute in a few nanoseconds. The odds that multiple threads hit those methods at exactly the same time are vanishingly small.

Put a Thread.Sleep(1) call in them and remove the sleep from the threads to see the difference.
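
Applied to the Locked class from the question (the same change would go in RWLocked), that suggestion would look roughly like this, with the Thread.Sleep calls removed from ReadThread and WriteThread:

public new void Add( int value )
{
    lock ( locker )
    {
        Thread.Sleep( 1 );   // simulate work done while holding the lock
        base.Add( value );
    }
}

public int? Get( int index )
{
    lock ( locker )
    {
        Thread.Sleep( 1 );   // simulate work done while holding the lock
        if ( this.Count <= index )
            return null;
        return this[ index ];
    }
}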

— Hans Passant

Edit 2: Simply removing the Thread.Sleep calls from ReadThread and WriteThread, I saw Locked outperform RWLocked. I believe Hans hit the nail on the head here; your methods are too fast and create no contention. When I added Thread.Sleep(1) to the Get and Add methods of Locked and RWLocked (and used 4 read threads against 1 write thread), RWLocked beat the pants off of Locked.


Edit: OK, if I had actually been thinking when I first posted this answer, I would've at least realized why you put the Thread.Sleep calls in there: you were trying to reproduce the scenario of reads happening more frequently than writes. This is just not the right way to do that. Instead, I would introduce extra overhead in your Add and Get methods to create a greater chance of contention (as Hans suggested), create more read threads than write threads (to ensure reads happen more often than writes), and remove the Thread.Sleep calls from ReadThread and WriteThread (which actually reduce contention, achieving the opposite of what you want).


I like what you've done so far. But here are a few issues I see right off the bat:

  1. Why the Thread.Sleep calls? These are just inflating your execution times by a constant amount, which is going to artificially make performance results converge.
  2. I also wouldn't include the creation of the new Thread objects in the code measured by your Stopwatch; a Thread is not a trivial object to create. (A sketch of the restructured timing follows below.)

Whether you will see a significant difference once you address the two issues above, I don't know. But I believe they should be addressed before the discussion continues.
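
As a rough sketch of point 2, the body of the benchmark's foreach loop could construct the threads before starting the stopwatch, so only the actual run is measured. This is my restructuring of the question's code, not something Dan posted:

// Build the thread objects first so their construction isn't timed.
Thread w1 = new Thread( WriteThread );
Thread w2 = new Thread( WriteThread );
Thread r1 = new Thread( ReadThread );
Thread r2 = new Thread( ReadThread );

sw.Reset();
sw.Start();

w1.Start( isynchro );
w2.Start( isynchro );
r1.Start( isynchro );
r2.Start( isynchro );

w1.Join();
w2.Join();
r1.Join();
r2.Join();

sw.Stop();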

— Dan Tao