When multiple threads request a lock on the same object, does the CLR guarantee that the locks will be acquired in the order they were requested?
I wrote up a test to see if this was true, and it seems to indicate yes, but I'm not sure if this is definitive.
using System;
using System.Collections.Generic;
using System.Threading;

class LockSequence
{
    private static readonly object _lock = new object();

    private static DateTime _dueTime;

    public static void Test()
    {
        var states = new List<State>();
        _dueTime = DateTime.Now.AddSeconds(5);

        for (int i = 0; i < 10; i++)
        {
            var state = new State { Index = i };
            ThreadPool.QueueUserWorkItem(Go, state);
            states.Add(state);
            Thread.Sleep(100);
        }

        // Wait for every worker to finish, then release the events.
        states.ForEach(s => s.Sync.WaitOne());
        states.ForEach(s => s.Sync.Close());
    }

    private static void Go(object state)
    {
        var s = (State) state;
        Console.WriteLine("Go entered: " + s.Index);

        lock (_lock)
        {
            Console.WriteLine("{0,2} got lock", s.Index);

            // Hold the lock until the shared due time so later threads
            // pile up waiting on it.
            if (_dueTime > DateTime.Now)
            {
                var time = _dueTime - DateTime.Now;
                Console.WriteLine("{0,2} sleeping for {1} ticks", s.Index, time.Ticks);
                Thread.Sleep(time);
            }

            Console.WriteLine("{0,2} exiting lock", s.Index);
        }

        s.Sync.Set();
    }

    private class State
    {
        public int Index;

        public readonly ManualResetEvent Sync = new ManualResetEvent(false);
    }
}
Prints:
Go entered: 0
0 got lock
0 sleeping for 49979998 ticks
Go entered: 1
Go entered: 2
Go entered: 3
Go entered: 4
Go entered: 5
Go entered: 6
Go entered: 7
Go entered: 8
Go entered: 9
0 exiting lock
1 got lock
1 sleeping for 5001 ticks
1 exiting lock
2 got lock
2 sleeping for 5001 ticks
2 exiting lock
3 got lock
3 sleeping for 5001 ticks
3 exiting lock
4 got lock
4 sleeping for 5001 ticks
4 exiting lock
5 got lock
5 sleeping for 5001 ticks
5 exiting lock
6 got lock
6 exiting lock
7 got lock
7 exiting lock
8 got lock
8 exiting lock
9 got lock
9 exiting lock
IIRC, it's highly likely to be in that order, but it's not guaranteed. I believe there are at least theoretically cases where a thread will be woken spuriously, note that it still doesn't have the lock, and go to the back of the queue. It's possible that's only for Wait/Notify, but I have a sneaking suspicion it's for locking as well.
I definitely wouldn't rely on it - if you need things to occur in a sequence, build up a Queue<T> or something similar.
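One way to read that advice, as a rough sketch: rather than hoping threads acquire the lock in the order they arrived, push the work into a FIFO queue and drain it from a single consumer, so ordering comes from the data structure instead of the lock. The OrderedWorker shape and names below are illustrative, not from the answer above; BlockingCollection<T> is backed by a ConcurrentQueue<T> by default, so items come out in the order they were added.

using System;
using System.Collections.Concurrent;
using System.Threading;

class OrderedWorker
{
    private readonly BlockingCollection<Action> _work = new BlockingCollection<Action>();
    private readonly Thread _consumer;

    public OrderedWorker()
    {
        _consumer = new Thread(() =>
        {
            // GetConsumingEnumerable yields items in enqueue order and
            // finishes once CompleteAdding is called and the queue drains.
            foreach (var action in _work.GetConsumingEnumerable())
                action();
        });
        _consumer.Start();
    }

    public void Enqueue(Action action)
    {
        _work.Add(action);
    }

    public void Shutdown()
    {
        _work.CompleteAdding();
        _consumer.Join();
    }
}

Each caller enqueues its work and returns immediately; the single consumer executes the items strictly in enqueue order, which is the guarantee the lock alone doesn't give you.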
EDIT: I've just found this within Joe Duffy's Concurrent Programming on Windows which basically agrees:
Because monitors use kernel objects internally, they exhibit the same roughly-FIFO behavior that the OS synchronization mechanisms also exhibit (described in the previous chapter). Monitors are unfair, so if another thread tries to acquire the lock before an awakened waiting thread tries to acquire the lock, the sneaky thread is permitted to acquire a lock.
The "roughly-FIFO" bit is what I was thinking of before, and the "sneaky thread" bit is further evidence that you shouldn't make assumptions about FIFO ordering.
Normal CLR locks are not guaranteed to be FIFO.
However, there is a QueuedLock class in this answer that provides guaranteed FIFO locking behavior.
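For reference, the usual way such a class works is a "ticket" scheme layered on Monitor: each caller takes a ticket on entry and waits until its number is served, so callers get the lock in ticket order. The sketch below illustrates that idea; it is not the exact code from the linked answer.

using System.Threading;

public sealed class QueuedLock
{
    private readonly object _innerLock = new object();
    private int _nextTicket;   // next ticket number to hand out
    private int _nowServing;   // ticket number currently allowed to proceed

    public void Enter()
    {
        lock (_innerLock)
        {
            int myTicket = _nextTicket++;
            // PulseAll wakes every waiter, but only the thread whose
            // ticket matches _nowServing leaves the loop.
            while (myTicket != _nowServing)
                Monitor.Wait(_innerLock);
        }
    }

    public void Exit()
    {
        lock (_innerLock)
        {
            _nowServing++;
            Monitor.PulseAll(_innerLock);
        }
    }
}

Typical usage wraps the critical section in try/finally so Exit always runs:

var queuedLock = new QueuedLock();
queuedLock.Enter();
try
{
    // work served in arrival (ticket) order
}
finally
{
    queuedLock.Exit();
}

Note that the FIFO guarantee in this sketch is relative to the order threads take their tickets inside Enter, and PulseAll wakes all waiters on every release, which is simple but can be costly with many waiters.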