I'm using a spin lock to protect a very small critical section. Contention happens very rarely so a spin lock is more appropriate than a regular mutex.
My current code is as follows, and assumes x86 and GCC:
volatile int exclusion = 0;

void lock() {
    while (__sync_lock_test_and_set(&exclusion, 1)) {
        // Do nothing. This GCC builtin instruction
        // ensures memory barrier.
    }
}

void unlock() {
    __sync_synchronize(); // Memory barrier.
    exclusion = 0;
}
So I'm wondering:

* Is it correct?
* Is it optimal?
* Should I use __sync_lock_release in unlock() instead of __sync_synchronize followed by a plain store? I'm not an expert on memory barriers, so I'm not sure whether it's okay for me to use this instead of __sync_synchronize.

Note that I do not care at all about contention. There may be 1, maybe 2 other threads trying to lock the spin lock once every few days.
The most basic spinlock works by using a boolean (or single bit) to indicate whether the lock is held or not. To acquire the lock, an atomic exchange operation is used to set the boolean to true; if the exchange returns false, the boolean was previously clear and the caller now holds the lock.
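As a minimal sketch of that boolean-plus-atomic-exchange pattern, here is what it looks like with C11's <stdatomic.h> (the names basic_lock and basic_unlock are illustrative, not from the original post):

#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void basic_lock(void) {
    // Atomically set the flag and get its previous value;
    // loop until the previous value was "clear" (lock was free).
    while (atomic_flag_test_and_set_explicit(&lock_flag, memory_order_acquire)) {
        // Spin.
    }
}

void basic_unlock(void) {
    // A release store publishes the critical section's writes
    // to the next thread that acquires the lock.
    atomic_flag_clear_explicit(&lock_flag, memory_order_release);
}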
Generally, spinlocks can be grouped into fair and unfair as well as scalable and unscalable implementations.
However, spinlocks become wasteful if held for longer durations, as they may prevent other threads from running and require rescheduling. The longer a thread holds a lock, the greater the risk that the thread will be interrupted by the OS scheduler while holding the lock.
From a latency perspective, this spinlock is as good as it gets: acquiring an uncontended lock costs a single atomic operation, and there is no way to do better than that. Regarding delay (the time between the lock being released and a waiting thread acquiring it), this implementation can also perform well, since a waiter is continuously retrying the atomic operation and will observe the release quickly.
Looks fine to me. Btw, here is the textbook implementation that is more efficient even in the contended case.
void lock(volatile int *exclusion) {
    while (__sync_lock_test_and_set(exclusion, 1)) {
        // Spin on a plain read ("test and test-and-set") until the
        // lock looks free, then retry the atomic exchange.
        while (*exclusion)
            ;
    }
}
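A quick usage sketch, with illustrative variable names, pairing this lock() with the question's own unlock sequence:

volatile int exclusion = 0;
int shared_counter = 0;   // Illustrative shared state.

void increment(void) {
    lock(&exclusion);     // Spins until the lock is acquired.
    shared_counter++;     // Very small critical section.
    __sync_synchronize(); // Barrier before releasing, as in the question.
    exclusion = 0;        // Release the lock.
}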
So I'm wondering:
* Is it correct?
In the context mentioned, I would say yes.
* Is it optimal?
That's a loaded question. By reinventing the wheel you are also reinventing a lot of problems that have been solved by other implementations:
* I'd expect a waste loop on failure, in which you aren't hammering the lock word with atomic operations (the textbook implementation above does this by spinning on a plain read).
* Use of a full barrier in the unlock is overkill; the unlock only needs release semantics (that's why you'd use __sync_lock_release, so that you'd get st1.rel on Itanium instead of mf, or a lwsync on PowerPC, ...). If you really only care about x86 or x86_64, the exact kind of barrier used here doesn't matter as much (but if you were to make the jump to Intel's Itanium for an HP-IPF port, then you wouldn't want this). Both unlock variants are sketched below.
* You don't have the pause instruction that you'd normally put inside your waste loop (a sketch follows this list).
* When there is contention you want to block on something: a futex, semop, or even a dumb sleep in desperation. If you really need the performance that a spinlock buys you, then the futex suggestion is probably a good one. If you need that performance badly enough to maintain this code, you have a lot of research to do. (A crude spin-then-yield fallback is the second sketch after this list.)
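For example, the waste loop with the pause hint might look like this sketch (assuming GCC on x86, where __builtin_ia32_pause() emits the pause instruction):

void lock(volatile int *exclusion) {
    while (__sync_lock_test_and_set(exclusion, 1)) {
        while (*exclusion) {
            __builtin_ia32_pause(); // Tell the CPU we are spin-waiting.
        }
    }
}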
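And a crude spin-then-yield fallback, purely as a sketch (the spin bound is an arbitrary illustrative value, and a real implementation would block on a futex instead):

#include <sched.h>

void lock_with_fallback(volatile int *exclusion) {
    while (__sync_lock_test_and_set(exclusion, 1)) {
        int spins = 1000; // Arbitrary bound, not a tuned constant.
        while (*exclusion && spins--)
            __builtin_ia32_pause(); // GCC x86 pause hint, as above.
        if (*exclusion)
            sched_yield(); // Stop burning CPU; let the lock holder run.
    }
}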
Note that there was a comment saying that the release barrier wasn't required. That isn't true even on x86, because the release barrier also serves as an instruction to the compiler not to shuffle other memory accesses around the "barrier". Very much like what you'd get if you used asm("" ::: "memory").
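Concretely, the two unlock variants under discussion would look like the sketch below. On x86 the first compiles to an mfence followed by a plain store, while the second compiles to just a plain store (the builtin still acts as a compiler barrier); on Itanium or PowerPC the second emits the cheaper release form:

// Full-barrier unlock, as in the question.
void unlock_full(volatile int *exclusion) {
    __sync_synchronize(); // Full memory barrier (mfence on x86).
    *exclusion = 0;
}

// Release-only unlock; release semantics are all an unlock needs.
void unlock_release(volatile int *exclusion) {
    __sync_lock_release(exclusion); // Release-semantics store of 0.
}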
* On compare and swap
On x86, __sync_lock_test_and_set will map to an xchg instruction, which has an implied lock prefix. This is definitely the most compact generated code (especially if you use a byte for the "lock word" instead of an int), but it is no less correct than if you used LOCK CMPXCHG. Compare and swap can be used for fancier algorithms (like putting a non-zero pointer to metadata for the first "waiter" into the lock word on failure).
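For reference, the same acquire loop written with compare and swap might look like this sketch (the name lock_cas is mine; __sync_bool_compare_and_swap atomically replaces 0 with 1 only if the lock word still holds 0, returning true on success):

void lock_cas(volatile int *exclusion) {
    // Try to atomically change 0 -> 1; true means we took the lock.
    while (!__sync_bool_compare_and_swap(exclusion, 0, 1)) {
        // Wait until the lock looks free before retrying the CAS.
        while (*exclusion)
            ;
    }
}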