From what I can gather:

KeAcquireSpinLock is equivalent to spin_lock_bh: the one raises IRQL to DISPATCH_LEVEL, the other masks bottom-half interrupts -- functionally the same. But while the NT variant returns the old IRQL, the Linux variant doesn't seem to store "were bottom halves already disabled" anywhere. Does this mean spin_unlock_bh always re-enables them?

KeAcquireInterruptSpinLock is like spin_lock_irqsave.

What is the NT equivalent of spin_lock?

If spin_unlock_bh always re-enables bottom halves (in NT-speak, always drops IRQL below DISPATCH_LEVEL), does that mean spin_lock is akin to KeAcquireSpinLockAtDpcLevel?
The raw spin_lock can be used when you know no interrupt handlers or bottom halves will ever contend for the lock. By avoiding interrupt masking, you keep interrupt latency down, while still avoiding the overhead of a mutex for critical sections short enough to spin on.
In practice, these seem to be used primarily by things like filesystem drivers for locking internal cache structures, and in other places where there is never a need to block on I/O while holding the lock. Since bottom halves and device interrupts never touch the FS driver directly, there's no need to mask them.
I suspect the closest Windows analogue would be a CRITICAL_SECTION, or whatever the NT kernel API equivalent is; however, unlike an NT critical section, a Linux spinlock never falls back to a mutex when contended -- it just keeps spinning.
And yes, spin_unlock_bh unconditionally re-enables bottom halves. You can either keep track of when to enable/disable them manually (since you should generally release locks in the opposite order of acquisition, this usually isn't a problem), or just resort to spin_lock_irqsave/spin_unlock_irqrestore, which save and restore the previous state for you.