The original code in the Linux kernel is:
static inline void __raw_spin_lock_irq(raw_spinlock_t *lock)
{
        local_irq_disable();
        preempt_disable();
        spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
        LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
}
I don't see any execution path that could preempt the current one after local IRQs are disabled. With all ordinary hard IRQs masked, no softirq can run and there is no timer tick to drive the scheduler, so the current path looks safe. Why, then, is there a preempt_disable()?
As far as I can tell, preempt_disable() calls were added to quite a few locking primitives, including spin_lock_irq, by Dave Miller on December 4th, 2002, and released in 2.5.51. The commit message isn't helpful; it just says "[SPINLOCK]: Fix non-SMP nopping spin/rwlock macros."
I believe the Proper Locking Under a Preemptible Kernel documentation explains this well enough. The final section, titled "PREVENTING PREEMPTION USING INTERRUPT DISABLING", begins:
It is possible to prevent a preemption event using local_irq_disable and
local_irq_save. Note, when doing so, you must be very careful ...
I skimmed the patch mentioned by Sharp; it notes that disabling IRQs does implicitly disable preemption, but that relying on this is risky:
However, keep in mind that relying on irqs being disabled is a risky business. Any spin_unlock() that decreases the preemption count to 0 can trigger a reschedule. Even a simple printk() might trigger such a reschedule. So rely on implicit preemption-disabling only if you know that this sort of thing cannot happen in your code path. The best policy is to rely on implicit preemption-disabling only for short times and only so long as you remain within your own code.