The Intel documentation says: "This instruction can be used with a LOCK prefix to allow the instruction to be executed atomically."
My question is: can CMPXCHG operate on a memory address? From the document it seems not, but can anyone confirm that it only works with an actual VALUE in registers, not a memory address?
If CMPXCHG isn't atomic and a high-level-language CAS has to be implemented through LOCK CMPXCHG (with the LOCK prefix), what's the purpose of introducing such an instruction at all?
(I am asking from a high-level language perspective. I.e., if a lock-free algorithm has to be translated into a LOCK CMPXCHG on the x86 platform, then it's still prefixed with LOCK. That means lock-free algorithms are no better than ones with a carefully written synchronized lock / mutex (on x86 at least). This also seems to make the naked CMPXCHG instruction pointless, as I guess the major point of introducing it was to support such lock-free operations.)
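For concreteness, here is a minimal C++ sketch (my own illustration, not from the Intel manual) of the kind of high-level CAS I mean; on x86-64, mainstream compilers such as GCC and Clang typically turn the compare_exchange below into a single lock cmpxchg instruction:

    #include <atomic>

    // Illustration only: a portable CAS wrapper on an int.
    // On x86-64, GCC/Clang typically compile the compare_exchange below
    // to a single "lock cmpxchg" (plus a setcc to produce the bool).
    bool cas(std::atomic<int>& target, int expected, int desired) {
        return target.compare_exchange_strong(expected, desired);
    }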
x86 guarantees that aligned loads and stores up to 64 bits are atomic, but not wider accesses.
It seems like part of what you're really asking is: why isn't the lock prefix implicit for cmpxchg with a memory operand, like it is for xchg (since 386)?
The simple answer (that others have given) is that Intel designed it this way. But this leads to the question: why did Intel do that? Is there a use-case for cmpxchg without lock?
On a single-CPU system, cmpxchg is atomic with respect to other threads, or any other code running on the same CPU core. (But not to "system" observers like a memory-mapped I/O device, or a device doing DMA reads of normal memory, so lock cmpxchg was relevant even on uniprocessor CPU designs.)

Context switches can only happen on interrupts, and interrupts happen before or after an instruction, not in the middle. Any code running on the same CPU will see the cmpxchg as either fully executed or not at all.
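To make the memory-operand form concrete, here is a rough C++ pseudo-description (an illustration, not real intrinsics) of what cmpxchg [dest], src does, ignoring every flag except ZF. The load, compare, and conditional store are one instruction, so other code on the same core can never see it half-done; without a lock prefix, though, another core (or a device) could still touch *dest between the load and the store.

    #include <cstdint>

    // Rough pseudo-code for "cmpxchg [dest], src" (not real intrinsics).
    // eax_val stands in for the implicit EAX/RAX accumulator operand,
    // zf_out for the ZF flag the instruction sets.
    void cmpxchg_mem(uint32_t* dest, uint32_t src,
                     uint32_t& eax_val, bool& zf_out) {
        uint32_t observed = *dest;      // load the memory operand
        if (observed == eax_val) {      // compare against the accumulator
            *dest = src;                // equal: store the new value
            zf_out = true;
        } else {
            eax_val = observed;         // not equal: accumulator gets the old value
            zf_out = false;
        }
    }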
For example, the Linux kernel is normally compiled with SMP support, so it uses lock cmpxchg for atomic CAS. But when booted on a single-processor system, it will patch the lock prefix to a nop everywhere that code was inlined, since nop cmpxchg runs much faster than lock cmpxchg. For more info, see this LWN article about Linux's "SMP alternatives" system. It can even patch back to lock prefixes before hot-plugging a second CPU.
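The patching idea itself is simple, because the lock prefix and a nop are both one byte. The following is a much-simplified C++ sketch of that idea only, not the kernel's actual code; the real implementation records the prefix locations in a dedicated section and deals with code-page permissions, cross-modifying-code rules, and so on, none of which is shown here.

    #include <cstdint>
    #include <vector>

    // Much-simplified sketch of the idea only (not the kernel's code):
    // each recorded site points at a 1-byte lock prefix (0xF0); on a
    // uniprocessor it is overwritten with a 1-byte nop (0x90), and it
    // can be put back before a second CPU is brought online.
    void set_lock_prefixes(const std::vector<uint8_t*>& lock_prefix_sites,
                           bool smp) {
        for (uint8_t* site : lock_prefix_sites) {
            *site = smp ? 0xF0 /* lock */ : 0x90 /* nop */;
        }
    }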
Read more about atomicity of single instructions on uniprocessor systems in this answer, and in @supercat's answer + comments on Can num++ be atomic for 'int num'. See my answer there for lots of details about how atomicity really works / is implemented for read-modify-write instructions like lock cmpxchg.
(This same reasoning also applies to cmpxchg8b / cmpxchg16b, and xadd, which are usually only used for synchronization / atomic ops, not to make single-threaded code run faster. Of course memory-destination instructions like add [mem], reg are useful outside the lock add [mem], reg case.)
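As a hedged illustration of that point: on x86-64, GCC and Clang typically compile the fetch_add below to a single lock xadd (the exact code generation depends on compiler and options).

    #include <atomic>

    std::atomic<int> counter{0};

    // On x86-64, GCC/Clang typically compile this fetch_add to "lock xadd",
    // which also returns the old value; without std::atomic it would be an
    // ordinary add with no lock prefix.
    int bump() {
        return counter.fetch_add(1, std::memory_order_seq_cst);
    }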
You are mixing up high-level locks with the low-level CPU feature that happened to be named LOCK.
The high-level locks that lock-free algorithms try to avoid can guard arbitrary code fragments whose execution may take arbitrary time. Such locks therefore have to put threads into a wait state until the lock is available, which is a costly operation, e.g. it implies maintaining a queue of waiting threads.
This is an entirely different thing from the CPU's LOCK prefix, which guards a single instruction only and thus might hold up other threads only for the duration of that single instruction. Since this is implemented by the CPU itself, it requires no additional software effort.
Therefore the challenge of developing lock-free algorithms is not to remove synchronization entirely; it boils down to reducing the critical section of the code to a single atomic operation that is provided by the CPU itself.
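As a concrete sketch of that idea (my own example, assuming C++ std::atomic), here is a lock-free increment whose entire "critical section" is one CAS: read the current value, compute the new one, and let compare_exchange_weak publish it, retrying if another thread won the race. On x86 that CAS is typically a single lock cmpxchg.

    #include <atomic>

    // The whole "critical section" (read old value, compute new value,
    // publish it) collapses into one compare-exchange.  On failure,
    // compare_exchange_weak reloads old_val with the value some other
    // thread just stored, and the loop retries.
    int increment_lock_free(std::atomic<int>& counter) {
        int old_val = counter.load(std::memory_order_relaxed);
        while (!counter.compare_exchange_weak(old_val, old_val + 1,
                                              std::memory_order_seq_cst,
                                              std::memory_order_relaxed)) {
            // old_val now holds the freshly observed value; try again.
        }
        return old_val + 1;
    }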