Boost provides a sample atomically reference-counted shared pointer. Here is the relevant code snippet, together with an explanation of the memory orderings used:
class X {
public:
    typedef boost::intrusive_ptr<X> pointer;
    X() : refcount_(0) {}

private:
    mutable boost::atomic<int> refcount_;

    friend void intrusive_ptr_add_ref(const X * x)
    {
        x->refcount_.fetch_add(1, boost::memory_order_relaxed);
    }
    friend void intrusive_ptr_release(const X * x)
    {
        if (x->refcount_.fetch_sub(1, boost::memory_order_release) == 1) {
            boost::atomic_thread_fence(boost::memory_order_acquire);
            delete x;
        }
    }
};
Increasing the reference counter can always be done with memory_order_relaxed: New references to an object can only be formed from an existing reference, and passing an existing reference from one thread to another must already provide any required synchronization.
It is important to enforce that any possible access to the object in one thread (through an existing reference) happens before deleting the object in a different thread. This is achieved by a "release" operation after dropping a reference (any access to the object through this reference must obviously have happened before), and an "acquire" operation before deleting the object.
It would be possible to use memory_order_acq_rel for the fetch_sub operation, but this results in unneeded "acquire" operations when the reference counter does not yet reach zero and may impose a performance penalty.
I am not able to understand why the memory_order_acquire barrier is necessary before the delete x operation. Specifically, how is it safe for the compiler/processor to reorder the memory operations of delete x before the fetch_sub and the test of its returned value against 1, without violating the single-threaded semantics?
EDIT: I guess my question wasn't very clear. Here is a rephrased version:

Will the control dependency between the read of the refcount (x->refcount_.fetch_sub(1, boost::memory_order_release) == 1) and the delete x operation provide any ordering guarantee at all? Even considering a single-threaded program, is it possible for the compiler/processor to reorder the instructions corresponding to the delete x operation before the fetch_sub and the comparison? It would be really helpful if the answer were as low-level as possible and included an example scenario where the delete operation gets reordered (without affecting the single-threaded semantics), thus illustrating the need to preserve ordering.
Consider two threads, each holding one of the last two references to the object:
------------------------------------------------------------
Thread 1                          Thread 2
------------------------------------------------------------
// play with x here
fetch_sub(...)
                                  fetch_sub(...)
// nothing                        delete x;
------------------------------------------------------------
You have to ensure that any changes made to the object by Thread 1 in // play with x here are visible to Thread 2 when it calls delete x;. For this you need an acquire fence, which, together with the memory_order_release on the fetch_sub() calls, guarantees that the changes made by Thread 1 will be visible.
From http://en.cppreference.com/w/cpp/atomic/memory_order:
memory_order_acquire -- A load operation with this memory order performs the acquire operation on the affected memory location: prior writes made to other memory locations by the thread that did the release become visible in this thread.
...
Release-Acquire ordering
If an atomic store in thread A is tagged std::memory_order_release and an atomic load in thread B from the same variable is tagged std::memory_order_acquire, all memory writes (non-atomic and relaxed atomic) that happened-before the atomic store from the point of view of thread A, become visible side-effects in thread B, that is, once the atomic load is completed, thread B is guaranteed to see everything thread A wrote to memory.
The synchronization is established only between the threads releasing and acquiring the same atomic variable. Other threads can see different order of memory accesses than either or both of the synchronized threads.
On strongly-ordered systems (x86, SPARC TSO, IBM mainframe), release-acquire ordering is automatic for the majority of operations. No additional CPU instructions are issued for this synchronization mode, only certain compiler optimizations are affected (e.g. the compiler is prohibited from moving non-atomic stores past the atomic store-release or perform non-atomic loads earlier than the atomic load-acquire). On weakly-ordered systems (ARM, Itanium, PowerPC), special CPU load or memory fence instructions have to be used.
This means that a release publishes the current thread's pending writes so that other threads can synchronize with them, while a later acquire in another thread makes all of those published changes visible there.
On strongly-ordered systems, this is not as important. I don't think these orderings even generate extra instructions, as the CPU automatically locks cache lines before any writes can occur; the cache is guaranteed to be consistent. But on weakly-ordered systems, while atomic operations are well defined, there could be pending operations to other parts of memory.
So, let's say threads A and B both share some data D.
With the thread-fence acquire before the delete, the deleting thread synchronizes with the pending operations of the other threads in its address space. So when the delete happens, it sees everything A wrote to D before A dropped its reference.