This article by Jeff Preshing states that the double-checked locking pattern (DCLP) is fixed in C++11. The classical example used for this pattern is the singleton pattern but I happen to have a different use case and I am still lacking experience in handling "atomic<> weapons" - maybe someone over here can help me out.
Is the following piece of code a correct DCLP implementation as described by Jeff under "Using C++11 Sequentially Consistent Atomics"?
class Foo {
    std::shared_ptr<B> data;
    std::mutex mutex;

    void detach()
    {
        if (data.use_count() > 1)
        {
            std::lock_guard<std::mutex> lock{mutex};
            if (data.use_count() > 1)
            {
                data = std::make_shared<B>(*data);
            }
        }
    }

public:
    // public interface
};
std::shared_ptr is a smart pointer in the C++ standard library designed for scenarios in which more than one owner may have to manage the lifetime of an object in memory. std::make_shared<B>(*data) creates a new copy of the pointed-to object, owned by the new shared_ptr.
No, this is not a correct implementation of DCLP.

The problem is that your outer check data.use_count() > 1 accesses the allocated block (the B object together with its reference count), and that block can be freed inside the mutex-protected section of another thread. No kind of memory fence can help there.

Why data.use_count() accesses the allocated block:
Assume these operations have been executed:
shared_ptr<B> data1 = make_shared<B>(...);
shared_ptr<B> data = data1;
Then you have the following layout (weak_ptr support is not shown here):

    data1                 [allocated with B::new()]                 data

                        --------------------------
    [pointer type] ref; --> |atomic<int> m_use_count;| <-- [pointer type] ref
                            |B obj;                  |
                            --------------------------
Each shared_ptr object is just a pointer to an allocated memory region. This region embeds the B object plus an atomic counter holding the number of shared_ptr's that point to the object. When this counter reaches zero, the region is freed (and the B object is destroyed). It is exactly this counter that shared_ptr::use_count() returns.
UPDATE: Here is an execution which can lead to accessing memory that has already been freed (initially, two shared_ptr's point to the same object, so .use_count() is 2):
/* Thread 1 */                /* Thread 2 */               /* Thread 3 */
Enter detach()                Enter detach()
Found `data.use_count()` > 1
Enter critical section
Found `data.use_count()` > 1
                              Dereference `data`,
                              found old object.
Unreference old `data`,
`use_count` becomes 1
                                                           Delete other shared_ptr,
                                                           old object is deleted
Assign new object to `data`
                              Access old object
                              (for check `use_count`)
                              !! But object is freed !!
The outer check should only read a pointer to the object in order to decide whether the lock needs to be acquired.
BTW, even if your implementation were correct, it would make little sense:

If data (and detach) can be accessed from several threads at the same time, the object's uniqueness gives no advantage, since it can still be accessed from several threads. If you want to change the object, all accesses to data should be protected by an outer mutex, and in that case detach() cannot be executed concurrently.

If data (and detach) can only be accessed by a single thread at a time, the detach implementation doesn't need any locking at all.
This constitutes a data race if two threads invoke detach on the same instance of Foo concurrently, because std::shared_ptr<B>::use_count() (a read-only operation) would run concurrently with the std::shared_ptr<B> move-assignment operator (a modifying operation), which is a data race and hence a cause of undefined behavior. If Foo instances are never accessed concurrently, on the other hand, there is no data race, but then the std::mutex would be useless in your example. The question is: how does data's pointee become shared in the first place? Without this crucial bit of information, it is hard to tell if the code is safe even if a Foo is never used concurrently.