Borrowing Howard Hinnant's example and modifying it to use copy-and-swap, is this op= thread-safe?
#include <mutex>
#include <utility>
#include <vector>

struct A {
    A() = default;
    A(A const &x); // Assume implements correct locking and copying.
    A& operator=(A x) {
        // x was already copy-constructed from the right-hand side (locking its
        // mutex there), so only our own mutex needs to be held here.
        std::lock_guard<std::mutex> lock_data(_mut);
        using std::swap;
        swap(_data, x._data);
        return *this;
    }
private:
    mutable std::mutex _mut;
    std::vector<double> _data;
};
I believe this is thread-safe (remember that op='s parameter is passed by value), and the only problem I can find is the one swept under the rug: the copy ctor. However, it would be a rare class that allowed copy-assignment but not copy-construction, so that problem exists equally in both alternatives.
Given that self-assignment is rare enough (at least for this example) that I don't mind an extra copy when it happens, I consider the potential optimization of the this != &rhs check to be either negligible or a pessimization. Would there be any other reason to prefer or avoid this approach compared to the original strategy (below)?
A& operator=(A const &rhs) {
    if (this != &rhs) {
        std::unique_lock<std::mutex> lhs_lock(_mut, std::defer_lock);
        std::unique_lock<std::mutex> rhs_lock(rhs._mut, std::defer_lock);
        std::lock(lhs_lock, rhs_lock); // acquire both locks together, deadlock-free
        _data = rhs._data;
    }
    return *this;
}
Incidentally, I think this succinctly handles the copy ctor, at least for this class, even if it is a bit obtuse:
A(A const &x) : _data {(std::lock_guard<std::mutex>(x._mut), x._data)} {}
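A less terse equivalent, sketched here for comparison rather than taken from the original post, would be to lock in the constructor body and copy-assign into the default-constructed member:

A(A const &x) {
    // Hold x's mutex for the duration of the copy; _data starts out
    // default-constructed (empty) and is then copy-assigned under the lock.
    std::lock_guard<std::mutex> lock(x._mut);
    _data = x._data;
}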
I believe your assignment is thread-safe (assuming, of course, no references outside the class). Its performance relative to the const A& variant probably depends on A. I think that for many A your rewrite will be just as fast, if not faster. The big counter-example I have is std::vector (and classes like it).
std::vector has a capacity that does not participate in its value. And if the lhs has sufficient capacity relative to the rhs, then reusing that capacity, instead of throwing it away to a temp, can be a performance win.
For example:
std::vector<int> v1(5);
std::vector<int> v2(4);
...
v1 = v2;
In the above example, if v1 keeps its capacity to do the assignment, then the assignment can be done with no heap allocation or deallocation. But if the vector is assigned with the copy-and-swap idiom, it does one allocation and one deallocation.
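As a rough illustration of that claim (a sketch, not taken from the answer; whether the buffer is actually reused is implementation-specific), you can compare data() pointers before and after the assignment:

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v1(5);
    std::vector<int> v2(4);

    int* before = v1.data();
    v1 = v2;  // plain copy assignment: v1 already has enough capacity
    // On common implementations the existing buffer is reused, so no
    // allocation or deallocation happens and the pointer is unchanged.
    std::cout << std::boolalpha << (v1.data() == before) << '\n';

    std::vector<int> v3(5);
    std::vector<int> tmp(v2); // copy-and-swap: the temporary allocates...
    v3.swap(tmp);             // ...and v3's old buffer is freed when tmp dies
}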
I note that as far as thread safety goes, both algorithms lock and unlock two mutexes, though the swap variant avoids the need to hold both of them at the same time. I believe that, on average, the cost of locking both at the same time is small, but in heavily contended use cases it could become a concern.
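To make the "both at the same time" point concrete, here is a small self-contained sketch (using its own minimal class B, since the A above only declares its copy constructor): two threads assign in opposite directions, and std::lock acquires both mutexes deadlock-free, something the copy-and-swap form sidesteps entirely by never holding more than one lock at a time.

#include <mutex>
#include <thread>
#include <vector>

struct B {
    B& operator=(B const &rhs) {
        if (this != &rhs) {
            std::unique_lock<std::mutex> lhs_lock(_mut, std::defer_lock);
            std::unique_lock<std::mutex> rhs_lock(rhs._mut, std::defer_lock);
            std::lock(lhs_lock, rhs_lock); // both mutexes held at once
            _data = rhs._data;
        }
        return *this;
    }
    mutable std::mutex _mut;
    std::vector<double> _data;
};

int main() {
    B a, b;
    std::thread t1([&] { a = b; });  // wants a._mut and b._mut
    std::thread t2([&] { b = a; });  // wants the same pair from the other side
    t1.join();
    t2.join();                       // no deadlock, thanks to std::lock
}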