An example from Meyers's book Effective Modern C++, Item 16: in a class caching an expensive-to-compute int, you might try to use a pair of std::atomic variables instead of a mutex:
class Widget {
public:
    int magicValue() const {
        if (cacheValid) {
            return cachedValue;
        } else {
            auto val1 = expensiveComputation1();
            auto val2 = expensiveComputation2();
            cachedValue = val1 + val2;
            cacheValid = true;
            return cachedValue;
        }
    }
private:
    mutable std::atomic<bool> cacheValid { false };
    mutable std::atomic<int> cachedValue;
};
This will work, but sometimes it will work a lot harder than it should. Consider: a thread calls Widget::magicValue, sees cacheValid as false, performs the two expensive computations, and assigns their sum to cachedValue. At that point, a second thread calls Widget::magicValue, also sees cacheValid as false, and thus carries out the same expensive computations that the first thread has just finished.
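To make the wasted work concrete, here is a minimal, self-contained sketch (not from the book, and the expensiveComputation functions are hypothetical stand-ins that count how often they run; the Widget class above is repeated so the example compiles). When two threads hit the cold cache at the same time, the counter often ends up at 4 instead of 2:

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<int> computations{0};  // counts how many expensive calls were made

// Hypothetical stand-ins for the real expensive computations.
int expensiveComputation1() {
    ++computations;
    std::this_thread::sleep_for(std::chrono::milliseconds(50));  // simulate cost
    return 1;
}
int expensiveComputation2() {
    ++computations;
    return 2;
}

class Widget {
public:
    int magicValue() const {
        if (cacheValid) {
            return cachedValue;
        } else {
            auto val1 = expensiveComputation1();
            auto val2 = expensiveComputation2();
            cachedValue = val1 + val2;
            cacheValid = true;
            return cachedValue;
        }
    }
private:
    mutable std::atomic<bool> cacheValid{false};
    mutable std::atomic<int> cachedValue{0};
};

int main() {
    Widget w;
    std::thread t1([&] { w.magicValue(); });
    std::thread t2([&] { w.magicValue(); });
    t1.join();
    t2.join();
    // Both threads usually see cacheValid == false, so this often prints 4, not 2.
    std::cout << "expensive computations run: " << computations << '\n';
}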
Then he gives a solution using a mutex:
class Widget {
public:
    int magicValue() const {
        std::lock_guard<std::mutex> guard(m);
        if (cacheValid) {
            return cachedValue;
        } else {
            auto val1 = expensiveComputation1();
            auto val2 = expensiveComputation2();
            cachedValue = val1 + val2;
            cacheValid = true;
            return cachedValue;
        }
    }
private:
    mutable std::mutex m;
    mutable bool cacheValid { false };
    mutable int cachedValue;
};
But I think this solution is not very efficient, so I am considering combining a mutex and an atomic to form the Double-Checked Locking Pattern, as below.
class Widget {
public:
    int magicValue() const {
        if (!cacheValid) {
            std::lock_guard<std::mutex> guard(m);
            if (!cacheValid) {
                auto val1 = expensiveComputation1();
                auto val2 = expensiveComputation2();
                cachedValue = val1 + val2;
                cacheValid = true;
            }
        }
        return cachedValue;
    }
private:
    mutable std::mutex m;
    mutable std::atomic<bool> cacheValid { false };
    mutable std::atomic<int> cachedValue;
};
Because I am a newbie in multithreaded programming, I want to know: is this Double-Checked Locking implementation correct, and is it more efficient than the mutex-only solution?
EDIT:
Fixed the code: if (!cachedValue) -> if (!cacheValid)
As pointed out by HappyCactus, the second check if (!cachedValue) should actually be if (!cacheValid). Except for this typo, I think your demonstration of the Double-Checked Locking Pattern is correct. However, I think it is unnecessary to use std::atomic on cachedValue. The only place where cachedValue is written to is cachedValue = val1 + val2;, and before that assignment completes, no thread will ever reach the statement return cachedValue;, which is the only place cachedValue is read. Therefore, it is impossible for a write and a read to be concurrent, and there is no problem with concurrent reads.
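Based on that reasoning, a sketch of the simplified version might look like the following (this is my reading of the suggestion, not code from the book; the expensiveComputation functions are assumed to be defined elsewhere, as in the original example). Only cacheValid stays atomic: its store is sequentially consistent by default, which includes release semantics, so the plain write to cachedValue is published before any other thread can observe cacheValid == true.

#include <atomic>
#include <mutex>

int expensiveComputation1();  // assumed to exist, as in the original example
int expensiveComputation2();

class Widget {
public:
    int magicValue() const {
        if (!cacheValid) {                           // first check, no lock taken
            std::lock_guard<std::mutex> guard(m);
            if (!cacheValid) {                       // second check, under the lock
                auto val1 = expensiveComputation1();
                auto val2 = expensiveComputation2();
                cachedValue = val1 + val2;           // plain write, still inside the lock
                cacheValid = true;                   // atomic store publishes cachedValue
            }
        }
        return cachedValue;                          // read only after cacheValid is seen true
    }
private:
    mutable std::mutex m;
    mutable std::atomic<bool> cacheValid{false};
    mutable int cachedValue = 0;                     // no longer std::atomic
};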