Specifically, in a const member function, if I call mutex.lock() at the beginning and mutex.unlock() right before the return, I get crashes when running it inside an OpenMP loop. But if I replace those two calls with a single QMutexLocker locker(&mutex) at the beginning of the function, it runs smoothly.

Visual Studio 2010, Qt 4.8.

I would expect the two versions to be equivalent, but they obviously are not. What am I missing here?
EDIT: Although this doesn't reproduce the problem, a small example:
#include <iostream>
#include <map>
#include <string>
#include <QMutex>
#include <QMutexLocker>

class TileCache
{
public:
    bool fillBuffer(const std::string& name) const {
        //QMutexLocker lock(&mCacheMutex);
        mCacheMutex.lock();
        auto ite = mCache.find(name);
        if (ite == mCache.end())
            mCache.insert(ite, std::make_pair(name, new unsigned char[1024]));
        // code here
        mCacheMutex.unlock();
        return true;
    }

private:
    mutable std::map<std::string, unsigned char*> mCache;
    mutable QMutex mCacheMutex;
};
int main(int argc, char *argv[])
{
    std::cout << "Test" << std::endl;
    TileCache cache;
    #pragma omp parallel for shared(cache)
    for (int i = 0; i < 2048; ++i)
    {
        cache.fillBuffer("my buffer");
    }
    return 0;
}
I was basically asking whether there is any known reason to believe the two approaches are not equivalent: whether always pairing lock() with its matching unlock() (no lock() call without a matching unlock()) can behave differently from QMutexLocker under some circumstances.
The advantage of a locker object is that it automatically handles all exit points, including ones caused by exceptions.

Normally, however, a forgotten exit point or an exception just leaves the mutex locked, and the symptom is the program hanging.

If you are getting a crash, then the problem is somewhere else, and the fact that it disappears when using a locker object is just a coincidence (if there is a crash, the program is certainly buggy; if there is no crash, you cannot conclude the program is correct, especially in a language like C++ that has the concept of "undefined behavior").
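The exception case is easy to demonstrate. Since Qt may not be available here, this is a minimal sketch using std::lock_guard, the standard-library analog of QMutexLocker; the function names (manual_version, raii_version, mutex_is_free) are made up for illustration:

#include <cassert>
#include <iostream>
#include <mutex>
#include <stdexcept>
#include <thread>

std::mutex m;

// Probe from a separate thread whether m is currently free.
// (try_lock from a non-owning thread is well-defined; from the
// owning thread it would be undefined behavior for std::mutex.)
bool mutex_is_free() {
    bool is_free = false;
    std::thread t([&] {
        if (m.try_lock()) { is_free = true; m.unlock(); }
    });
    t.join();
    return is_free;
}

// Manual locking: the throw skips unlock(), leaving m locked.
void manual_version() {
    m.lock();
    throw std::runtime_error("boom");
}

// RAII locking: the guard's destructor unlocks even when an exception escapes.
void raii_version() {
    std::lock_guard<std::mutex> guard(m);
    throw std::runtime_error("boom");
}

int main() {
    try { raii_version(); } catch (const std::exception&) {}
    bool free_after_raii = mutex_is_free();
    std::cout << "free after RAII guard: " << (free_after_raii ? "yes" : "no") << "\n";

    try { manual_version(); } catch (const std::exception&) {}
    bool free_after_manual = mutex_is_free();
    std::cout << "free after manual lock: " << (free_after_manual ? "yes" : "no") << "\n";

    assert(free_after_raii && !free_after_manual);
    m.unlock();  // main still owns m after manual_version; release before exit
    return 0;
}

As described above, the manual version leaves the mutex locked when the exception escapes, which would make other threads hang rather than crash.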
Another non-obvious advantage is that with a locker object created as the first statement, you are guaranteed that unlocking happens as the very last thing before returning to the caller. This can make a difference if other objects created inside the function depend on the mutex:
void ok_func() {
    Locker mylock(mymutex);
    Obj myobj; // constructor and destructor require the lock
}

void buggy_func() {
    lock(mymutex);
    Obj myobj; // constructor and destructor require the lock
    unlock(mymutex);
    // Bug: myobj instance will be destroyed without the lock
}

void workaround_func() {
    lock(mymutex);
    { // nested scope needed
        Obj myobj; // constructor and destructor require the lock
    }
    unlock(mymutex);
}
From the documentation, QMutexLocker is scope-managed, meaning it will automatically unlock when it goes out of scope. If your const member function only has one possible return path, either method is fine. However, if this function becomes more complex, or if you change the design at a later point, I would just use the class.
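The multiple-return-path point can be sketched in plain standard C++. This is an illustrative stand-in, not Qt code: TrackedMutex and Guard are made-up names, with Guard playing the role of QMutexLocker:

#include <cassert>
#include <iostream>

// Minimal stand-in mutex that records whether it is currently held,
// so the example stays single-threaded and checkable.
struct TrackedMutex {
    bool held = false;
    void lock()   { held = true; }
    void unlock() { held = false; }
};

// RAII guard, analogous to QMutexLocker.
struct Guard {
    TrackedMutex& m;
    explicit Guard(TrackedMutex& mu) : m(mu) { m.lock(); }
    ~Guard() { m.unlock(); }
};

TrackedMutex mtx;

// Manual locking with two return paths: the early return forgets unlock().
bool manual(bool early) {
    mtx.lock();
    if (early) return false;   // bug: mutex stays locked on this path
    mtx.unlock();
    return true;
}

// With a guard, every return path unlocks automatically.
bool guarded(bool early) {
    Guard g(mtx);
    if (early) return false;
    return true;
}

int main() {
    guarded(true);
    std::cout << "held after guarded early return: " << (mtx.held ? "yes" : "no") << "\n";
    bool guarded_ok = !mtx.held;

    manual(true);
    std::cout << "held after manual early return: " << (mtx.held ? "yes" : "no") << "\n";
    bool manual_leaks = mtx.held;

    assert(guarded_ok && manual_leaks);
    return 0;
}

Every new return added to the manual version needs its own unlock() call; the guarded version needs nothing.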
Alright, found the problem after following the advice from @6502. First a small example that reproduces the problem:
#include <map>
#include <stdexcept>
#include <string>
#include <QMutex>

class TileCache
{
    struct Culprit
    {
        Culprit(int& n) : mN(n) { ++mN; }
        ~Culprit() { --mN; }
    private:
        int& mN;
    };

public:
    int& fillBuffer(const std::string& name) const {
        //QMutexLocker lock(&mCacheMutex);
        mCacheMutex.lock();
        auto ite = mCache.find(name);
        if (ite == mCache.end())
            ite = mCache.insert(ite, std::make_pair(name, 0));
        Culprit culprit(ite->second);
        unsigned char somebuffer[1];
        // deliberate out-of-bounds write whenever ite->second != 1,
        // i.e. whenever two threads hold a live Culprit at the same time
        somebuffer[(ite->second - 1) * 8192] = 'Q';
        mCacheMutex.unlock();
        return ite->second;
    }

private:
    mutable std::map<std::string, int> mCache;
    mutable QMutex mCacheMutex;
};
int main(int argc, char *argv[])
{
    TileCache cache;
    #pragma omp parallel for shared(cache) num_threads(2)
    for (int i = 0; i < 2048; ++i)
    {
        int& n = cache.fillBuffer("my buffer");
        if (n != 0) throw std::logic_error("Buff");
    }
    return 0;
}
The sole difference between using QMutexLocker and manually calling lock()/unlock() is that with the locker, unlock() is called when exiting scope, in the reverse order of construction for that scope. That is what I didn't see: some objects inside fillBuffer also do RAII, so the end of the fillBuffer body is not the end of the section that needs protection. With the mutex locker, unlock() is called last, so the whole method is protected, including those destructors.

It also works, of course, to use braces to delimit an inner scope where fillBuffer does all of its work:
int& fillBuffer(....) const {
    mCacheMutex.lock();
    {
        auto ite = ....
        ...
    }
    mCacheMutex.unlock();
}
But the real function has some other return points, which prevent this solution from working.
TL;DR So I learnt that, yes, a scope-based mutex locker is not equivalent to manual calls to lock() and unlock() when the destructors of objects created inside that scope must also run under the lock. In that case the scoped lock works, but the manual calls do not.
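The ordering difference described above can be demonstrated in plain standard C++ with an event log. FakeMutex, Locker, and Obj below are made-up illustrative types, with Locker standing in for QMutexLocker:

#include <cassert>
#include <iostream>
#include <string>
#include <vector>

// Event log used to record the order of unlock vs. local destruction.
std::vector<std::string> events;

// Stand-in for a mutex: just records lock/unlock calls.
struct FakeMutex {
    void lock()   { events.push_back("lock"); }
    void unlock() { events.push_back("unlock"); }
};

// RAII guard, analogous to QMutexLocker.
struct Locker {
    FakeMutex& m;
    explicit Locker(FakeMutex& mu) : m(mu) { m.lock(); }
    ~Locker() { m.unlock(); }
};

// Local object whose destructor should run under the lock.
struct Obj {
    ~Obj() { events.push_back("~Obj"); }
};

FakeMutex mu;

void manual_version() {
    mu.lock();
    Obj o;
    mu.unlock();
    // ~Obj runs here, AFTER unlock: "lock", "unlock", "~Obj"
}

void raii_version() {
    Locker lock(mu);
    Obj o;
    // ~Obj runs first, then ~Locker unlocks: "lock", "~Obj", "unlock"
}

int main() {
    manual_version();
    bool manual_unlocks_first = (events[1] == "unlock" && events[2] == "~Obj");
    std::cout << "manual: unlock before ~Obj: " << (manual_unlocks_first ? "yes" : "no") << "\n";

    events.clear();
    raii_version();
    bool raii_destroys_first = (events[1] == "~Obj" && events[2] == "unlock");
    std::cout << "RAII: ~Obj before unlock: " << (raii_destroys_first ? "yes" : "no") << "\n";

    assert(manual_unlocks_first && raii_destroys_first);
    return 0;
}

In the manual version the local's destructor runs outside the critical section, exactly the race that Culprit exposed.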
Many thanks to all that tried to help me, I appreciate it.