It is not clear to me how a mutex and a lock work.
I have one object (my_class) and I am adding, deleting and reading data from that object in the main thread. In my second thread I want to check some data from the object. The problem is that reading data from the second thread can crash the application while I am deleting data from the object in the main thread.
Therefore I created a std::lock_guard<std::mutex> lock(mymutex)
inside my second thread.
I wrote a test, and with this lock_guard it never crashes. But I don't know whether I need to use a lock in the main thread too.
My question is: what happens when the second thread locks the mutex and reads the data while the main thread wants to delete data from the object but takes no lock? And conversely, what happens when the second thread wants to lock the mutex and read data from the object while the main thread is deleting data from the object?
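Simplified, my setup looks roughly like this (my_class and its members are reduced to a placeholder here; only the second thread takes the lock):

#include <mutex>
#include <thread>
#include <vector>

struct my_class {                     // reduced placeholder for my real class
    std::vector<int> data;
    void add(int v)    { data.push_back(v); }
    void remove()      { if (!data.empty()) data.pop_back(); }
    bool check() const { return !data.empty(); }
};

std::mutex mymutex;
my_class obj;

void second_thread()                  // reader, protected by the lock_guard
{
    std::lock_guard<std::mutex> lock(mymutex);
    obj.check();
}

int main()
{
    std::thread t(second_thread);
    obj.add(1);                       // main thread modifies without any lock:
    obj.remove();                     // is this safe while the reader holds the lock?
    t.join();
}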
Forget about std::lock_guard
for a while. It's just convenience (a very useful one, but still just convenience). The synchronisation primitive is the mutex itself.
Mutex is an abbreviation of MUTual EXclusion. It's a synchronisation primitive which allows one thread to exclude other threads from accessing whatever is protected by the mutex. That is usually shared data, but it can be anything (a piece of code, for example).
In your case, you have data which is shared between two threads. To prevent potentially disastrous concurrent access, all accesses to that data must be protected by something. A mutex is a sensible thing to use for this.
So you conceptually bundle your data with a mutex, and whenever any code wants to access (read, modify, write, delete, ...) the data, it must lock the mutex first. Since no more than one thread can ever have the mutex locked at any one time, the data access will be synchronised properly and no race conditions can occur.
With the above, all code accessing the data would look like this:
mymutex.lock();
/* do whatever necessary with the shared data */
mymutex.unlock();
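To make the "all accesses" point concrete, and to answer the question directly: yes, the main thread must lock too. A minimal sketch (the vector and the function names are invented for illustration) where both threads lock the same mutex before touching the data:

#include <mutex>
#include <thread>
#include <vector>

std::mutex mymutex;
std::vector<int> shared_data;              // the data protected by mymutex

void writer()                              // e.g. your main thread
{
    mymutex.lock();
    shared_data.push_back(42);             // add/delete data only while holding the lock
    mymutex.unlock();
}

void reader()                              // e.g. your second thread
{
    mymutex.lock();
    bool has_data = !shared_data.empty();  // read only while holding the lock
    mymutex.unlock();
    (void)has_data;
}

int main()
{
    std::thread t(reader);
    writer();
    t.join();
}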
That is fine, as long as you remember to pair every lock call with a matching unlock call, even in the presence of multiple return paths and exceptions. Since that is difficult to get right manually (it is a big maintenance burden), there is a way to automate it. That is the std::lock_guard
convenience we put aside at the start. It's just a simple RAII class which calls lock()
on the mutex in its constructor and unlock()
in its destructor. With a lock guard, the code for accessing shared data will look like this:
{
    std::lock_guard<std::mutex> g(mymutex);
    /* do whatever necessary with the shared data */
}
This guarantees that the mutex will correctly be unlocked when the operation finishes, whether by one of potentially many return
(or other jump) statements, or by an exception.
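A small sketch of that guarantee (the function and its contents are invented for illustration): the guard releases the mutex on every exit path.

#include <mutex>
#include <stdexcept>
#include <vector>

std::mutex mymutex;
std::vector<int> shared_data;

bool process(bool fail)
{
    std::lock_guard<std::mutex> g(mymutex);  // mutex locked here

    if (shared_data.empty())
        return false;                        // unlocked here (early return)

    if (fail)
        throw std::runtime_error("oops");    // unlocked here too (stack unwinding)

    shared_data.pop_back();
    return true;                             // unlocked here (normal return)
}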
std::lock_guard<std::mutex>
is a shortcut, as mentioned above, but it is crucial for concurrent control flows, which you always have whenever a mutex makes sense at all.
If the protected block throws an exception that is not handled inside the block itself, the fragile pattern
mymutex.lock();
/* do anything but raising an exception here! */
mymutex.unlock();
will not unlock the mutex, and some other control flow waiting for the mutex might be stuck in a deadlock.
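Without the guard, you would have to catch and rethrow yourself to keep the mutex from staying locked. A sketch of that boilerplate (may_throw is a made-up stand-in for the real work):

#include <mutex>
#include <stdexcept>

std::mutex mymutex;

void may_throw()                // stand-in for the real work
{
    throw std::runtime_error("something went wrong");
}

void without_guard()
{
    mymutex.lock();
    try {
        may_throw();
    } catch (...) {
        mymutex.unlock();       // must remember to unlock on the error path...
        throw;
    }
    mymutex.unlock();           // ...and on the normal path
}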
The robust pattern
{
    std::lock_guard<std::mutex> guard(mymutex);
    /* do anything here! */
}
will always unlock mymutex
when the block is left, no matter how it is left.
The other relevant use case is synchronized access to some attribute:
int getAttribute()
{
    std::lock_guard<std::mutex> guard(mymutex);
    return attribute;
}
Here, without the lock_guard, you would need to assign the return value to a temporary variable before you could unlock the mutex, which is two extra steps and, again, does not handle exceptions.
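For comparison, the manual version of that getter might look like this (a sketch only; it assumes the same attribute and mymutex members as above):

int getAttributeManually()
{
    mymutex.lock();
    int result = attribute;   // copy the value out while the mutex is held
    mymutex.unlock();         // an exception before this line would leave the mutex locked
    return result;
}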