I was reading through a Boost Mutex tutorial on drdobbs.com, and found this piece of code:
#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/bind.hpp>
#include <iostream>

boost::mutex io_mutex;

void count(int id)
{
    for (int i = 0; i < 10; ++i)
    {
        boost::mutex::scoped_lock lock(io_mutex);
        std::cout << id << ": " << i << std::endl;
    }
}

int main(int argc, char* argv[])
{
    boost::thread thrd1(boost::bind(&count, 1));
    boost::thread thrd2(boost::bind(&count, 2));
    thrd1.join();
    thrd2.join();
    return 0;
}
Now I understand the point of a Mutex is to prevent two threads from accessing the same resource at the same time, but I don't see the correlation between io_mutex and std::cout. Does this code just lock everything within the scope until the scope is finished?
From the Boost documentation: scoped_lock is meant to carry out the tasks of locking, unlocking, try-locking and timed-locking (recursive or not) for the Mutex. The Mutex need not supply all of this functionality; if the client of scoped_lock<Mutex> does not use functionality that the Mutex does not supply, no harm is done.
A mutex object facilitates protection against data races and allows thread-safe synchronization of data between threads. A thread obtains ownership of a mutex object by calling one of the lock functions and relinquishes ownership by calling the corresponding unlock function.
The C++17 standard library provides a similar class, std::scoped_lock: a mutex wrapper that provides a convenient RAII-style mechanism for owning one or more mutexes for the duration of a scoped block.
Scoped locks work by locking a mutex when they are constructed, and unlocking it when they are destructed. The C++ rules guarantee that when control flow leaves a scope (even via an exception), objects local to the scope being exited are destructed correctly.
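For comparison, here is a minimal sketch of the same program written against the standard library (assuming a C++17 compiler; std::lock_guard would do the same job for a single mutex):

#include <iostream>
#include <mutex>
#include <thread>

std::mutex io_mutex;

void count(int id)
{
    for (int i = 0; i < 10; ++i)
    {
        // The mutex is locked here and unlocked automatically when
        // `lock` is destroyed at the end of this loop iteration,
        // even if an exception is thrown inside the block.
        std::scoped_lock lock(io_mutex);
        std::cout << id << ": " << i << std::endl;
    }
}

int main()
{
    std::thread t1(count, 1);
    std::thread t2(count, 2);
    t1.join();
    t2.join();
}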
Now I understand the point of a Mutex is to prevent two threads from accessing the same resource at the same time, but I don't see the correlation between io_mutex and std::cout.
std::cout is a global object, so you can see it as a shared resource. If you access it concurrently from several threads, those accesses must be synchronized somehow to avoid data races and undefined behavior.
Perhaps it will be easier to see that concurrent access occurs by considering that:

std::cout << x

is really just a function call, equivalent to either

std::cout.operator<<(x)

or

operator<<(std::cout, x)

depending on the type of x. Either way, you are calling a function that operates on the std::cout object, and you are doing so from different threads at the same time, so std::cout must be protected somehow. But that's not the only reason why the scoped_lock is there (keep reading).
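As a small self-contained sketch (the values here are made up for illustration), the explicit call forms compile and do the same thing as the usual chained syntax:

#include <iostream>

int main()
{
    int i = 42;

    std::cout << "value: " << i << '\n';   // the usual chained form

    // The same output written as explicit function calls:
    operator<<(std::cout, "value: ");      // free-function overload (const char*)
    std::cout.operator<<(i);               // member overload (int)
    std::cout.put('\n');                   // write the trailing newline
}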
Does this code just lock everything within the scope until the scope is finished?
Yes, it locks io_mutex until the lock object itself goes out of scope (being a typical RAII wrapper), which happens at the end of each iteration of your for loop.
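Conceptually, each iteration does something like the hand-written version below. This is only a rough sketch: boost::mutex's lock()/unlock() members may not be directly accessible in the very old Boost versions that tutorial targets, and this version loses the exception safety that the RAII wrapper gives you.

#include <boost/thread/mutex.hpp>
#include <iostream>

boost::mutex io_mutex;

void count(int id)
{
    for (int i = 0; i < 10; ++i)
    {
        io_mutex.lock();    // what the scoped_lock constructor does
        std::cout << id << ": " << i << std::endl;
        io_mutex.unlock();  // what the scoped_lock destructor does
        // If an exception were thrown between lock() and unlock(),
        // the mutex would stay locked -- that is what scoped_lock prevents.
    }
}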
Why is it needed? Well, although in C++11 individual insertions into cout are guaranteed to be thread-safe (free of data races), subsequent, separate insertions may be interleaved when several threads are outputting something.
Keep in mind that each insertion through operator << is a separate function call, as if you were doing:

std::cout << id;
std::cout << ": ";
std::cout << i;
std::cout << std::endl;
The fact that operator << returns the stream object allows you to chain the above function calls in a single expression (as you have done in your program), but they are still several separate function calls.
Now looking at the above snippet, it is more evident that the purpose of this scoped lock is to make sure that each message of the form:

<id> ": " <index> <endl>

gets printed without its parts being interleaved with parts from other messages.
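To see the interleaving, here is a minimal sketch of the same loop with the lock removed (written with std::thread just to keep it self-contained). The output is unspecified; one run might well print fragments such as "1: 2: 00" on a single line:

#include <iostream>
#include <thread>

void count_unlocked(int id)
{
    for (int i = 0; i < 10; ++i)
    {
        // No lock: the four separate insertions below can interleave
        // with insertions coming from the other thread.
        std::cout << id << ": " << i << std::endl;
    }
}

int main()
{
    std::thread t1(count_unlocked, 1);
    std::thread t2(count_unlocked, 2);
    t1.join();
    t2.join();
}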
Also, in C++03 (where insertions into cout are not guaranteed to be thread-safe), the lock protects the cout object itself from being accessed concurrently.
A mutex has nothing to do with anything else in the program (except a condition variable), at least at a higher level. A mutex has two effects: it controls program flow by preventing more than one thread from executing the same block of code at the same time, and it ensures memory synchronization. The important point here is that mutexes aren't associated with resources, and don't by themselves prevent two threads from accessing the same resource at the same time. A mutex defines a critical section of code, which can only be entered by one thread at a time. If all use of a particular resource is done in critical sections controlled by the same mutex, then the resource is effectively protected by the mutex. But that relationship is established by the coder, by ensuring that all use does take place in the critical sections.
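To illustrate that last point (the names below are made up for the example), a counter is only protected because every piece of code that touches it takes the same mutex first; nothing in the language ties the mutex to the data:

#include <iostream>
#include <mutex>
#include <thread>

std::mutex counter_mutex;   // guards `counter` purely by convention
long counter = 0;

void safe_increment(int times)
{
    for (int i = 0; i < times; ++i)
    {
        std::lock_guard<std::mutex> lock(counter_mutex); // critical section
        ++counter;
    }
}

// A function like this would compile fine and silently reintroduce the
// data race, because the compiler does not know the convention:
// void unsafe_increment() { ++counter; }

int main()
{
    std::thread t1(safe_increment, 100000);
    std::thread t2(safe_increment, 100000);
    t1.join();
    t2.join();
    std::cout << counter << std::endl;  // 200000, since every access was locked
}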