There's a new experimental feature (from the Transactional Memory TS, possibly headed for a future standard), namely the "synchronized block". The block provides a global lock on a section of code. The following is an example from cppreference.
#include <iostream>
#include <vector>
#include <thread>

int f()
{
    static int i = 0;
    synchronized {
        std::cout << i << " -> ";
        ++i;
        std::cout << i << '\n';
        return i;
    }
}

int main()
{
    std::vector<std::thread> v(10);
    for (auto& t : v)
        t = std::thread([]{ for (int n = 0; n < 10; ++n) f(); });
    for (auto& t : v)
        t.join();
}
I feel it's superfluous. Is there any difference between the synchronized block above and this one:
std::mutex m;

int f()
{
    static int i = 0;
    std::lock_guard<std::mutex> lg(m);
    std::cout << i << " -> ";
    ++i;
    std::cout << i << '\n';
    return i;
}
The only advantage I find here is that I'm saved the trouble of having a global lock. Are there any other advantages to using a synchronized block? When should it be preferred?
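(For reference, a minimal sketch of the lock-based variant that avoids the namespace-scope mutex by using a function-local static instead; the behavior should be the same as the version above:)

#include <iostream>
#include <mutex>

int f()
{
    static int i = 0;
    // function-local static mutex instead of a namespace-scope one
    static std::mutex m;
    std::lock_guard<std::mutex> lg(m);
    std::cout << i << " -> ";
    ++i;
    std::cout << i << '\n';
    return i;
}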
On the face of it, the synchronized keyword is functionally similar to std::mutex, but by introducing a new keyword and associated semantics (such as the block enclosing the synchronized region) it makes it much easier to optimize these regions for transactional memory.

In particular, std::mutex and friends are in principle more or less opaque to the compiler, while synchronized has explicit semantics. The compiler can't be sure what the standard library's std::mutex does and would have a hard time transforming it to use TM. A C++ compiler would be expected to work correctly when the standard library implementation of std::mutex is changed, and so it can't make many assumptions about the behavior.

In addition, without the explicit scope that the synchronized block requires, it is hard for the compiler to reason about the extent of the critical region. It seems easy in simple cases such as a single scoped lock_guard, but there are plenty of complex cases, such as when the lock escapes the function, at which point the compiler never really knows where it could be unlocked.
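As an illustration of the "lock escapes the function" case, consider something along these lines (a contrived sketch; the names are made up for the example):

#include <mutex>

std::mutex m;
int shared_counter = 0;

// The lock is acquired here but handed back to the caller, so the
// extent of the critical section is not visible at the lock site;
// the compiler cannot tell where the mutex will be released.
std::unique_lock<std::mutex> bump_and_keep_lock()
{
    std::unique_lock<std::mutex> lk(m);
    ++shared_counter;
    return lk;   // ownership of the lock escapes the function
}

void caller()
{
    auto lk = bump_and_keep_lock();
    // ... more work under the same lock ...
}   // unlocked only here, far from where the lock was taken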
Locks do not compose well in general. Consider:
//
// includes and using, omitted to simplify the example
//
void move_money_from(Cash amount, BankAccount &a, BankAccount &b) {
    //
    // suppose a mutex m within BankAccount, exposed as public
    // for the sake of simplicity
    //
    lock_guard<mutex> lckA { a.m };
    lock_guard<mutex> lckB { b.m };
    // oversimplified transaction, obviously
    if (a.withdraw(amount))
        b.deposit(amount);
}
int main() {
    BankAccount acc0{/* ... */};
    BankAccount acc1{/* ... */};
    thread th0 { [&] {
        // ...
        move_money_from(Cash{ 10'000 }, acc0, acc1);
        // ...
    } };
    thread th1 { [&] {
        // ...
        move_money_from(Cash{ 5'000 }, acc1, acc0);
        // ...
    } };
    // ...
    th0.join();
    th1.join();
}
In this case, the fact that th0, by moving money from acc0 to acc1, is trying to take acc0.m first and acc1.m second, whereas th1, by moving money from acc1 to acc0, is trying to take acc1.m first and acc0.m second, could make them deadlock.
This example is oversimplified, and could be solved by using std::lock() or a C++17 variadic lock_guard-equivalent such as std::scoped_lock (a sketch follows), but think of the general case where one is using third-party software, not knowing where locks are being taken or freed. In real-life situations, synchronization through locks gets tricky really fast.
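For completeness, the lock-based fix would look roughly like this, acquiring both mutexes as one deadlock-free operation with std::scoped_lock (C++17); it avoids the deadlock, but BankAccount still has to expose its mutex:

void move_money_from(Cash amount, BankAccount &a, BankAccount &b) {
    // acquire both locks atomically, in a deadlock-free order,
    // regardless of the order the accounts are passed in
    std::scoped_lock lcks { a.m, b.m };
    // oversimplified transaction, obviously
    if (a.withdraw(amount))
        b.deposit(amount);
}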
The transactional memory features aim to offer synchronization that composes better than locks; it's an optimization feature of sorts, depending on context, but it's also a safety feature. Rewriting move_money_from() as follows:
void move_money_from(Cash amount, BankAccount &a, BankAccount &b) {
    synchronized {
        // oversimplified transaction, obviously
        if (a.withdraw(amount))
            b.deposit(amount);
    }
}
... one gets the benefits of the transaction being done as a whole or not at all, without burdening BankAccount with a mutex and without risking deadlocks due to conflicting requests from user code.
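The BankAccount and Cash types are never shown in the answer; a minimal sketch of what the synchronized version assumes might look like the following (hypothetical definitions for illustration only; note that no mutex member is left to maintain):

struct Cash {
    long long cents;
};

class BankAccount {
    long long balance_cents = 0;
public:
    BankAccount() = default;
    explicit BankAccount(long long initial_cents) : balance_cents{initial_cents} {}
    // callers are expected to serialize access, e.g. via synchronized blocks
    bool withdraw(Cash amount) {
        if (balance_cents < amount.cents)
            return false;
        balance_cents -= amount.cents;
        return true;
    }
    void deposit(Cash amount) {
        balance_cents += amount.cents;
    }
};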