The Java memory model mandates that synchronized blocks which synchronize on the same monitor enforce a happens-before ordering on the variables modified within those blocks. Example:
// in thread A
synchronized( lock )
{
    x = true;
}

// in thread B
synchronized( lock )
{
    System.out.println( x );
}
In this case it is guaranteed that thread B will see x == true as long as thread A has already passed through that synchronized block. Now I am in the process of rewriting a lot of code to use the more flexible (and reportedly faster) locks in java.util.concurrent, especially the ReentrantReadWriteLock. So the example looks like this:
EDIT: The example was broken, because I incorrectly transformed the code, as noted by matt b. Fixed as follows:
// in thread A
lock.writeLock().lock();
{
    x = true;
}
lock.writeLock().unlock();

// in thread B
lock.readLock().lock();
{
    System.out.println( x );
}
lock.readLock().unlock();
However, I have not seen any hints within the memory model specification that such locks also imply the necessary ordering. Looking into the implementation, it seems to rely on access to volatile variables inside AbstractQueuedSynchronizer (for the Sun implementation, at least). However, this is not part of any specification, and moreover access to non-volatile variables is not really considered covered by the memory barrier given by those volatile variables, is it?
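(For illustration of why a volatile-based implementation can give such guarantees, here is a minimal sketch of a toy lock built on an atomic flag. This is not the actual AbstractQueuedSynchronizer code and the class name is made up; it only shows that, under JSR-133, the volatile-style write in unlock() publishes every plain write made while holding the lock to the thread whose lock() subsequently reads that flag.)

import java.util.concurrent.atomic.AtomicBoolean;

// Toy spin lock, for illustration only - NOT how AbstractQueuedSynchronizer works.
class ToySpinLock {
    // AtomicBoolean get/set/compareAndSet have the memory effects of volatile
    // reads and writes, so they create the required happens-before edge.
    private final AtomicBoolean held = new AtomicBoolean(false);

    void lock() {
        // spin until we flip the flag from false to true
        while (!held.compareAndSet(false, true)) {
            // busy-wait; a real lock would park the thread instead
        }
    }

    void unlock() {
        // volatile-style write: publishes all writes made while the lock was held
        held.set(false);
    }
}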
So, here are my questions: do the locks in java.util.concurrent give the same visibility and ordering guarantees as synchronized blocks, and if so, where is that specified?

Regards, Steffen
--
Comment to Yanamon:
Look at the following code:
// in thread a
x = 1;
synchronized ( a ) { y = 2; }
z = 3;
// in thread b
System.out.println( x );
synchronized ( a ) { System.out.println( y ); }
System.out.println( z );
From what I understood, the memory barrier guarantees that the second output shows 2, but has no guaranteed effect on the other variables...? So how can this be compared to accessing a volatile variable?
Instruction reordering is allowed for the Java VM and the CPU as long as the semantics of the program, as observed from within the executing thread, do not change. For that thread the end result has to be the same as if the instructions were executed in the exact order they are listed in the source code.
With explicit locks, you can acquire and release locks in any order. With synchronized, locks can only be released in the reverse of the order in which they were acquired, because the blocks must nest.
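For example, explicit locks allow "hand-over-hand" (lock-coupling) traversal, where the next lock is acquired before the previous one is released, so locks are released in acquisition order rather than in reverse. That nesting-free pattern cannot be written with synchronized blocks. A minimal sketch (the Node class and method names are made up for illustration, and error handling is omitted):

import java.util.concurrent.locks.ReentrantLock;

// Hand-over-hand locking: acquire the next node's lock before releasing
// the current one, so the release order is not the reverse of the acquire order.
class Node {
    final ReentrantLock lock = new ReentrantLock();
    int value;
    Node next;
}

class ListWalker {
    void visitAll(Node head) {
        if (head == null) {
            return;
        }
        Node current = head;
        current.lock.lock();
        while (current != null) {
            Node next = current.next;
            if (next != null) {
                next.lock.lock();      // acquire the next node's lock first...
            }
            // ... work with current.value while both locks are held ...
            current.lock.unlock();     // ...then release the previous one
            current = next;
        }
    }
}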
The java.util.concurrent.locks.Lock interface is used as a thread synchronization mechanism similar to synchronized blocks. The newer locking mechanism is more flexible and provides more options than a synchronized block. The main differences between a Lock and a synchronized block are that a Lock's lock() and unlock() calls do not have to appear in the same block or even the same method, and that a Lock additionally offers non-blocking, timed and interruptible acquisition as well as an optional fairness policy.

Simply put, a lock is a more flexible and sophisticated thread synchronization mechanism than the standard synchronized block. The Lock interface has been around since Java 1.5; it is defined in the java.util.concurrent.locks package.
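For example, a ReentrantLock offers non-blocking, timed and interruptible acquisition plus an optional fairness policy, none of which synchronized can express. A small sketch (the class and method names are made up for illustration):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class LockOptionsDemo {
    // "true" requests a fair lock - an option synchronized does not offer
    private final ReentrantLock lock = new ReentrantLock(true);

    void tryWork() throws InterruptedException {
        // give up instead of blocking forever if the lock is busy
        if (lock.tryLock(50, TimeUnit.MILLISECONDS)) {
            try {
                // ... critical section ...
            } finally {
                lock.unlock();
            }
        } else {
            // fall back, retry later, report contention, ...
        }
    }

    void interruptibleWork() throws InterruptedException {
        // waits for the lock, but the wait can be interrupted
        lock.lockInterruptibly();
        try {
            // ... critical section ...
        } finally {
            lock.unlock();
        }
    }
}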
From the API-doc:
All Lock implementations must enforce the same memory synchronization semantics as provided by the built-in monitor lock, as described in The Java Language Specification, Third Edition (17.4 Memory Model):
* A successful lock operation has the same memory synchronization effects as a successful Lock action.
* A successful unlock operation has the same memory synchronization effects as a successful Unlock action.
Unsuccessful locking and unlocking operations, and reentrant locking/unlocking operations, do not require any memory synchronization effects.
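Applied to the example from the question, this means an unlock() in thread A followed by a lock() of the same Lock in thread B gives the same visibility for x as the synchronized version did. A minimal sketch using a plain ReentrantLock (the class is made up; x is the field from the question):

import java.util.concurrent.locks.ReentrantLock;

class SharedFlag {
    private final ReentrantLock lock = new ReentrantLock();
    private boolean x = false;           // deliberately NOT volatile

    // in thread A
    void set() {
        lock.lock();
        try {
            x = true;                    // plain write made while holding the lock
        } finally {
            lock.unlock();               // "unlock action": publishes the write
        }
    }

    // in thread B
    void print() {
        lock.lock();                     // "lock action": sees everything published
        try {                            // by the previous unlock
            System.out.println(x);       // guaranteed to print true once set() completed
        } finally {
            lock.unlock();
        }
    }
}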
Beyond the question of what semantics the memory model guarantees, I think there are a few problems with the code you are posting:

* If you are using a Lock implementation, you don't need the synchronized block at all.
* The standard idiom for using a Lock is to do so with a try-finally block, to prevent accidentally leaving the lock held (since, unlike with a synchronized block, the lock is not automatically released when exiting whatever block you are in).

You should be using a Lock with something resembling:
lock.lock();
try {
    // do stuff
}
finally {
    lock.unlock();
}
Reading and writing volatile variables now enforces a happens-before ordering. Writing to a volatile variable has the same memory effect as releasing a monitor, and reading a volatile variable has the same effect as acquiring a monitor. The following example makes it a little clearer:
volatile boolean memoryBarrier = false;
int unguardedValue = 0;

// thread a
unguardedValue = 10;
memoryBarrier = true;

// thread b
if (memoryBarrier) {
    // unguardedValue is guaranteed to be read as 10
}
But all that being said, the sample code you provided did not look like it was really using the ReentrantLock as it was designed to be used:

* Combining a Lock with Java's built-in synchronized keyword effectively makes access to the lock single-threaded already, so it doesn't give the Lock a chance to do any real work.
* Using a Lock should be done following the pattern below; this is outlined in the Javadoc of Lock:
lock.readLock().lock();
try {
    // do work
} finally {
    lock.readLock().unlock();
}
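Putting this together, the example from the question would look roughly like the following with a ReentrantReadWriteLock and the try-finally idiom (a sketch only; the class name is made up and x is the field from the question):

import java.util.concurrent.locks.ReentrantReadWriteLock;

class ReadWriteExample {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private boolean x = false;

    // in thread A
    void write() {
        lock.writeLock().lock();
        try {
            x = true;
        } finally {
            lock.writeLock().unlock();   // releasing the write lock publishes the write
        }
    }

    // in thread B
    void read() {
        lock.readLock().lock();          // acquiring the read lock sees writes published
        try {                            // by the earlier write-lock release
            System.out.println(x);
        } finally {
            lock.readLock().unlock();
        }
    }
}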
Yanamon, I am not sure you are correct - though for a reason different from the argument you are making.
The write to unguardedValue may be reordered in thread "a" such that its value is set to 10 after memoryBarrier is set to true.
"There is no guarantee that operations in one thread will be performed in the order given by the program, as long as the reordering is not detectable within that thread - even if the reordering is apparent to other threads"
Java Concurrency in Practice, Brian Goetz, p. 34
UPDATED: What I said is true under the old memory model, so if you want write-once-run-anywhere (including pre-Java-5 VMs) my argument stands. Under the new memory model, however, it no longer holds, as the semantics surrounding reordering of non-volatile variables in the presence of volatile access have become stricter (see http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html#volatile).