I came across the following code while reading Vavr's Lazy source code:
private transient volatile Supplier<? extends T> supplier;
private T value; // will behave as a volatile in reality, because a supplier volatile read will update all fields (see https://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html#volatile)

public T get() {
    return (supplier == null) ? value : computeValue();
}

private synchronized T computeValue() {
    final Supplier<? extends T> s = supplier;
    if (s != null) {
        value = s.get();
        supplier = null;
    }
    return value;
}
This is the famous "double-checked locking" pattern, but it looks broken to me. Suppose this code runs in a multithreaded environment. If get() is called by a first thread and the supplier constructs a new object, then (as I see it) another thread could observe the semi-constructed object, because the following reordering is possible:
private synchronized T computeValue() {
    final Supplier<? extends T> s = supplier;
    if (s != null) {
        // value = s.get(); suppose supplier = () -> new AnyClass(x, y, z)
        temp = calloc(sizeof(AnyClass));
        temp.<init>(x, y, z);
        value = temp;    // this can be reordered with the line above
        supplier = null;
    }
    return value;
}
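For comparison, here is a sketch of the classic broken double-checked locking idiom (my own hypothetical Holder class, not Vavr's code), where exactly this reordering is the problem because the guarding field is not volatile:

class Holder {
    int x, y;                                        // deliberately not final
    Holder(int x, int y) { this.x = x; this.y = y; }
}

class BrokenLazyHolder {
    private static Holder instance;                  // not volatile -> broken

    static Holder get() {
        if (instance == null) {                      // first, unsynchronized check
            synchronized (BrokenLazyHolder.class) {
                if (instance == null) {              // second check under the lock
                    instance = new Holder(1, 2);     // the constructor's writes may be
                }                                    // reordered after this assignment
            }
        }
        return instance;                             // a racing reader may see x/y == 0
    }
}

Vavr's Lazy looks structurally similar to this, which is why I am suspicious.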
Unfortunately, there is a comment next to the value field:
private transient volatile Supplier<? extends T> supplier;
private T value; // will behave as a volatile in reality, because a supplier volatile read will update all fields (see https://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html#volatile)
As far as I know, a volatile read will "refresh" the values of variables that are read after it; in other words, it drains the cache's invalidation queue (if we talk about the MESI coherence protocol). It also prevents loads that occur after the volatile read from being reordered before it. Still, it gives no guarantee that the instructions performing the object's construction (inside the supplier's get() method) will not be reordered.
My question is: why is this code considered thread-safe?
Obviously, I haven't found anything conclusive in the source linked in the comment.
Don’t talk about caches when it comes to Java’s memory model. What matters are formal happens-before relationships.
Note that computeValue() is declared synchronized, so for threads executing the method, reorderings within the method are irrelevant: a thread can only enter the method after every thread that executed it before has exited it, and there is a happens-before relationship between the method exit of the previous thread and the method entry of the next one.
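As a side illustration (my own minimal example, not taken from the Vavr source), this is the monitor guarantee being relied on: an unlock of a monitor happens-before every subsequent lock of the same monitor, so everything the previous holder wrote is visible to the next thread acquiring it.

class MonitorVisibility {
    private int shared;              // plain field, no volatile needed here

    synchronized void writer() {     // thread A
        shared = 42;                 // write made while holding the lock
    }                                // monitor exit by thread A

    synchronized int reader() {      // thread B acquires the same monitor later
        return shared;               // guaranteed to see 42 if writer() finished first
    }
}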
The really interesting method is

public T get() {
    return (supplier == null) ? value : computeValue();
}

which does not use synchronized but relies on the volatile read of supplier. This obviously assumes that the initial state of supplier is non-null, e.g. assigned in the constructor, and that the surrounding code ensures that the get method cannot get executed before that assignment has happened.
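For reference, a minimal sketch of how the surrounding class is assumed to look (the real Vavr Lazy also implements Serializable and is created through factory methods, which are omitted here); the point is that the constructor assigns a non-null supplier, so the only way supplier can later be observed as null is via the write in computeValue():

import java.util.function.Supplier;

// Sketch only; the field and method bodies mirror the snippet quoted in the question.
final class LazySketch<T> {
    private volatile Supplier<? extends T> supplier;
    private T value;

    LazySketch(Supplier<? extends T> supplier) {
        this.supplier = supplier;    // non-null initial state assumed by get()
    }

    public T get() {
        return (supplier == null) ? value : computeValue();
    }

    private synchronized T computeValue() {
        final Supplier<? extends T> s = supplier;
        if (s != null) {
            value = s.get();
            supplier = null;         // published only after value has been written
        }
        return value;
    }
}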
Then, when supplier is read as null, that can only be the result of the write made by the first thread that executed computeValue(), which establishes a happens-before relationship between the writes that thread made before assigning null to supplier and the reads this thread makes after reading null from supplier. So it will perceive a completely initialized state of the object referenced by value.
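To spell the chain out on an excerpt of the sketch above (the labels (1)–(4) are mine, not from the Vavr source):

// Thread A, first caller, inside synchronized computeValue():
value = s.get();          // (1) plain write of the fully constructed object
supplier = null;          // (2) volatile write; (1) happens-before (2) by program order

// Thread B, later caller, inside get():
if (supplier == null)     // (3) volatile read that returns null; (2) happens-before (3)
    return value;         // (4) plain read of value; (3) happens-before (4) by program order

// By transitivity (1) -> (2) -> (3) -> (4), thread B cannot observe a
// semi-constructed object through value.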
So you are right that the things happening in the value’s constructor can be reordered with the assignment of the value reference, but they cannot be reordered with the subsequent write to supplier, which is what the get method relies on.