Assuming:
In this scenario, do you need volatile or AtomicReference or anything like that?
This article states:
Memory barriers are not required if you adhere strictly to the single writer principle.
Which seems to suggest that in the case I'm describing, you really don't need to do anything special.
So, here's a test I ran with curious results:
import org.junit.Test;

public class ThreadTest {
    int onlyWrittenByMain = 0;
    int onlyWrittenByThread = 0;

    @Test
    public void testThread() throws InterruptedException {
        Thread newThread = new Thread(new Runnable() {
            @Override
            public void run() {
                do {
                    onlyWrittenByThread++;
                } while (onlyWrittenByMain < 10 || onlyWrittenByThread < 10);
                System.out.println("thread done");
            }
        });
        newThread.start();
        do {
            onlyWrittenByMain++;
            // Thread.yield();
            // System.out.println("test");
            // new Random().nextInt();
        } while (onlyWrittenByThread < 10);
        System.out.println("main done");
    }
}
Sometimes running this will output "thread done" and then hang forever. Sometimes it completes. So the thread sees the changes the main thread makes, but main apparently doesn't always see the change that the thread makes?
If I uncomment the System.out call, the Thread.yield(), or the Random call, or make onlyWrittenByThread volatile, it completes every time (tried 10+ times).
Does that mean that the blog post I reference above is incorrect? That you do have to have a memory barrier even in the single writer scenario?
No one quite answered this question, so I'll venture a guess: it's likely correct that a memory barrier is not required, but without something to create the happens-before relationship, the Java compiler and HotSpot can perform optimizations (e.g. hoisting) that make the code not do what you want.
Concurrency and Parallelism
Concurrency indicates that more than one thread is making progress, but the threads are not actually running simultaneously. The switching between threads happens quickly enough that the threads might appear to run simultaneously.
In a multithreaded process on a single processor, the processor can switch execution resources between threads, resulting in concurrent execution.
One way you can achieve asynchronous execution without running multiple threads is to use the command pattern with a command queue. You can implement it in any programming language.
A multi-threaded program will take advantage of additional threads, and cores, to distribute the load of the program more efficiently, as opposed to having one poor core do all the work while the others simply watch. The premise of concurrency is to run two or more different programs, kind of at the same time.
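A minimal sketch of that idea (the class and method names here are illustrative, not from the question): producers enqueue Runnable commands and a single worker thread drains the queue, so work runs asynchronously on one thread, and no shared-state synchronization beyond the queue itself is needed.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class CommandQueue {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private final Thread worker = new Thread(() -> {
        try {
            while (true) {
                queue.take().run(); // blocks until a command is available
            }
        } catch (InterruptedException e) {
            // interrupt signals shutdown
        }
    });

    public void start() { worker.start(); }

    public void submit(Runnable command) { queue.add(command); }

    public void stop() { worker.interrupt(); }

    public static void main(String[] args) throws InterruptedException {
        CommandQueue cq = new CommandQueue();
        cq.start();
        java.util.concurrent.atomic.AtomicInteger sum =
                new java.util.concurrent.atomic.AtomicInteger();
        // Commands execute in FIFO order on the single worker thread.
        cq.submit(() -> sum.addAndGet(2));
        cq.submit(() -> sum.addAndGet(4));
        cq.submit(() -> System.out.println("sum = " + sum.get()));
        Thread.sleep(100);
        cq.stop();
    }
}
```

Because a single consumer drains the queue in order, the commands never run concurrently with each other, which is exactly what makes this pattern a thread-free-feeling alternative.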
The issue is caching on a multicore system: without something like volatile forcing the happens-before relationship (memory-barrier stuff), your writer thread could be writing to a copy of the variable in the cache on its core while your reader threads read another copy of the variable on another core. The other issue is atomicity, which another answer addresses.
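To make the visibility guarantee concrete, here is a hedged sketch (not code from the question) of the classic safe-publication idiom: a single writer stores a plain field and then sets a volatile flag. The volatile write to the flag happens-before the reader's volatile read that observes true, so the reader is also guaranteed to see the earlier plain write.

```java
public class SafePublication {
    static int payload;             // plain field, published via the flag
    static volatile boolean ready;  // volatile write/read creates happens-before

    // Starts a reader, publishes 42 through the volatile flag, and returns
    // what the reader observed.
    static int publishAndObserve() {
        payload = 0;
        ready = false;
        final int[] observed = new int[1];
        Thread reader = new Thread(() -> {
            while (!ready) {        // spin on the volatile flag
                Thread.yield();
            }
            observed[0] = payload;  // guaranteed to see 42 once ready is true
        });
        reader.start();
        payload = 42;               // 1. plain write
        ready = true;               // 2. volatile write publishes it
        try {
            reader.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return observed[0];
    }

    public static void main(String[] args) {
        System.out.println("observed = " + publishAndObserve());
    }
}
```

Without `volatile` on `ready`, the reader's spin loop could be hoisted exactly as described in the question, and the loop might never terminate.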
The main problem in your code is not so much what the CPU will do with it as what the JVM will do with it: you have a high risk of variable hoisting. What that means is that the JMM (Java Memory Model) allows a JVM to rewrite:
public void run() {
    do {
        onlyWrittenByThread++;
    } while (onlyWrittenByMain < 10 || onlyWrittenByThread < 10);
    System.out.println("thread done");
}
as this other piece of code (notice the local variables):
public void run() {
    int localA = onlyWrittenByMain;
    int localB = onlyWrittenByThread;
    do {
        localB++;
    } while (localA < 10 || localB < 10);
    System.out.println("thread done");
}
It happens that this is a fairly common optimisation made by HotSpot. In your case, once that optimisation is made (probably not straight away when you call that method, but after a few milliseconds), nothing you do in other threads will ever be visible from that thread.
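A sketch of the corresponding fix: declaring both shared counters volatile forces every read to go back to the field, which forbids that hoisting. One deliberate change here that is not in the original test: main's loop condition is symmetrized so that neither thread can be left spinning forever if the other finishes its count first.

```java
public class ThreadTestFixed {
    volatile int onlyWrittenByMain = 0;   // written only by the main thread
    volatile int onlyWrittenByThread = 0; // written only by the new thread

    public void run() {
        Thread newThread = new Thread(() -> {
            do {
                onlyWrittenByThread++;
            } while (onlyWrittenByMain < 10 || onlyWrittenByThread < 10);
            System.out.println("thread done");
        });
        newThread.start();
        do {
            onlyWrittenByMain++;
        } while (onlyWrittenByThread < 10 || onlyWrittenByMain < 10);
        System.out.println("main done");
        try {
            newThread.join(); // counters are final once both loops have exited
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        new ThreadTestFixed().run();
    }
}
```

Each field still has exactly one writer, consistent with the single writer principle; volatile only supplies the happens-before edge that makes each writer's updates visible to the other thread.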