I know that the JVM memory model is made for the lowest common denominator of CPUs, so it has to assume the weakest possible memory model of a CPU on which the JVM can run (e.g. ARM).
Now, considering that x64 has a fairly strong memory model, what synchronization practices can I ignore if I know my program will only run on 64-bit x86 CPUs? Also, does this still apply when my program is run under virtualization?
Example:
It is known that the JVM's memory model requires synchronizing read/write access to longs and doubles, but reads/writes of 32-bit primitives like int and float can be assumed to be atomic.
However, if I know that I am running on a 64-bit x86 machine, can I skip the locks on longs/doubles, knowing that the CPU will read/write 64-bit values atomically, and just keep them volatile (as I would with ints/floats)?
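For concreteness, here is a minimal sketch of the kind of code I have in mind (the class and field names are just illustrative):

```java
// Shared state written by one thread and read by another.
// The question: can the lock be dropped on x64 because
// 64-bit loads/stores are atomic on that hardware?
class Sampler {
    private long lastTimestamp;      // not volatile, not synchronized

    void record(long now) {          // called from a producer thread
        lastTimestamp = now;         // 64-bit write: may tear per JLS 17.7
    }

    long read() {                    // called from a consumer thread
        return lastTimestamp;        // may observe half of a write
    }
}
```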
I know that the JVM memory model is made for the lowest common denominator of CPUs, so it has to assume the weakest possible memory model of a CPU on which the JVM can run (e.g. ARM).
That's not correct. The JMM resulted from a compromise among a variety of competing forces: the desire for a weaker memory model so that programs can go faster on hardware with weak memory models; the desire of compiler writers who want certain optimizations to be allowed; and the desire for the results of parallel Java programs to be correct and predictable, and if possible(!) understandable to Java programmers. See Sarita Adve's CACM article for a general overview of memory model issues.
Considering that x64 has a fairly strong memory model, what synchronization practices can I ignore assuming I know my program will only run on [x64] CPUs?
None. The issue is that the memory model applies not only to the underlying hardware but also to the JVM that's executing your program, and in practice mostly to its JIT compiler. The compiler might decide to apply certain optimizations that are allowed within the memory model, but if your program makes unwarranted assumptions about memory behavior based on the underlying hardware, your program will break.
You asked about x64 and atomic 64-bit writes. It may be that no word tearing will ever occur on an x64 machine. I doubt that any JIT compiler would tear a 64-bit value into 32-bit writes as an optimization, but you never know. However, it seems unlikely that you could use this feature to avoid synchronization or volatile fields in your program. Without these, writes to these variables might never become visible to other threads, or they could arbitrarily be re-ordered with respect to other writes, possibly leading to bugs in your program.
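To make the visibility point concrete, here is a minimal sketch (the names are just for illustration) of code that can hang forever even though every individual store is atomic on x64:

```java
// Even if the store to 'done' is atomic on the hardware, nothing in the
// Java Memory Model guarantees the reading thread will ever observe it,
// and the JIT is free to hoist the read out of the loop.
class StopFlag {
    private boolean done;            // not volatile: no visibility guarantee

    void runWorker() {
        while (!done) {              // may be compiled into an infinite loop
            // do work
        }
    }

    void shutdown() {
        done = true;                 // other threads may never see this write
    }
}
```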
My advice is first to apply synchronization properly to get your program correct. You might be pleasantly surprised. The synchronization operations have been heavily optimized and can be very fast in the common case. If you find there are bottlenecks, consider using optimizations like lock splitting, the use of volatiles, or converting to non-blocking algorithms.
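As a rough sketch of what that advice might look like for the long field from the question (the class names are illustrative, not prescriptive), first a plain synchronized version and then a non-blocking alternative from java.util.concurrent.atomic:

```java
import java.util.concurrent.atomic.AtomicLong;

// Option 1: plain synchronization. Correct on every JVM; uncontended
// locks are heavily optimized and cheap in the common case.
class SynchronizedSampler {
    private long lastTimestamp;

    synchronized void record(long now) { lastTimestamp = now; }
    synchronized long read()           { return lastTimestamp; }
}

// Option 2: a non-blocking alternative that also supports atomic
// read-modify-write operations (increments, compare-and-set, etc.).
class AtomicSampler {
    private final AtomicLong lastTimestamp = new AtomicLong();

    void record(long now) { lastTimestamp.set(now); }
    long read()           { return lastTimestamp.get(); }
}
```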
UPDATE
The OP has updated the question to be a bit more specific about using volatile instead of locks and synchronization.

It turns out that volatile not only has memory visibility semantics. It also makes long and double access atomic, which is not the case for non-volatile variables of those types. See JLS section 17.7. You should be able to rely on volatile to provide atomicity on any hardware, not just x64.
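In code, that guarantee means something like the following is safe on any conforming JVM (the class and field names are just illustrative):

```java
class VolatileSampler {
    // Per JLS 17.7, reads and writes of volatile long/double are atomic
    // on every conforming JVM, regardless of the underlying hardware.
    // volatile also provides the usual visibility and ordering guarantees.
    private volatile long lastTimestamp;

    void record(long now) { lastTimestamp = now; }  // atomic 64-bit write
    long read()           { return lastTimestamp; } // atomic 64-bit read
}
```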
While I'm at it, for additional information about the Java Memory Model, see Aleksey Shipilev's JMM Pragmatics talk transcript. (Aleksey is also the JMH guy.) There's lots of detail in this talk, and some interesting exercises to test one's understanding. One overall takeaway of the talk is that it's often a mistake to rely on one's intuition about how the memory model works, e.g. in terms of cache lines or write buffers. The JMM is a formalism about memory operations and various constraints (synchronizes-with, happens-before, etc.) that determine the ordering of those operations. This can have quite counterintuitive results. It's unwise to try to outsmart the JMM by thinking about specific hardware properties. It'll come back to bite you.
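As one small illustration of reasoning with happens-before edges rather than hardware details, here is a standard safe-publication sketch (names are illustrative):

```java
// The volatile write to 'ready' happens-before a subsequent volatile read
// that observes true, so the earlier ordinary write to 'data' is guaranteed
// to be visible -- no reasoning about cache lines or store buffers needed.
class Publisher {
    private int data;
    private volatile boolean ready;

    void publish(int value) {
        data = value;        // ordinary write
        ready = true;        // volatile write: creates the happens-before edge
    }

    Integer tryRead() {
        if (ready) {         // volatile read
            return data;     // guaranteed to see the value written in publish()
        }
        return null;         // not published yet
    }
}
```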
You would still need to handle thread safety, so volatile semantics and memory fences will still matter.
What I mean here is, e.g. in Oracle's Java, most low-level sync operations end up in Unsafe (docjar.com/docs/api/sun/misc/Unsafe.html#getUnsafe), which in turn has a long list of native methods. So in the end, those synchronization practices and lots of other low-level operations are encapsulated by the JVM, where they belong. x64 does not use the same JVM build as x86.
After reading your edited question again: the atomicity of your load/store operations was the topic here. So no, you don't have to worry about the atomicity of 64-bit loads/stores on x64. But since this is not the end of all sync issues, see the other answers.