What does flushing thread local memory to global memory mean?

I am aware that the purpose of volatile variables in Java is that writes to such variables are immediately visible to other threads. I am also aware that one of the effects of a synchronized block is to flush thread-local memory to global memory.
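For example, this is the kind of guarantee I mean (a made-up sketch; the `Worker` class and `running` flag are just illustrative names):

    class Worker implements Runnable {
        // Without volatile, this thread could keep a stale copy of the flag
        // in a register or CPU cache and never notice the update.
        private volatile boolean running = true;

        public void stop() {
            running = false; // this write is guaranteed to become visible to the worker thread
        }

        @Override
        public void run() {
            while (running) {
                // do some work
            }
        }
    }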

I have never fully understood the references to 'thread-local' memory in this context. I understand that data which only exists on the stack is thread-local, but when talking about objects on the heap my understanding becomes hazy.

I was hoping to get comments on the following points:

  1. When executing on a machine with multiple processors, does flushing thread-local memory simply refer to the flushing of the CPU cache into RAM?

  2. When executing on a uniprocessor machine, does this mean anything at all?

  3. If it is possible for the heap to have the same variable at two different memory locations (each accessed by a different thread), under what circumstances would this arise? What implications does this have for garbage collection? How aggressively do VMs do this kind of thing?

  4. (EDIT: adding question 4) What data is flushed when exiting a synchronized block? Is it everything that the thread has locally? Is it only writes that were made inside the synchronized block?

    Object x = goGetXFromHeap(); // x.f is 1 here    
    Object y = goGetYFromHeap(); // y.f is 11 here
    Object z = goGetZFromHeap(); // z.f is 111 here
    
    y.f = 12;
    
    synchronized(x)
    {
        x.f = 2;
        z.f = 112;
    }
    
    // will only x be flushed on exit of the block? 
    // will the update to y get flushed?
    // will the update to z get flushed?
    

Overall, I think I am trying to understand whether thread-local means memory that is physically accessible by only one CPU, or whether there is logical thread-local heap partitioning done by the VM.

Any links to presentations or documentation would be immensely helpful. I have spent time researching this, and although I have found lots of nice literature, I haven't been able to satisfy my curiosity regarding the different situations & definitions of thread-local memory.

Thanks very much.

asked Mar 22 '10 by ares

2 Answers

The flush you are talking about is known as a "memory barrier". It means that the CPU makes sure that what it sees of the RAM is also viewable from other CPUs/cores. It implies two things:

  • The JIT compiler flushes the CPU registers. Normally, the code may keep a copy of some globally visible data (e.g. instance field contents) in CPU registers. Registers cannot be seen from other threads. Thus, half the work of synchronized is to make sure that no such cache is maintained.

  • The synchronized implementation also performs a memory barrier to make sure that all the changes to RAM from the current core are propagated to main RAM (or that at least all other cores are aware that this core has the latest values -- cache coherency protocols can be quite complex).

The second job is trivial on uniprocessor systems (that is, systems with a single CPU that has a single core), but such systems are becoming rarer nowadays.
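To make the first point concrete, here is a hedged sketch (the SharedCounter class and its fields are invented names): because both threads synchronize on the same lock, the writer's cached value must be published when it releases the lock, and the reader must re-read it when it acquires the lock.

    class SharedCounter {
        private final Object lock = new Object();
        private int value; // deliberately not volatile

        void increment() {
            synchronized (lock) {
                value++; // the updated value is published when the lock is released
            }
        }

        int read() {
            synchronized (lock) {
                return value; // acquiring the same lock guarantees the latest published value is seen
            }
        }
    }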

As for thread-local heaps, this can theoretically be done, but it is usually not worth the effort because nothing says which parts of memory would have to be flushed by a given synchronized. This is a limitation of the threads-with-shared-memory model: all memory is supposed to be shared. At the first synchronized it encounters, the JVM would then have to flush all of its "thread-local heap objects" to main RAM.

Yet recent JVMs from Sun can perform an "escape analysis", in which the JVM succeeds in proving that some instances never become visible to other threads. This is typical of, for instance, the StringBuilder instances created by javac to handle string concatenation. If an instance is never passed as a parameter to other methods, then it does not become "globally visible". This makes it eligible for thread-local heap allocation, or even, under the right circumstances, for stack-based allocation. Note that in this situation there is no duplication; the instance is not in "two places at the same time". It is only that the JVM can keep the instance in a private place which does not incur the cost of a memory barrier.
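As a rough illustration (the class and method names are made up), this is the shape of code that escape analysis can prove harmless: the builder is never stored in a field or handed to another object, so it never becomes globally visible.

    class Labels {
        static String label(int id) {
            StringBuilder sb = new StringBuilder(); // never stored in a field, never passed to another method
            sb.append("item-");
            sb.append(id);
            return sb.toString(); // only the resulting String escapes; the builder itself does not
        }
    }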

answered by Thomas Pornin

Whether the current contents of an object's memory are visible to another thread, in the absence of synchronization, is really an implementation detail.

Certainly there are limits, in that not all memory is kept in duplicate and not all instructions are reordered, but the point is that the underlying JVM has the option to do so if it finds that to be a more optimized way of working.

The thing is that the heap really is "properly" stored in main memory, but accessing main memory is slow compared to accessing the CPU's cache or keeping the value in a register inside the CPU. By requiring that the value be written out to memory (which is what synchronization does, at least when the lock is released), it forces the write to main memory. If the JVM is free to ignore that requirement, it can gain performance.
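As a hedged illustration of that point (the class and flag names are invented), this is the classic visibility bug: because nothing forces the write to `done` out of the main thread's working copy, or forces the worker to re-read it, the loop is allowed to spin forever.

    class VisibilityBug {
        private static boolean done; // not volatile, no synchronization

        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                while (!done) {
                    // the JIT may hoist the read of 'done' out of the loop,
                    // so this thread may never observe the update below
                }
                System.out.println("worker observed done");
            });
            worker.start();
            Thread.sleep(1000);
            done = true; // without a barrier, this write may never become visible to the worker
            worker.join();
        }
    }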

In terms of what happens on a one-CPU system, multiple threads could still keep values in a cache or register, even while another thread is executing. There is no guarantee, in any scenario, that a value written without synchronization is visible to another thread, although it is obviously more likely to be. Outside of mobile devices, of course, the single-CPU machine is going the way of the floppy disk, so this is not going to be a very relevant consideration for much longer.

For more reading, I recommend Java Concurrency in Practice. It is really a great practical book on the subject.

answered by Yishai