Let's assume the following is about to happen in a truly parallel environment, in one VM, at the same time:
// Thread 1:
new Cat()
// Thread 2:
new Dog()
// Thread 3:
new Mouse()
How does the JVM ensure thread safety of memory allocations on the heap?
The heap is shared by all threads and has its own internal data.
For simplicity, assume a simple compacting garbage collector implementation (e.g. -XX:+UseSerialGC or -XX:+UseParallelGC), with a simple incremental pointer marking the start of free space and one continuous free area in Eden (heap).
There must be some kind of synchronization between threads when heap space is allocated for the Cat, Dog and Mouse instances, otherwise they could easily end up overwriting each other. Does that mean that every new operator hides some synchronized block inside? This way, many "lock-free" algorithms are not in fact completely lock-free ;)
I assume that memory allocations are made by the application threads themselves, synchronously, not by other dedicated thread(s).
I am aware of TLABs (Thread Local Allocation Buffers). They allow threads to have separate memory areas in Eden for allocations, so no synchronization is required. But I am not sure whether TLABs are enabled by default; it is a rather obscure HotSpot feature. Note: do not confuse TLABs with ThreadLocal variables!
I also assume that with more complex garbage collectors, like G1, or with non-compacting garbage collectors, more complex heap data structures have to be maintained, like a list of free blocks for CMS, so more synchronization is needed.
UPDATE: Let me clarify this. I will accept an answer for the HotSpot JVM implementation, covering variants both with and without an active TLAB.
UPDATE: According to my quick test, TLABs are enabled by default on my 64-bit JDK 7 for the Serial, Parallel and CMS garbage collectors, but not for the G1 GC.
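One hedged way to run a similar check yourself (assuming a HotSpot JVM, where the com.sun.management diagnostic API is available) is to read the UseTLAB flag at runtime; `java -XX:+PrintFlagsFinal -version | grep UseTLAB` on the command line prints the same information.

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

// Reads the UseTLAB VM flag via the HotSpot diagnostic MXBean.
// This bean is HotSpot-specific; other JVMs may not expose it.
public class CheckTlab {
    static String useTlab() {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        return bean.getVMOption("UseTLAB").getValue();
    }

    public static void main(String[] args) {
        System.out.println("UseTLAB = " + useTlab());
    }
}
```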
I've briefly described the allocation procedure in HotSpot JVM in this answer.
How an object is allocated depends on the area of the heap where it is allocated.
TLABs are the areas in Eden reserved for thread-local allocations. Each thread may create many TLABs: as soon as one gets filled, a new TLAB is created using the Eden allocation technique described below. I.e. creating a new TLAB is something like allocating a large metaobject directly in Eden.
Each Java thread has two pointers: tlab_top and tlab_limit. An allocation in a TLAB is just a pointer increment. No synchronization is needed since the pointers are thread-local.
if (tlab_top + object_size <= tlab_limit) {
    // fast path: bump the thread-local pointer; no synchronization needed
    new_object_address = tlab_top;
    tlab_top += object_size;
}
// otherwise take the slow path: allocate a new TLAB or go directly to Eden
-XX:+UseTLAB is enabled by default. If you turn it off, objects will be allocated in Eden as described below.
If there is not enough space in TLAB for a new object, either a new TLAB is created or the object is allocated directly in Eden (depending on TLAB waste limit and other ergonomics parameters).
Allocation in Eden is similar to allocation in a TLAB. There are also two pointers, eden_top and eden_end, but they are global for the whole JVM. The allocation is also a pointer increment, but atomic operations are used since the Eden space is shared between all threads. Thread safety is achieved by using architecture-specific atomic instructions: CAS (e.g. LOCK CMPXCHG on x86) or LL/SC (on ARM).
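The shared bump pointer with CAS can be modeled in plain Java with an AtomicLong standing in for eden_top. This is a sketch of the idea, not HotSpot's actual code; the class and method names are made up:

```java
import java.util.concurrent.atomic.AtomicLong;

// Model of Eden's shared bump-pointer allocation: many threads CAS-increment
// a shared "top" pointer; each CAS winner owns the [oldTop, oldTop + size)
// range exclusively, so no two objects can ever overlap.
public class EdenModel {
    private final AtomicLong top = new AtomicLong(0);
    private final long end;

    public EdenModel(long capacity) {
        this.end = capacity;
    }

    /** Returns the new object's start address, or -1 if Eden is full. */
    public long allocate(long size) {
        while (true) {
            long oldTop = top.get();
            long newTop = oldTop + size;
            if (newTop > end) {
                return -1;                     // a real VM would trigger a GC here
            }
            if (top.compareAndSet(oldTop, newTop)) {
                return oldTop;                 // CAS won: this range is ours
            }
            // CAS lost: another thread allocated first; retry with the new top.
        }
    }
}
```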
Allocation in the Old Generation depends on the GC algorithm; e.g. CMS uses free lists. Allocation in the Old Generation is typically performed only by the Garbage Collector itself, so it knows how to synchronize its own threads (generally with a mix of allocation buffers, lock-free atomic operations and mutexes).
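To illustrate why free-list allocation needs more synchronization than pointer bumping, here is a minimal first-fit free-list allocator. This is an illustrative sketch, not CMS code; real collectors use size-segregated lists, coalescing, and finer-grained synchronization:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal first-fit free-list allocator. The list of free blocks is shared
// mutable state, so allocate/free must be synchronized; a single pointer
// bump is no longer enough once free space is fragmented.
public class FreeListAllocator {
    private static final class Block {
        long start, size;
        Block(long start, long size) { this.start = start; this.size = size; }
    }

    private final List<Block> freeList = new ArrayList<>();

    public FreeListAllocator(long heapSize) {
        freeList.add(new Block(0, heapSize));   // initially one big free block
    }

    /** First-fit allocation; returns the start address, or -1 if no block fits. */
    public synchronized long allocate(long size) {
        for (Block b : freeList) {
            if (b.size >= size) {
                long addr = b.start;
                b.start += size;                // shrink the block from the front
                b.size -= size;
                return addr;
            }
        }
        return -1;
    }

    /** Returns a block to the free list (no coalescing in this sketch). */
    public synchronized void free(long start, long size) {
        freeList.add(new Block(start, size));
    }
}
```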
This isn't covered by the Java specification, so each JVM can implement allocation however it wants, as long as it works and follows Java's memory guarantees.
A good guess at how this works with a moving GC would be that each thread gets its own allocation zone, where it does a simple pointer increment when allocating objects: very simple and very quick allocation with no locking. When that zone is full, either a new zone is allocated to the thread, or the GC moves all live objects to a contiguous part of the heap and returns the now-empty zones to the threads. I am not sure whether this is how it is actually implemented in any JVM, and it would be complicated by GC synchronization.
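The per-thread-zone guess above can be sketched in Java (and it is essentially how TLABs behave): a shared AtomicLong hands out zones, costing one atomic operation per zone rather than per object, and each thread bumps a plain unsynchronized pointer inside its own zone. Names and sizes here are illustrative, not from any real JVM:

```java
import java.util.concurrent.atomic.AtomicLong;

// Each thread carves a private zone out of a shared region with one atomic
// operation, then bump-allocates inside that zone with no synchronization.
public class ZonedAllocator {
    private static final long ZONE_SIZE = 1024;
    private final AtomicLong sharedTop = new AtomicLong(0);

    // Per-thread zone state; plain fields are safe because each thread only
    // ever touches its own Zone instance.
    private final ThreadLocal<Zone> zone = ThreadLocal.withInitial(this::newZone);

    private static final class Zone { long top; long limit; }

    private Zone newZone() {
        Zone z = new Zone();
        z.top = sharedTop.getAndAdd(ZONE_SIZE); // one atomic op per zone, not per object
        z.limit = z.top + ZONE_SIZE;
        return z;
    }

    /** Returns the object's start address; assumes size <= ZONE_SIZE. */
    public long allocate(long size) {
        Zone z = zone.get();
        if (z.top + size > z.limit) {           // zone exhausted: grab a fresh one,
            z = newZone();                      // wasting the zone's leftover tail
            zone.set(z);
        }
        long addr = z.top;
        z.top += size;                          // lock-free bump within the zone
        return addr;
    }
}
```

Note the trade-off visible in the sketch: retiring a partially used zone wastes its tail, which is exactly the "TLAB waste" that HotSpot's ergonomics parameters tune.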