
Do threads have a distinct heap?

People also ask

Do threads share the same heap?

It is important to distinguish between these two types of process memory because each thread will have its own stack, but all the threads in a process will share the heap.

Can two threads share the same heap?

Yes. There is a single heap that is shared by all threads in the Java process.

Is thread stack part of heap?

Stack Memory in Java is used for static memory allocation and the execution of a thread. It contains primitive values that are specific to a method and references to objects in the heap that are referenced from the method. Access to this memory is in Last-In-First-Out (LIFO) order.

Do threads share heap variables?

The answer is simple: all threads in C share the same address space. This means all memory, the heap included, is shared between all threads.


No. All threads share a common heap.

Each thread has a private stack, which it can quickly add and remove items from. This makes stack-based memory fast, but if you use too much stack memory, as occurs with infinite recursion, you will get a stack overflow.

Since all threads share the same heap, access to the allocator/deallocator must be synchronized. There are various methods and libraries for avoiding allocator contention.

Some languages allow you to create private pools of memory, or individual heaps, which you can assign to a single thread; see the sketch below.
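As a rough sketch of that idea (not any particular library's API - the pool type and the function names here are made up for illustration), a thread can grab one big block up front and then carve allocations out of it with no locking at all, since no other thread ever touches that pool:

    #include <stdlib.h>
    #include <stddef.h>

    /* A tiny per-thread bump allocator: each thread owns one pool,
       so allocations from it need no synchronization. Illustrative only. */
    typedef struct {
        char  *buf;
        size_t size;
        size_t used;
    } thread_pool;

    static int pool_init(thread_pool *p, size_t size) {
        p->buf = malloc(size);        /* one (possibly locked) call up front */
        p->size = size;
        p->used = 0;
        return p->buf != NULL;
    }

    static void *pool_alloc(thread_pool *p, size_t n) {
        n = (n + 15) & ~(size_t)15;   /* keep allocations 16-byte aligned */
        if (p->used + n > p->size)
            return NULL;              /* pool exhausted */
        void *out = p->buf + p->used;
        p->used += n;
        return out;
    }

    static void pool_destroy(thread_pool *p) {
        free(p->buf);                 /* everything the thread allocated goes at once */
    }

Freeing the whole pool when its thread exits also gives you the "a dying thread takes its allocations with it" property mentioned for Symbian further down.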


By default, C has only a single heap.

That said, some thread-aware allocators will partition the heap so that each thread has its own area to allocate from. The idea is that this should make the heap scale better.

One example of such a heap is Hoard.
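Hoard is normally dropped in without any code changes - on Linux you would typically preload it with something like LD_PRELOAD=libhoard.so ./yourprog (the exact library name is an assumption; check Hoard's documentation for your platform) - so a plain pthreads program such as this made-up allocation-churn test is enough to exercise it, once normally and once with the allocator preloaded:

    #include <pthread.h>
    #include <stdlib.h>

    /* Each thread does many small, independent allocations: the workload
       that per-thread heaps such as Hoard are designed to scale. */
    static void *churn(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            void *p = malloc(64);
            free(p);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, churn, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        return 0;
    }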


Depends on the OS. The standard C runtime on Windows and Unix-like systems uses a shared heap across threads. This means locking every malloc/free.

On Symbian, for example, each thread comes with its own heap, although threads can share pointers to data allocated in any heap. Symbian's design is better in my opinion, since it not only eliminates the need for locking during alloc/free, but also encourages clean specification of data ownership among threads. Also, when a thread dies, it takes all the objects it allocated along with it - i.e. it cannot leak objects it has allocated, which is an important property to have on mobile devices with constrained memory.

Erlang also follows a similar design where a "process" acts as a unit of garbage collection. All data is communicated between processes by copying, except for binary blobs which are reference counted (I think).


Each thread has its own stack (and therefore its own call stack and local variables).

All threads share the same heap.
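A minimal pthreads sketch of that split (the names are just for this example): each worker gets its own copy of local on its own stack, while both see the same heap object through the shared pointer:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int *shared;                    /* points into the heap, visible to all threads */

    static void *worker(void *arg) {
        int local = (int)(long)arg;        /* lives on this thread's own stack */
        *shared += local;                  /* every thread reaches the same heap object */
        return NULL;
    }

    int main(void) {
        shared = malloc(sizeof *shared);
        *shared = 0;

        pthread_t a, b;
        pthread_create(&a, NULL, worker, (void *)1L);
        pthread_join(a, NULL);             /* run the workers one after the other to */
        pthread_create(&b, NULL, worker, (void *)2L);
        pthread_join(b, NULL);             /* keep the example free of data races */

        printf("%d\n", *shared);           /* prints 3 */
        free(shared);
        return 0;
    }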


It depends on what exactly you mean when saying "heap".

All threads share the address space, so heap-allocated objects are accessible from all threads. Technically, stacks are shared as well in this sense, i.e. nothing prevents you from accessing another thread's stack (though it would almost never make any sense to do so).
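For instance, in this contrived sketch the main thread hands a worker a pointer to a variable on its own stack and the worker writes through it - legal, as long as the frame that owns the variable is still alive:

    #include <pthread.h>
    #include <stdio.h>

    static void *poke(void *arg) {
        int *p = arg;                   /* points into the main thread's stack */
        *p = 42;
        return NULL;
    }

    int main(void) {
        int on_main_stack = 0;          /* a local on the main thread's stack */
        pthread_t t;
        pthread_create(&t, NULL, poke, &on_main_stack);
        pthread_join(t, NULL);          /* the frame stays alive while we wait */
        printf("%d\n", on_main_stack);  /* prints 42 */
        return 0;
    }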

On the other hand, there are heap structures used to allocate memory. That is where all the bookkeeping for heap memory allocation is done. These structures are carefully organized to minimize contention between threads - so some threads might share a heap structure (an arena), and some might use distinct arenas.
See the following thread for an excellent explanation of the details: How does malloc work in a multithreaded environment?
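As a concrete example, glibc's malloc maintains several such arenas for multithreaded programs. If you are on glibc (an assumption about your platform; this is not portable), you can cap the number of arenas with mallopt, which trades contention for memory footprint:

    #include <malloc.h>   /* glibc-specific: mallopt and M_ARENA_MAX */

    int main(void) {
        /* Force glibc malloc to use a single arena, so all threads contend
           on one set of heap structures instead of spreading across several. */
        mallopt(M_ARENA_MAX, 1);
        /* ... create threads and allocate as usual ... */
        return 0;
    }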


Typically, threads share the heap and other resources; however, there are thread-like constructions that don't. Among these are Erlang's lightweight processes and UNIX's full-on processes (created with a call to fork()). You might also be working on multi-machine concurrency, in which case your inter-thread communication options are considerably more limited.
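To see the contrast with fork(), here is a small sketch: after the fork, the child only has a copy-on-write copy of the parent's heap, so its write never shows up in the parent:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int *heap_val = malloc(sizeof *heap_val);
        *heap_val = 1;

        pid_t pid = fork();
        if (pid == 0) {                /* child: its heap is now a private copy */
            *heap_val = 99;
            _exit(0);
        }
        waitpid(pid, NULL, 0);
        printf("%d\n", *heap_val);     /* still prints 1 in the parent */
        free(heap_val);
        return 0;
    }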