
Multithreaded Heap Management

In C/C++ I can allocate memory in one thread and delete it in another thread. Yet whenever one requests memory from the heap, the heap allocator needs to walk the heap to find a suitably sized free area. How can two threads access the same heap efficiently without corrupting the heap? (Is this done by locking the heap?)
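For concreteness, a minimal sketch of the pattern being asked about; the producer/consumer roles are illustrative only:

```cpp
// One thread allocates from the shared heap, a different thread frees the block.
#include <thread>

int main() {
    int* p = nullptr;

    std::thread producer([&] {
        p = new int[1024];   // allocation happens on the producer thread
    });
    producer.join();         // join() publishes the pointer before the next thread runs

    std::thread consumer([&] {
        delete[] p;          // deallocation happens on a different thread
    });
    consumer.join();
    return 0;
}
```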

asked Jan 28 '10 by doron

People also ask

Do multiple threads share the same heap?

Multiple threads within one process share heap storage, static storage, and code. Each thread has its own registers and stack. Because the threads share a single address space, synchronization is needed when they access the same memory locations.
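A small sketch of that point: the counter below lives on the heap and is visible to both threads, so a mutex (or some other form of synchronization) is needed when they touch it. The variable names are made up for this example.

```cpp
#include <iostream>
#include <mutex>
#include <thread>

int main() {
    auto* counter = new long(0);   // heap object shared by both threads
    std::mutex m;

    auto work = [&] {
        for (int i = 0; i < 100000; ++i) {
            std::lock_guard<std::mutex> lock(m);  // without this, the increments race
            ++*counter;
        }
    };

    std::thread t1(work), t2(work);
    t1.join();
    t2.join();

    std::cout << *counter << '\n';  // 200000 only because the mutex serializes access
    delete counter;
    return 0;
}
```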

Does multithreading increase memory usage?

Multi-threading avoids much of the additional memory cost because threads share a single address space. Shared memory removes that per-thread memory overhead, but the extra threads still incur the penalty of increased context switching.

What is heap in process management?

The heap is an area of dynamically-allocated memory that is managed automatically by the operating system or the memory manager library. Memory on the heap is allocated, deallocated, and resized regularly during program execution, and this can lead to a problem called fragmentation.

Is heap memory shared by threads?

We talked about the two types of memory available to a process or a thread, the stack and the heap. It is important to distinguish between these two types of process memory because each thread will have its own stack, but all the threads in a process will share the heap.


3 Answers

In general, you do not need to worry about the thread-safety of your memory allocator. All standard memory allocators -- that is, those shipped with MacOS, Windows, Linux, etc. -- are thread-safe. Locks are a standard way of providing thread-safety, though it is possible to write a memory allocator that only uses atomic operations rather than locks.
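To illustrate the locking idea in the simplest possible terms, here is a hedged sketch; the class and its methods are invented for this example, and a real allocator manages its own free lists rather than forwarding to malloc as done here.

```cpp
// Hypothetical sketch: every entry into the allocator takes a single mutex,
// so concurrent calls cannot corrupt the heap's internal metadata.
#include <cstdlib>
#include <mutex>

class LockedHeap {
public:
    void* allocate(std::size_t n) {
        std::lock_guard<std::mutex> lock(mutex_);  // serialize the heap walk
        return std::malloc(n);                     // stand-in for the real free-list search
    }
    void deallocate(void* p) {
        std::lock_guard<std::mutex> lock(mutex_);  // deallocation takes the same lock
        std::free(p);
    }
private:
    std::mutex mutex_;   // one global lock protects all heap metadata
};
```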

Now it is an entirely different question whether those memory allocators scale; that is, is their performance independent of the number of threads performing memory operations? In most cases, the answer is no; they either slow down or can consume a lot more memory. The first scalable allocator in both dimensions (speed and space) is Hoard (which I wrote); the Mac OS X allocator is inspired by it -- and cites it in the documentation -- but Hoard is faster. There are others, including Google's tcmalloc.
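As a rough, hypothetical way to probe that scaling yourself, a harness like the following can be run with different thread counts and relinked (or preloaded) against the allocators mentioned above; the allocation size and iteration count are arbitrary.

```cpp
// Each thread does a fixed amount of malloc/free churn; with a scalable
// allocator the wall-clock time should stay roughly flat as threads grow.
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <thread>
#include <vector>

int main(int argc, char** argv) {
    const int threads = (argc > 1) ? std::atoi(argv[1]) : 4;

    auto churn = [] {
        for (int i = 0; i < 1000000; ++i) {
            void* p = std::malloc(64);
            std::free(p);
        }
    };

    auto start = std::chrono::steady_clock::now();
    std::vector<std::thread> pool;
    for (int i = 0; i < threads; ++i) pool.emplace_back(churn);
    for (auto& t : pool) t.join();
    auto stop = std::chrono::steady_clock::now();

    std::cout << threads << " threads: "
              << std::chrono::duration<double>(stop - start).count() << " s\n";
    return 0;
}
```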

answered Sep 20 '22 by EmeryBerger


Yes, an "ordinary" heap implementation supporting multithreaded code will necessarily include some sort of locking to ensure correct operation. Under fairly extreme conditions (a lot of heap activity) this can become a bottleneck; more specialized heaps, which generally provide some sort of thread-local heap, are available and can help in this situation. I've used Intel TBB's "scalable allocator" to good effect. tcmalloc and jemalloc are other examples of mallocs implemented with multithreaded scaling in mind.
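As a hedged sketch of what using TBB's scalable allocator can look like (this assumes TBB is installed and the program is linked with -ltbbmalloc; the header path can vary between TBB versions):

```cpp
#include <vector>
#include <tbb/scalable_allocator.h>

int main() {
    // Element storage comes from TBB's per-thread memory pools
    // instead of the default global heap.
    std::vector<int, tbb::scalable_allocator<int>> v;
    for (int i = 0; i < 1000; ++i)
        v.push_back(i);
    return 0;
}
```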

Some timing comparisons between single-threaded and multithread-aware mallocs are linked here.

answered Sep 23 '22 by timday


This is an Operating Systems question, so the answer is going to depend on the OS.

On Windows, each process gets its own heap. That means multiple threads in the same process are (by default) sharing a heap. Thus the OS has to thread-synchronize its allocation and deallocation calls to prevent heap corruption. If you don't like the idea of the possible contention that may ensue, you can get around it by using the Heap* routines. You can even overload malloc (in C) and new (in C++) to call them.
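A hedged sketch of that idea, backing the global new/delete with a private heap created via the Win32 Heap* routines; error handling, the array forms, and care about allocations made before the heap exists are all omitted for brevity.

```cpp
#include <windows.h>
#include <new>

namespace {
    // Growable private heap; serialized by default. One heap per thread or
    // subsystem (or HEAP_NO_SERIALIZE for single-threaded use) is how
    // contention on the shared process heap is avoided.
    HANDLE g_heap = HeapCreate(0, 0, 0);
}

void* operator new(std::size_t size) {
    void* p = HeapAlloc(g_heap, 0, size ? size : 1);
    if (!p) throw std::bad_alloc();
    return p;
}

void operator delete(void* p) noexcept {
    if (p) HeapFree(g_heap, 0, p);
}

int main() {
    int* x = new int(42);   // comes from the private heap, not the default process heap
    delete x;
    return 0;
}
```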

answered Sep 22 '22 by T.E.D.