Linux heap - is doing a ton of new/deletes okay or does the heap become badly fragmented?

I'm not familiar with how the Linux heap is allocated.

I'm calling malloc()/free() many many times a second, always with the same sizes (there are about 10 structs, each fixed size). Aside from init time, none of my memory remains allocated for long periods of time.

Is this considered poor form with the standard heap? (I'm sure someone will ask 'which heap are you using?' Honestly, I'm not sure; whatever the default allocator is.)

Should I instead use a free list, or does the heap tolerate lots of identical-size allocations? I'm trying to balance readability with performance.
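
To illustrate what I mean by a free list, here's a rough sketch; the names (my_struct, pool_get, pool_put) are made up for the example, and it ignores thread safety:

    /* Rough sketch of a per-type free list for fixed-size structs.
     * Names here (my_struct, pool_get, pool_put) are invented for
     * illustration; not code from any particular library.
     * Not thread-safe. */
    #include <stdlib.h>

    struct my_struct {
        int payload[16];          /* stand-in for the real fields */
    };

    /* Freed nodes are chained through their own storage. */
    union node {
        struct my_struct value;
        union node *next;
    };

    static union node *free_head = NULL;

    static struct my_struct *pool_get(void)
    {
        if (free_head) {
            union node *n = free_head;
            free_head = n->next;
            return &n->value;
        }
        return malloc(sizeof(union node));   /* fall back to the heap */
    }

    static void pool_put(struct my_struct *p)
    {
        union node *n = (union node *)p;     /* same address as &n->value */
        n->next = free_head;
        free_head = n;
    }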

Any tools to help me measure?
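
(I've seen glibc's malloc_stats() and Valgrind's massif profiler mentioned; something like this quick sketch is the kind of measurement I mean:)

    /* Sketch: ask glibc's allocator to print its own statistics.
     * malloc_stats() is a glibc extension declared in <malloc.h>;
     * it writes arena sizes and in-use bytes to stderr. */
    #include <malloc.h>
    #include <stdlib.h>

    int main(void)
    {
        void *p = malloc(1024);
        malloc_stats();   /* snapshot while the block is live */
        free(p);
        malloc_stats();   /* compare snapshots to spot growth */
        return 0;
    }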

asked Aug 27 '11 by stuck

People also ask

What causes heap fragmentation?

Allocating and freeing blocks of varying sizes leaves holes in the heap. Later requests that don't fit those holes force the heap to grow instead, which increases fragmentation.

What is heap fragmentation?

Heap fragmentation is a state in which available memory is broken into small, noncontiguous blocks. When a heap is fragmented, memory allocation can fail even when the total available memory in the heap is enough to satisfy a request, because no single block of memory is large enough.
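
As a contrived illustration of that (a sketch, not code from this thread): free every other small block and plenty of memory is free in total, yet no single hole is big enough for a larger request:

    /* Contrived fragmentation sketch: allocate many small blocks,
     * then free every other one. Roughly 32 KB ends up free, but
     * only as scattered 64-byte holes, so a single 16 KB request
     * cannot be satisfied from them and the heap must grow. */
    #include <stdlib.h>

    int main(void)
    {
        enum { N = 1000, SMALL = 64 };
        void *blocks[N];

        for (int i = 0; i < N; i++)
            blocks[i] = malloc(SMALL);

        for (int i = 0; i < N; i += 2)   /* punch holes */
            free(blocks[i]);

        void *big = malloc(16 * 1024);   /* can't fit in the holes */

        free(big);
        for (int i = 1; i < N; i += 2)   /* clean up the rest */
            free(blocks[i]);
        return 0;
    }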

Is heap memory fixed?

Heap memory is not fixed in size; it grows and shrinks at run time as the program allocates and frees memory.



1 Answer

First of all, unless you have measured a problem with memory usage blowing up, don't even think about using a custom allocator. It's one of the worst forms of premature optimization.

At the same time, even if you do have a problem, a better solution than a custom allocator would be figuring out why you're allocating and freeing objects so much, and fixing the design issue that's causing it.

To address your specific question: glibc's allocator (ptmalloc2) is derived from Doug Lea's dlmalloc, which is near-optimal when it comes to fragmentation. The only way you'll get it to badly fragment memory is the unavoidable way: by allocating objects with radically different lifetimes in alternation, e.g. allocating a large number of objects and then freeing only every other one. I think you'll have a hard time coming up with an allocation pattern that ends up using more total memory than a pool allocator would.
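
In particular, a steady stream of same-size allocations is about the friendliest pattern there is: glibc keeps freed chunks in per-size bins and typically hands the just-freed chunk straight back on the next request, so your churn already behaves much like a free list. A minimal sketch (the address reuse shown is typical glibc behavior, not a guarantee):

    /* Sketch: with fixed-size alloc/free churn, glibc usually
     * returns the chunk that was just freed, because it sits at
     * the head of a per-size bin. Typical behavior, not a promise. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        void *a = malloc(128);
        printf("first:  %p\n", a);
        free(a);

        void *b = malloc(128);   /* usually the same address as a */
        printf("second: %p\n", b);
        free(b);
        return 0;
    }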

answered Nov 02 '22 by R.. GitHub STOP HELPING ICE