In the past, when I've worked on long-running C++ daemons I've had to deal with heap fragmentation issues. Tricks like keeping a pool of my large allocations were necessary to keep from running out of contiguous heap space.
Is this still an issue with a 64 bit address space? Perf is not a concern for me, so I would prefer to simplify my code and not deal with things like buffer pools anymore. Does anyone have any experience or stories about this issue? I'm using Linux, but I imagine many of the same issues apply to Windows.
Heap fragmentation is a state in which available memory is broken into small, noncontiguous blocks. When a heap is fragmented, memory allocation can fail even when the total available memory in the heap is enough to satisfy a request, because no single block of memory is large enough.
Fragmentation occurs when a program has obtained memory from the operating system but isn't actually using it. A typical example is a malloc() implementation holding several kilobytes in its free list: memory the program has already acquired from the operating system but is not currently using, scattered in blocks too small to satisfy a larger request.
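To make that concrete, here is a small illustrative C++ sketch (not from the original answer) of the classic pattern that produces fragmentation: allocate many small blocks, free every other one, and the freed memory ends up scattered in holes too small to satisfy a larger request.

    #include <cstdlib>
    #include <vector>

    int main() {
        // Allocate many small blocks, then free every other one.
        // The total freed memory is large, but it is scattered in
        // non-contiguous 64-byte holes between the blocks we kept.
        std::vector<void*> blocks;
        for (int i = 0; i < 100000; ++i)
            blocks.push_back(std::malloc(64));

        for (std::size_t i = 0; i < blocks.size(); i += 2) {
            std::free(blocks[i]);
            blocks[i] = nullptr;
        }

        // Roughly 3 MB has been freed in total, yet no contiguous free
        // run inside that region is larger than 64 bytes, so a request
        // like this one cannot be satisfied from the freed holes; the
        // allocator has to get fresh memory from the OS instead. On a
        // 32-bit process, this pattern repeated with larger sizes is
        // what eventually exhausts contiguous address space.
        void* big = std::malloc(1 << 20);

        std::free(big);
        for (void* p : blocks) std::free(p);
    }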
If you can isolate exactly those places where you're likely to allocate large blocks, you can (on Windows) directly call VirtualAlloc instead of going through the memory manager. This will avoid fragmentation within the normal memory manager.
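A minimal sketch of that approach, assuming a Windows build (error handling omitted for brevity):

    #include <windows.h>
    #include <cstddef>

    // Reserve and commit a large block directly from the OS, bypassing
    // the CRT heap so it cannot contribute to heap fragmentation.
    void* AllocateLarge(std::size_t bytes) {
        return VirtualAlloc(nullptr, bytes,
                            MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    }

    void FreeLarge(void* p) {
        // With MEM_RELEASE, the size argument must be 0.
        if (p) VirtualFree(p, 0, MEM_RELEASE);
    }

On Linux, the analogous move is to map large blocks anonymously with mmap()/munmap(), which keeps them out of the malloc heap in the same way.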
Reducing the number of distinct allocation sizes you use also helps. Employing size classes that increase geometrically saves a lot of fragmentation; for example, each size could be 20% larger than the previous one. "One size fits all" might not be true for memory allocators in embedded systems.
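A sketch of that size-class idea (the 16-byte minimum and 20% growth factor here are just illustrative): round every request up to the nearest member of a geometric series, so a freed block of one class can satisfy any later request in that class.

    #include <cstddef>

    // Round a request up to the nearest size class, where each class is
    // roughly 20% larger than the previous one.
    std::size_t RoundToSizeClass(std::size_t request) {
        std::size_t size = 16;        // smallest class
        while (size < request)
            size += size / 5;         // grow by ~20%
        return size;
    }

Allocating RoundToSizeClass(n) bytes instead of n means far fewer distinct block sizes circulate in the heap, so a freed block is much more likely to be reusable for a later request.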
Is this still an issue with a 64 bit address space?
No, it is no longer an issue. You are correct that it was a problem on 32-bit systems, but with a 64-bit address space it effectively disappears.
The virtual address space on 64-bit systems is so large (2^48 bytes at the moment on today's x86_64 processors, and set to increase gradually toward 2^64 as new x86_64 processors come out) that running out of contiguous virtual address space due to fragmentation is practically impossible, for all but some highly contrived corner cases.
(It is a common intuitive error, caused by the fact that 64 is "only" double 32, to think that a 64-bit address space is roughly double a 32-bit one. In fact, a full 64-bit address space is about 4 billion times as big as a 32-bit address space.)
Put another way: if it took your 32-bit daemon one week to fragment to the point where it couldn't allocate an x-byte block, then it would take at minimum a thousand years to fragment the 48-bit address space of today's x86_64 processors, and around 80 million years to fragment the planned full 64-bit address space.
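The arithmetic behind those estimates, assuming the time to fragment scales linearly with the size of the address space:

    2^(48 - 32) = 65,536            ->  65,536 weeks        ~ 1,260 years
    2^(64 - 32) = 4,294,967,296     ->  ~4.3 billion weeks  ~ 82 million years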
Heap fragmentation is just as much of an issue under 64 bit as under 32 bit. If you make lots of requests with varying lifetimes, then you are going to get a fragmented heap. Unfortunately, 64 bit operating systems don't really help with this, as they still can't really shuffle the small bits of free memory around to make larger contiguous blocks.
If you want to deal with heap fragmentation, you still have to use the same old tricks.
The only way a 64-bit OS could help here is if the address space were so large that, in practice, you could never fragment it badly enough to matter.
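For completeness, the kind of "old trick" meant above is a pool that recycles fixed-size blocks instead of handing them back to the general heap. A minimal sketch (class and method names are illustrative, not from the original answer):

    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    // A trivial fixed-size block pool: blocks are recycled through a
    // free list instead of being returned to the general-purpose heap,
    // so they cannot contribute to heap fragmentation.
    class BlockPool {
    public:
        explicit BlockPool(std::size_t block_size) : block_size_(block_size) {}

        void* Acquire() {
            if (!free_.empty()) {
                void* p = free_.back();
                free_.pop_back();
                return p;
            }
            return std::malloc(block_size_);
        }

        void Release(void* p) { free_.push_back(p); }  // keep for reuse

        ~BlockPool() {
            for (void* p : free_) std::free(p);
        }

    private:
        std::size_t block_size_;
        std::vector<void*> free_;
    };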