
Does memory fragmentation slow down new/malloc?

Short background:

I'm developing a system that should run for months and that uses dynamic allocation.

The question:

I've heard that memory fragmentation slows down new and malloc because they need to "find" a place in one of the "holes" I've left in memory instead of simply "going forward" in the heap.

I've read the following question: What is memory fragmentation?

But none of the answers mentioned anything regarding performance, only failure allocating large memory chunks.

So does memory fragmentation make new take more time to allocate memory? If yes, by how much? How do I know if new is having a "hard time" finding memory on the heap?

I've tried to find out what data structures/algorithms GCC uses to find a "hole" in memory to allocate from, but couldn't find any decent explanation.

asked Jun 26 '17 by OopsUser


1 Answer

Memory allocation is platform specific; the details depend on the platform's runtime and allocator.

I would say "Yes, new takes time to allocate memory. How much time depends on many factors, such as the algorithm, the level of fragmentation, processor speed, and optimizations."

The best answer for how much time is taken is to profile and measure: write a simple program that fragments the memory, then measure the time it takes to allocate memory.

There is no direct way for a program to find out how hard the allocator is working to find an available memory location. You can read a clock, allocate memory, then read the clock again; another option is to set a timer.

Note: in many embedded systems, dynamic memory allocation is frowned upon. In critical systems, fragmentation can be the enemy, so fixed-size arrays are used instead. Fixed-size allocations made at compile time remove fragmentation as a defect issue.

Edit 1: The Search
Usually, memory allocation requires a call to a function. The impact of this is that the processor may have to reload its instruction cache or pipeline, consuming extra processing time. There may also be extra instructions for passing parameters such as the requested size. Local variables and allocations made at compile time usually don't need a function call.

Unless the allocation algorithm is linear (think array access), it will require steps to find an available slot. Some memory management algorithms use different strategies based on the requested size. For example, some memory managers may keep separate pools for sizes of 64 bits or smaller.
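The small-size pool idea can be sketched as follows. This is a hypothetical illustration, not how glibc or any real allocator is implemented: small requests (64 bytes is an arbitrary threshold here) are served from preallocated fixed-size slots in O(1), with no searching; anything larger falls back to the global allocator.

```cpp
#include <cstddef>
#include <new>

// Hypothetical segregated pool: small requests pop a slot off an
// intrusive free list; large ones fall back to ::operator new.
class SmallPool {
    static const std::size_t kSlot = 64;     // illustrative threshold
    static const std::size_t kSlots = 1024;
    alignas(std::max_align_t) char storage_[kSlot * kSlots];
    void* free_head_ = nullptr;
public:
    SmallPool() {
        // Thread every slot onto the free list at construction.
        for (std::size_t i = 0; i < kSlots; ++i) {
            void** slot = reinterpret_cast<void**>(storage_ + i * kSlot);
            *slot = free_head_;
            free_head_ = slot;
        }
    }
    void* allocate(std::size_t n) {
        if (n > kSlot || free_head_ == nullptr)
            return ::operator new(n);        // big request: fall back
        void* p = free_head_;                // O(1) pop, no search
        free_head_ = *static_cast<void**>(p);
        return p;
    }
    // Simplified: a real pool would track which pointers it owns.
    void deallocate(void* p, std::size_t n) {
        if (n > kSlot) { ::operator delete(p); return; }
        *static_cast<void**>(p) = free_head_;  // O(1) push
        free_head_ = p;
    }
};
```

Because every slot is the same size, this pool never fragments internally, which is one reason real allocators bin small requests this way.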

If you think of a memory manager as having a linked list of blocks, the manager will need to find the first block greater than or equal in size to the request. If the block is larger than the requested size, it may be split, and the leftover memory becomes a new block that is added back to the list.
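A toy first-fit allocator over a fixed buffer makes the walk-and-split visible. This is purely illustrative (real allocators handle coalescing, thread safety, and much more); note the loop cost grows with the number of free blocks, which is exactly how fragmentation slows the search down.

```cpp
#include <cstddef>
#include <new>

// Toy first-fit allocator: each free block starts with a header
// holding its size and a pointer to the next free block.
struct Block {
    std::size_t size;   // usable bytes after the header
    Block* next;        // next free block
};

alignas(std::max_align_t) static char arena[64 * 1024];
static Block* free_list = nullptr;

void toy_init() {
    free_list = reinterpret_cast<Block*>(arena);
    free_list->size = sizeof(arena) - sizeof(Block);
    free_list->next = nullptr;
}

void* toy_malloc(std::size_t want) {
    want = (want + 7) & ~std::size_t(7);         // keep blocks aligned
    Block** link = &free_list;
    for (Block* b = free_list; b; link = &b->next, b = b->next) {
        if (b->size < want) continue;            // hole too small, keep walking
        if (b->size >= want + sizeof(Block) + 16) {
            // Split: carve the request off the front; the leftover
            // becomes a new free block spliced into the list.
            Block* rest = reinterpret_cast<Block*>(
                reinterpret_cast<char*>(b + 1) + want);
            rest->size = b->size - want - sizeof(Block);
            rest->next = b->next;
            b->size = want;
            *link = rest;
        } else {
            *link = b->next;                     // use the whole block
        }
        return b + 1;                            // payload follows the header
    }
    return nullptr;                              // no hole big enough
}
```

Usage: call `toy_init()` once, then `toy_malloc(100)` and `toy_malloc(200)` return distinct, non-overlapping payload pointers inside the arena.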

There is no standard algorithm for memory management; allocators differ based on the needs of the system. Memory managers for platforms with restricted (small) amounts of memory differ from those for platforms with large amounts, and memory allocation for critical systems may differ from that for non-critical systems. The C++ standard does not mandate the behavior of a memory manager, only a few requirements; for example, the memory manager is allowed to allocate from a hard drive or a network device.

The significance of the impact depends on the memory allocation algorithm. The best path is to measure the performance on your target platform.

answered Oct 10 '22 by Thomas Matthews