 

A "killer adversary" for memory allocators?

After reading this question about seemingly degenerate behavior for the Windows memory allocator, and remembering back to this paper about constructing worst-case inputs to quicksort implementations, I started wondering: would it be possible to build a program that, given a black-box memory allocator, forces that allocator to fail an allocation request even when sufficient memory is still available in the system? That is, is it possible to take a black-box memory allocator and force it to fail?

I know that this can probably be done by allocating and freeing memory in a checkerboard pattern to force massive fragmentation, so in my mind an ideal solution would cause a failure to occur with the fewest total bytes allocated at the time of failure. With respect to the original post that inspired this, it could in theory be possible to cause a failure with zero bytes allocated if the memory allocator has an internal bug.

Any ideas/thoughts on how to do this?

asked Mar 24 '11 by templatetypedef

1 Answer

Depends what you mean by "sufficient memory available". For a simple fragmentation "attack":

  • Make a squillion small allocations until one fails[*].

  • Now, sort them in order of address[**].

  • Free 100 alternate allocations.

  • Attempt to allocate 100*small bytes.

Chances are the allocator will fail to find contiguous memory to satisfy that request. If it has a small page size, and plenty of virtual address space compared with physical memory, then it might be able to remap pages to satisfy it, but that requires MMU support on top of whatever anti-fragmentation strategy the allocator uses.
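For concreteness, here is a minimal C++ sketch of those four steps against plain malloc/free. The sizes and the cap on the number of allocations (kSmall, kCount) are arbitrary choices for the demo, not anything from the answer, and the cap means this version only illustrates the shape of the attack rather than actually exhausting memory (for the real attack you would loop until malloc fails, with the caveats in footnote [*]).

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <functional>
#include <vector>

int main() {
    const std::size_t kSmall = 64;       // size of each small allocation (illustrative)
    const std::size_t kCount = 1 << 20;  // cap standing in for "until one fails"
    std::vector<char*> blocks;
    blocks.reserve(kCount);

    // 1. Make many small allocations.
    for (std::size_t i = 0; i < kCount; ++i) {
        char* p = static_cast<char*>(std::malloc(kSmall));
        if (!p) break;  // allocator refused: stop here
        blocks.push_back(p);
    }

    // 2. Sort them by address; std::less gives a total order over pointers,
    //    sidestepping the operator< caveat in footnote [**].
    std::sort(blocks.begin(), blocks.end(), std::less<char*>{});

    // 3. Free 100 alternate allocations, leaving the heap riddled with
    //    kSmall-byte holes separated by live blocks.
    std::size_t freed = 0;
    for (std::size_t i = 0; i < blocks.size() && freed < 100; i += 2) {
        std::free(blocks[i]);
        blocks[i] = nullptr;
        ++freed;
    }

    // 4. Ask for 100 * kSmall contiguous bytes; a simple allocator cannot
    //    coalesce the holes because live blocks sit between them.
    void* big = std::malloc(100 * kSmall);
    std::printf("large request %s\n", big ? "succeeded" : "failed");
    std::free(big);

    // Clean up the remaining small blocks (free(nullptr) is a no-op).
    for (char* p : blocks) std::free(p);
    return 0;
}
```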

If by "sufficient available memory" you mean a large block of memory that formerly was a contiguous block, has been split up into several allocations all of which have since been freed, and now the allocator treats it as separate blocks and so fails to allocate large bytes then no, I don't think you can force an arbitrary block-box allocator to fail to coalesce blocks. Some allocator or other might do much more work than Windows appears to be doing in that other question, to guarantee that adjacent free blocks are always coalesced.

[*] possible problem: over-committing memory allocators might never fail the allocation; instead you get a segfault (or your process is killed) when the memory is actually used. On such systems you might need to track how much memory is really available.
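As one way to do that tracking, here is a small Linux-specific sketch that reads the MemAvailable field from /proc/meminfo; the choice of that file and field is an assumption about the target platform, not part of the original answer.

```cpp
#include <fstream>
#include <iostream>
#include <string>

// Returns MemAvailable in kilobytes, or -1 if it couldn't be read.
// Linux-specific: relies on /proc/meminfo exposing a MemAvailable line.
long mem_available_kb() {
    std::ifstream meminfo("/proc/meminfo");
    std::string line;
    while (std::getline(meminfo, line)) {
        if (line.rfind("MemAvailable:", 0) == 0) {
            return std::stol(line.substr(13));  // value after the field name
        }
    }
    return -1;  // field missing (old kernel) or file unreadable
}

int main() {
    std::cout << "MemAvailable: " << mem_available_kb() << " kB\n";
}
```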

[**] possible problem: in C and C++, operator< isn't guaranteed to work on pointers into different allocations. But on almost all systems it does, and in C++ std::less is guaranteed to give a total order over pointers.

answered Oct 28 '22 by Steve Jessop