
Why is deallocation slow?

I have a question for which I am unable to find an answer on the net...

I have a set declared like this:

std::set<unsigned int> MySet;

I am inserting a million random numbers generated with a Mersenne Twister. Random generation and insertion are really fast (around a second for a million numbers), but deallocation is extremely slow (about a minute and a half).

Why is deallocation so slow? I am not using any custom destructors for the set.
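For reference, a minimal sketch of the scenario (the seed, loop bound and timing code are my own additions, not from the original post, and assume a C++11 or later compiler):

    #include <chrono>
    #include <iostream>
    #include <random>
    #include <set>

    int main() {
        std::mt19937 gen(12345);                             // Mersenne Twister
        std::uniform_int_distribution<unsigned int> dist;

        auto start = std::chrono::steady_clock::now();
        {
            std::set<unsigned int> MySet;
            for (int i = 0; i < 1000000; ++i)
                MySet.insert(dist(gen));

            auto inserted = std::chrono::steady_clock::now();
            std::cout << "insert:  "
                      << std::chrono::duration<double>(inserted - start).count()
                      << " s\n";
            start = std::chrono::steady_clock::now();
        }   // MySet goes out of scope here; every tree node is freed one by one

        auto destroyed = std::chrono::steady_clock::now();
        std::cout << "destroy: "
                  << std::chrono::duration<double>(destroyed - start).count()
                  << " s\n";
    }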

asked Nov 25 '11 by Sibin Grasic



2 Answers

Compile your code in release mode.

This does two things.

  1. It turns on optimizations, which definitely help.
  2. The memory-management libraries are different for debug and release builds.
     The debug version of the library is built to allow for debugging: it maintains extra information (like marking de-allocated memory), and all that extra processing has a real cost.
     • The objectives of the two versions of the library are completely different: the release version is optimized for speed, while the debug version is optimized for error recovery and debugging.
Note: this information is about DevStudio (Microsoft Visual Studio).
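As a rough illustration (assuming the Visual C++ command-line tools, since the answer refers to DevStudio; these are standard MSVC options, not taken from the original answer), the difference is roughly this:

Debug build (debug CRT with heap checking, no optimization):

    cl /EHsc /MDd /Od /Zi main.cpp

Release build (release CRT, optimizations enabled):

    cl /EHsc /MD /O2 /DNDEBUG main.cpp

Inside Visual Studio this corresponds to simply switching the solution configuration from Debug to Release.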

answered Sep 27 '22 by Martin York

Probably because it makes more sense to optimize allocation at the expense of deallocation: many applications allocate without ever deallocating, but never the reverse. I've seen a similar pattern myself, in an application that mixed calls to malloc and free (as opposed to allocating everything first and deallocating everything at the end).

I've never written a heap allocator, so I don't know whether there's a deeper technical reason than that, but deallocation does have to find adjacent free blocks and coalesce them, so the work is fundamentally different from allocation.

90 seconds for 1 million small free() calls sounds quite slow. I've never really programmed on Windows, so I can't say whether that's abnormal, but the system should be able to do much better.

The solution to your problem may simply be to skip freeing the objects before the program exits. You might try a custom allocator in the style of std::allocator<unsigned int> that makes deallocate a no-op, as sketched below.
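A minimal sketch of that idea (written as a standalone allocator rather than by deriving from std::allocator, and assuming a C++11 or later compiler; none of this code is from the original answer):

    #include <cstddef>
    #include <new>
    #include <set>

    // Allocator whose deallocate() is a no-op: node memory is never returned
    // to the heap, so the set's destructor has almost nothing to do. The OS
    // reclaims everything when the process exits.
    template <typename T>
    struct LeakyAllocator {
        using value_type = T;

        LeakyAllocator() = default;
        template <typename U>
        LeakyAllocator(const LeakyAllocator<U>&) {}

        T* allocate(std::size_t n) {
            return static_cast<T*>(::operator new(n * sizeof(T)));
        }

        void deallocate(T*, std::size_t) {
            // Intentionally empty: skip freeing individual nodes.
        }
    };

    template <typename T, typename U>
    bool operator==(const LeakyAllocator<T>&, const LeakyAllocator<U>&) { return true; }
    template <typename T, typename U>
    bool operator!=(const LeakyAllocator<T>&, const LeakyAllocator<U>&) { return false; }

    // Usage:
    // std::set<unsigned int, std::less<unsigned int>,
    //          LeakyAllocator<unsigned int>> MySet;

Obviously this leaks for the lifetime of the process, so it only makes sense when the set lives until program exit anyway.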

answered Sep 27 '22 by Potatoswatter