 

Bulk memory free of fragmented stl containers

Tags:

c++

Currently, when we want to destruct a very large nested list/map of complex objects with very fragmented memory allocations, I assume C++ has to invoke the destructors and free the memory one by one, recursively, which takes a lot of time and is inefficient?

In my case, I find it sometimes takes 1 min or more to destruct a 300GB object.

The operating system can kill a process holding lots of memory efficiently, because it simply frees all of the memory without caring about the logic inside the process.

I am wondering whether there is an existing C/C++ library that can do just that: provide a customized memory allocator maintaining an id system, such that I can specify an id to create an allocator for a given large STL container (and its elements). When I want to destruct the container, I free all the memory allocated under that id and simply discard the pointer to the outer container (skipping all the destructors), just like we can "kill" a pid...

Thanks!

abab asked Feb 02 '19

1 Answer

This can be done through a pool allocator and placement new. You will have some limits, of course, such as choosing a common slot size for the pool (if you don't want fine granularity), but in general a simple scenario like the following:

#include <cstddef>  // std::byte
#include <new>      // placement new

struct Foo {
  double x, y;
  Foo(double x, double y) : x(x), y(y) {}
};

// One raw allocation big enough for ten Foo objects.
std::byte* buffer = new std::byte[sizeof(Foo) * 10];

// Construct objects in place inside the pre-allocated buffer.
Foo* foo1 = new (buffer) Foo(1.0, 2.0);
Foo* foo2 = new (buffer + sizeof(Foo)) Foo(1.0, 2.0);

// Frees the whole buffer in one shot; ~Foo is never run
// (fine here, since Foo is trivially destructible).
delete[] buffer;

explains the basic principle. This must be done with care, though, since nothing calls your destructors (normally you would do that manually via foo1->~Foo()). But if the destructor has no side effects, or you can take care of them all at once, the standard allows you not to call it explicitly.

Now the tricky part is that STL containers internally perform a lot of allocations of their own (especially node-based containers like std::map or std::list). So you'd need to write a custom allocator<T> which wraps an efficient pooling scheme.

Jack answered Nov 19 '22