In the code below, on each iteration a new vector of 10 ints is constructed with 60% probability, or an existing vector is deleted with 40% probability. This leads to many calls to new/malloc and delete.
Since all these vectors are of type vector<int>, can a custom allocator help to reduce the calls to new and delete and thus increase performance? The idea is that the space of a deleted vector can be reused by a newly constructed one. What would such an allocator look like?
Note: This question is about allocators that reduce calls to new and delete.
#include <iostream>
#include <vector>
#include <random>
using namespace std;

int main()
{
    // Random generator and distribution
    mt19937 gen(123456);
    uniform_real_distribution<> dis01(0., 1.);

    // Make or delete 10E6 vectors.
    vector< vector<int> > v; // the inner vectors will make many calls to new and delete
    v.reserve(10E5);         // assume some size

    for(int i = 0; i < 10E6; ++i)
    {
        if(dis01(gen) < 0.6) // if true: make a new sub-vector
        {
            v.emplace_back();     // new sub-vector
            v.back().reserve(10);
            for(int k = 0; k < 10; ++k)
                v.back().emplace_back(k); // fill it with some numbers
        }
        else // else, delete the last entry if there is one
        {
            if(!v.empty())
                v.pop_back();
        }
    }

    cout << "v.size()= " << v.size();
    return 0;
}
std::allocator is the default memory allocator for the standard library containers, and you can substitute your own allocators. This allows you to control how the standard containers allocate memory.
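For illustration, here is a minimal sketch of what a C++11 custom allocator has to provide before a container will accept it. MyAlloc is a placeholder name, not a standard facility; with std::allocator_traits, only value_type, allocate and deallocate are strictly required, the rest is defaulted.

#include <cstddef>
#include <new>
#include <vector>

template <class T>
struct MyAlloc
{
    using value_type = T;

    MyAlloc() = default;
    template <class U> MyAlloc(const MyAlloc<U>&) noexcept {} // allow rebinding to other element types

    T* allocate(std::size_t n)
    {
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) noexcept
    {
        ::operator delete(p);
    }
};

template <class T, class U>
bool operator==(const MyAlloc<T>&, const MyAlloc<U>&) noexcept { return true; }
template <class T, class U>
bool operator!=(const MyAlloc<T>&, const MyAlloc<U>&) noexcept { return false; }

// The allocator becomes part of the container's type:
using MyIntVector = std::vector<int, MyAlloc<int>>;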
A vector is better suited to frequent insertion and deletion at the end, whereas a plain array is the more memory-efficient data structure: a vector spends extra memory (bookkeeping and possible over-allocation) in exchange for managing its storage and growing dynamically.
A std::vector can never be faster than an array, as it has (a pointer to the first element of) an array as one of its data members. But the difference in run-time speed is slim and absent in any non-trivial program. One reason this myth persists is examples that compare raw arrays with misused std::vectors.
A vector is as fast as an array, at least if you reserve space sensibly. ...
You might be able to gain some performance by writing an allocator to more efficiently reuse recently freed memory, especially if all the vectors are going to be size 10. Of course, if that's the case, you'll gain more performance by using an object of fixed size. If the size of allocation for the vectors needs to be dynamic, then your problem is as abstract as general memory allocation, and you're not likely to improve on the standard allocator.
You are not at all likely to improve performance vs. STL unless you're able to leverage information which applies to your specific case but not the more general case.
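As a sketch of that idea, assuming every inner vector really does allocate exactly 10 ints, a free-list allocator can hand freed blocks straight back to the next vector instead of calling operator new again. FreeListAlloc is an illustrative name, the pool is not thread-safe, and blocks parked in the pool are deliberately never returned to the system in this sketch.

#include <cstddef>
#include <new>
#include <vector>

template <class T>
struct FreeListAlloc
{
    using value_type = T;
    static constexpr std::size_t kBlockElems = 10; // the one block size we recycle

    FreeListAlloc() = default;
    template <class U> FreeListAlloc(const FreeListAlloc<U>&) noexcept {}

    T* allocate(std::size_t n)
    {
        if(n == kBlockElems && !pool().empty())
        {
            T* p = pool().back(); // reuse a block freed by a destroyed vector
            pool().pop_back();
            return p;
        }
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }

    void deallocate(T* p, std::size_t n)
    {
        if(n == kBlockElems)
            pool().push_back(p); // park the block for the next allocation
        else
            ::operator delete(p);
    }

private:
    static std::vector<T*>& pool() // one shared pool per element type
    {
        static std::vector<T*> blocks;
        return blocks;
    }
};

template <class T, class U>
bool operator==(const FreeListAlloc<T>&, const FreeListAlloc<U>&) noexcept { return true; }
template <class T, class U>
bool operator!=(const FreeListAlloc<T>&, const FreeListAlloc<U>&) noexcept { return false; }

The inner vectors in the question would then be declared as vector<int, FreeListAlloc<int>> (and the outer container as vector of those). Note that the pool's own push_back can still allocate occasionally, and that requests of any other size fall through to plain operator new.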
A much better solution would be not to delete the vector objects at all, but to leave them in the outer vector<vector<int>> and maintain an iterator/index to the logical "end" (decremented instead of deleting). Then, rather than emplacing an element (constructing a vector), you advance that iterator, test against .end(), and emplace only if needed; otherwise you reuse the old vector. This assumes that your program does not rely on side effects of the constructor or destructor (vector doesn't, but you're not telling us your actual use case).
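A rough sketch of that approach, applied to the loop from the question. It uses an index rather than an iterator, since iterators into the outer vector are invalidated when it reallocates; live is just an illustrative name for the logical size.

#include <iostream>
#include <vector>
#include <random>
using namespace std;

int main()
{
    mt19937 gen(123456);
    uniform_real_distribution<> dis01(0., 1.);

    vector< vector<int> > v;
    v.reserve(10E5);
    size_t live = 0; // logical size of v; everything past it is a retired vector kept for reuse

    for(int i = 0; i < 10E6; ++i)
    {
        if(dis01(gen) < 0.6) // "construct" a sub-vector
        {
            if(live == v.size()) // no retired vector available: allocate a fresh one
            {
                v.emplace_back();
                v.back().reserve(10);
            }
            vector<int>& sub = v[live++];
            sub.clear(); // keeps its capacity, so refilling it needs no new/delete
            for(int k = 0; k < 10; ++k)
                sub.emplace_back(k);
        }
        else if(live > 0) // "delete" the last sub-vector: its buffer stays alive for reuse
        {
            --live;
        }
    }
    cout << "live sub-vectors: " << live << "\n";
    return 0;
}

Here the only allocations are those for brand-new sub-vectors; "deleting" and later refilling an existing one never touches the allocator at all.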