vector memory allocation strategy

I wrote a small piece of code to determine how a vector allocates its memory.

#include <iostream>
#include <vector>
using namespace std;
int main ()
{
  vector<unsigned int> myvector;
  unsigned int capacity = myvector.capacity();

  for(unsigned int i = 0; i <  100000; ++i) {
    myvector.push_back(i);
    if(capacity != myvector.capacity())
    {
      capacity = myvector.capacity();
      cout << myvector.capacity() << endl;
    }
  }
  return 0;
}

I compiled this using Visual Studio 2008 and g++ 4.5.2 on Ubuntu and got these results:

Visual Studio:

1 2 3 4 6 9 13 19 28 42 63 94 141 211 316 474 711 1066 1599 2398 3597 5395 8092 12138 18207 27310 40965 61447 92170 138255

capacity = capacity * 1.5;

g++:

1 2 4 8 16 32 64 128 256 512 1024 2048 4096 8192 16384 32768 65536 131072

capacity = capacity * 2;

As you can see, these are two very different results. Why is that? Does it depend only on the compiler, or on other factors as well?

Does it really make sense to keep on with doubling the capacity, even for large numbers of elements?

m47h asked Jul 19 '12


1 Answer

The standard only specifies a vector's observable behaviour; what happens internally is up to the implementation. Growing the capacity geometrically (by any constant factor greater than 1) makes `push_back` amortized O(1) — that is, O(n) total for n pushes — which is what the standard requires, so both a factor of 2 and a factor of 1.5 are conforming. Look here for more details.

Ben answered Oct 01 '22