I have this piece of code:
#include <iostream>
#include <vector>

class A
{
private:
    static int x;
public:
    A() {}
    ~A()
    {
        ++x;
        std::cout << "destroying A " << x << std::endl;
    }
};

int A::x(0);

int main (int args, char** argv)
{
    std::vector<A> vectA(5);
}
and when I run it, I would expect it to print 5 lines (i.e., the destructor is called once for each of the 5 elements in the vector), but the actual output is:
destroying A 1
destroying A 2
destroying A 3
destroying A 4
destroying A 5
destroying A 6
mmm strange...
so I change the main function to:
int main (int args, char** argv)
{
    std::vector<A> vectA(5);
    std::cout << vectA.capacity() << std::endl;
}
and now the output is:
destroying A 1
5
destroying A 2
destroying A 3
destroying A 4
destroying A 5
destroying A 6
OK, so I guess that when I first create vectA it gets allocated memory for just one object of type A, and is then dynamically resized (as a vector is meant to be) to hold 5 elements; in this process the previously allocated memory is freed and the destructor is called.
So my question is: why doesn't vectA get the right amount of memory from the beginning? After all, the value (5) is already known at compile time. Is there any specific reason for the compiler not to perform this optimization?
Prior to C++11, that code uses this constructor, which makes count copies of value:
explicit vector(size_type count, const T& value = T(), const Allocator& alloc = Allocator());
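In other words (a small sketch reusing the A class from the question), the one-argument call behaves as if the defaulted value argument were spelled out explicitly:

    std::vector<A> vectA(5);       // pre-C++11: behaves the same as the line below
    std::vector<A> vectB(5, A());  // a temporary A is default-constructed, copied
                                   // into the five elements, and then destroyed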
Once C++11 came around, it got changed to a constructor taking just a size and making count value-initialized elements:
explicit vector(size_type count);
Therefore, before C++11, you also get the temporary created for the value parameter, which, combined with the five elements, makes six destructor calls in total. After C++11, it's just the five elements.
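If you want to see this for yourself, here is a minimal sketch, assuming a hypothetical variant of A that also logs its constructors:

    #include <iostream>
    #include <vector>

    // Variant of A that logs constructions as well as destructions,
    // so the source of the sixth destructor call becomes visible.
    class A
    {
    public:
        A()          { std::cout << "default-construct\n"; }
        A(const A&)  { std::cout << "copy-construct\n"; }
        ~A()         { std::cout << "destroy\n"; }
    };

    int main()
    {
        std::vector<A> vectA(5);
        // Compiled as C++03: 1 default-construct, 5 copy-constructs, and
        // 6 destroys (the temporary default argument plus the five elements).
        // Compiled as C++11 or later: 5 default-constructs and 5 destroys.
    }

With GCC or Clang you can compare the two behaviours by compiling the same file with -std=c++03 and with -std=c++11.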