I'm just starting to learn about vectors and I'm a little confused about size() and capacity().
I know a little about both of them, but why are they different in this program? After all, array(10) makes room for 10 elements and initializes them with 0.
Before adding array.push_back(5):
array.size() is 10, that is OK.
array.capacity() is 10, that is OK.
After adding array.push_back(5):
array.size() is 11, that is OK (10 zeros were already added, and then push_back adds one more element, 5).
array.capacity() is 15. Why? (Is it reserving 5 blocks for one int?)
#include <iostream>
#include <vector>

int main() {
    std::vector<int> array(10); // make room for 10 elements and initialize with 0
    array.reserve(10); // make room for 10 elements
    array.push_back(5);
    std::cout << array.size() << std::endl;
    std::cout << array.capacity() << std::endl;
    return 0;
}
The std::vector::capacity is not its actual size (which is returned by size()), but the size of the internal allocated storage. In other words, it is the number of elements the vector can hold before another reallocation is needed.
It doesn't increase by 1 each time you do a push_back, so as not to trigger a new reallocation (which is an expensive operation) on each inserted element. It reserves more, because it doesn't know whether you will do some other push_back right after, and in that case it won't have to change the allocated memory size for the next 4 elements.
Here, 4 extra elements is a compromise between 1, which would minimize memory usage but would risk another reallocation soon, and a huge number, which would let you do many push_backs quickly but might reserve a lot of memory for nothing.
Note: if you want to specify a capacity yourself (if you know your vector's maximum size in advance, for instance), you can do it with the reserve member function.
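For illustration, a minimal sketch of using reserve up front so that the following push_backs never reallocate (the exact capacity printed is implementation-defined, but it is at least 100):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    v.reserve(100); // one allocation up front: capacity() is now at least 100

    for (int i = 0; i < 100; ++i)
        v.push_back(i); // no reallocation can happen inside this loop

    std::cout << v.size() << ' ' << v.capacity() << std::endl; // 100 and >= 100
    return 0;
}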
The Standard mandates that std::vector<T>::push_back() has amortized O(1) complexity. This means that the expansion has to be geometric, say doubling the amount of storage each time it fills up.
Simple example: sequentially push_back 32 ints into a std::vector<int>. You will store all of them once, and also do 31 copies if you double the capacity each time it runs out. Why 31? Before storing the 2nd element, you copy the 1st; before storing the 3rd, you copy elements 1-2; before storing the 5th, you copy 1-4; etc. So you copy 1 + 2 + 4 + 8 + 16 = 31 times, with 32 stores.
Doing the formal analysis shows that you get O(N) stores and copies for N elements. This means amortized O(1) complexity per push_back (often only a store without a copy, sometimes a store and a sequence of copies).
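A small sketch that makes this growth visible; the exact capacities printed, and hence the growth factor, are implementation-defined:

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    auto last_capacity = v.capacity();
    int reallocations = 0;

    for (int i = 0; i < 32; ++i) {
        v.push_back(i);
        if (v.capacity() != last_capacity) { // capacity changed, so the vector reallocated
            ++reallocations;
            last_capacity = v.capacity();
            std::cout << "size " << v.size() << " -> capacity " << v.capacity() << std::endl;
        }
    }
    std::cout << reallocations << " reallocations for 32 push_backs" << std::endl;
    return 0;
}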
Because of this expansion strategy, you will have size() < capacity() most of the time. Look up shrink_to_fit and reserve to learn how to control a vector's capacity in a more fine-grained manner.
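For instance, a small sketch of both (note that shrink_to_fit is only a non-binding request, so an implementation may ignore it):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    v.reserve(1000); // capacity() is now at least 1000
    for (int i = 0; i < 10; ++i)
        v.push_back(i);
    std::cout << v.size() << ' ' << v.capacity() << std::endl; // 10 and >= 1000

    v.shrink_to_fit(); // non-binding request to drop the unused capacity
    std::cout << v.size() << ' ' << v.capacity() << std::endl; // typically 10 and 10
    return 0;
}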
Note: with a geometric growth rate, any factor larger than 1 will do, and there have been some studies claiming that 1.5 gives better performance because it wastes less memory (at some point the memory freed by earlier allocations can be reused for the next one).
It is for efficiency, so that the vector does not have to expand the underlying data structure each time you add an element, i.e. it doesn't have to call delete/new each time.
Using
std::vector<int> array(10); // make room for 10 elements and initialize with 0
you actually filled all ten spaces with zeros. Adding an additional element causes the capacity to be expanded, for efficiency. In your case the call to reserve is useless, because you asked for room for the same number of elements you had already instantiated.
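To see the difference between constructing with a size and merely reserving capacity (the printed capacities are implementation-defined, but the sizes are guaranteed):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> sized(10); // 10 value-initialized (zero) elements
    std::vector<int> reserved;
    reserved.reserve(10); // room for 10 elements, but still empty

    std::cout << sized.size() << ' ' << sized.capacity() << std::endl;       // 10 and (at least) 10
    std::cout << reserved.size() << ' ' << reserved.capacity() << std::endl; // 0 and (at least) 10
    return 0;
}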