I'm working on a project where I need to be as frugal as possible with memory. I'm trying to calculate the total memory footprint of a vector<bool> of size 32, as in this example:
vector<bool> v(32);
cout << "sizeof: " << sizeof(v) << endl;
cout << "size: " << v.size() << endl;
cout << "capacity: " << v.capacity() << endl;
cout << "max_size: " << v.max_size() << endl;
which gives me:
sizeof: 40 <- 40 bytes? wtf?
size: 32 <- hoping each element takes up 1 bit
(instead of the usual 1 byte per bool),
so this should take around 32 bits of memory
capacity: 64 <- I guess because the minimum allocation is
one 64-bit word
max_size: 9223372036854775744 <- relevant somehow?
on my 64-bit Ubuntu 12.04 machine. So I thought I could calculate the total memory like so:
40 * 8 + 64 * 1 = 384 bits = 48 bytes
So according to this calculation, most of the memory is spent on the vector object itself rather than on the 32 elements. My questions: why does a vector object need so much memory? Are there any mistakes in my calculation? And how can I be more memory-efficient for vector sizes around 32, without doing the bitwise manipulation myself?
Those 40 bytes are administrative overhead. Among other things, a vector has to keep track of its size and capacity (that's two size_ts worth of bytes gone already), and very importantly, a pointer to the actual data!
The actual data held by the vector is allocated on the heap by the default vector allocator; its memory consumption is not included in the result of sizeof.
sizeof(v) is getting the size of v's structure, not the size of v's data. It's like doing:
struct S { int* x; };
S s;
s.x = new int[10000];
sizeof(s); // only the size of an int* -- sizeof doesn't (and can't) know how much data x points to
As for why std::vector<bool> might have a larger structure size than, say, std::vector<int>: remember that the bool version is specialized to pack its elements into bits, so it needs extra members for that bitwise record-keeping.