I am puzzled by the result of std::vector<char>::max_size() on the n = 32 and n = 64 bit systems I have tested: the result is 2^n − 1. Let me explain why I am puzzled.
Every implementation of std::vector<T> that I know of has three members of type T*: begin_, end_, and capacity_. begin_ points to the first element of the vector and end_ points to one past the last element. Therefore, the size of the vector is given by end_ - begin_. But the result of this difference is of type std::ptrdiff_t, which is a signed integer of n bits on every implementation that I know of. Therefore, this type cannot store 2^n − 1, but only values up to 2^(n−1) − 1. If you look at your std::vector implementation, you'll clearly see that size() computes the difference of two pointers (before casting it to an unsigned integer).
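To make the layout described above concrete, here is a minimal sketch of such a three-pointer vector and its pointer-difference size(); the names begin_, end_ and capacity_ follow the question, and the code is only an illustration of the idea, not the actual source of any standard library:

#include <cstddef>

// Minimal sketch of the common three-pointer vector layout.
template <class T>
struct vector_sketch {
    T* begin_;     // first element
    T* end_;       // one past the last element
    T* capacity_;  // one past the end of the allocated storage

    std::size_t size() const {
        // end_ - begin_ has type std::ptrdiff_t (signed), so it can only
        // count up to PTRDIFF_MAX elements before the cast to size_t.
        return static_cast<std::size_t>(end_ - begin_);
    }
};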
So, how come they can pretend to store more than 2^(n−1) − 1 elements without breaking .size()?
It is obviously a bug in some standard library implementations. I have done more work on that subject and, using the following code
#include <iostream>
#include <cstdint>  // SIZE_MAX and PTRDIFF_MAX live here, not in <climits>
#include <vector>

int main() {
    auto v = std::vector<char>();
    std::cout << "Maximum size of a std::vector<char>: "
              << v.max_size() << std::endl;
    std::cout << "Maximum value a std::size_t can hold: "
              << SIZE_MAX << std::endl;
    std::cout << "Maximum value a std::ptrdiff_t can hold: "
              << PTRDIFF_MAX << std::endl;
    return 0;
}
one can easily show that, on the implementations I tested, libstdc++ and the Microsoft standard library report a max_size() equal to SIZE_MAX, while libc++ reports a value no larger than PTRDIFF_MAX. Therefore, libstdc++ and the Microsoft implementation of the standard library have the bug but libc++ does not have it. I'll file a bug report against those two.
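For comparison, one way to avoid the inconsistency is simply to never report more elements than the difference type can count. The sketch below shows what such a clamped max_size() for a vector<char>-like container could look like; it is an assumption about the general idea, not the actual code of libc++ or any other library:

#include <algorithm>
#include <cstddef>
#include <cstdint>

// Hypothetical clamped max_size() for a vector<char>-like container:
// never report more elements than std::ptrdiff_t (the type of
// end_ - begin_) can represent.
std::size_t clamped_max_size() {
    const std::size_t alloc_limit = SIZE_MAX / sizeof(char);  // most bytes an allocator could ever hand out
    const std::size_t diff_limit  = static_cast<std::size_t>(PTRDIFF_MAX);
    return std::min(alloc_limit, diff_limit);  // PTRDIFF_MAX for char
}

Clamping to PTRDIFF_MAX keeps end_ - begin_ representable, which is presumably why libc++ reports the smaller value.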