I have a vector:
std::vector<int> vec = {1, 2, 3};
And I want to make a reverse for loop. It works when I write:
for (int i = vec.size() - 1; i >= 0; --i) {
    std::cout << i << std::endl; // 2, 1, 0
}
But I get a very large number (like 18446744073709223794) if I write:
for (size_t i = vec.size() - 1; i >= 0; --i) {
    std::cout << i << std::endl;
}
But both int and size_t work when I write:
for (int i = 0; i < vec.size() - 1; ++i) {
    std::cout << i << std::endl; // 0, 1
}
// Or
for (size_t i = 0; i < vec.size() - 1; ++i) {
    std::cout << i << std::endl; // 0, 1
}
Why do I get this huge, wrong value when I use size_t? I think there is a problem with the conversion.
In C#, using Visual Studio 2005 or later, type 'forr' and hit [TAB] [TAB]. This will expand to a for loop that goes backwards through a collection.
If you compiled your program with warnings enabled, the compiler would tell you something like this:
<source>: In function 'int main()':
<source>:7:43: warning: comparison of unsigned expression in '>= 0' is always true [-Wtype-limits]
    7 |     for(std::size_t i = vec.size() - 1; i >= 0; --i) {
      |                                          ~~^~~~
Why is that? It's because std::size_t is an unsigned type in C++; it only represents non-negative numbers. Read more about turning on warnings and why it's important: Why should I always enable compiler warnings?
I've decided to split that part of my answer off into a separate question, independent of the OP's bug. Please go read it.
The problem is that size_t is an unsigned integer type, i.e. it can only hold non-negative values. When you decrement an unsigned value of 0, it wraps around to the largest value the type can represent (18446744073709551615 for a 64-bit size_t), which is why you see such a huge number. And since the check i >= 0 is always true for any unsigned type, your loop never terminates.