I have the following C++ code:
return lineNum >= startLineNum
&& lineNum <= startLineNum + lines.size() - 1;
Here, lineNum is an int, startLineNum is an int, lines is a std::vector<std::string>, and lines.size() is of type size_t.
When lineNum is 2, startLineNum is 0, and lines.size() is 0, the code returns true even though false was expected. These were the values displayed in the debugger.
Even after adding parentheses where possible:
return ((lineNum >= startLineNum)
&& (lineNum <= (startLineNum + lines.size() - 1)));
the code still incorrectly returns true.
When I refactor the code into this form:
int start = startLineNum;
int end = startLineNum + lines.size() - 1;
return lineNum >= start && lineNum <= end;
it now returns false as expected.
What is going on here? I have never come across this kind of strangeness before.
lines.size() is more than likely an unsigned type. (If lines is a std::vector, for example, it's certainly unsigned.)
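A quick way to confirm that, as a sketch:

#include <string>
#include <type_traits>
#include <vector>

// Passes on any conforming implementation: vector's size() returns
// std::vector<std::string>::size_type, an unsigned type (usually std::size_t).
static_assert(std::is_unsigned<std::vector<std::string>::size_type>::value,
              "size() returns an unsigned type");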
So, due to the usual arithmetic conversions, and the fact that the terms in
startLineNum + lines.size() - 1
are grouped from left to right, they are all converted to unsigned types.
This means that 0 + 0 - 1 is std::numeric_limits<decltype(lines.size())>::max(), a very large number indeed, and lineNum, which is also converted to that unsigned type for the comparison, is certainly less than it.
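Here is a minimal sketch of what happens, using the same values you saw in the debugger (the local variable size just stands in for lines.size()):

#include <cstddef>
#include <iostream>

int main() {
    int lineNum = 2;
    int startLineNum = 0;
    std::size_t size = 0; // stands in for lines.size()

    // In startLineNum + size, startLineNum is converted to std::size_t.
    // Subtracting 1 from the unsigned zero then wraps around to SIZE_MAX.
    std::cout << (startLineNum + size - 1) << '\n'; // a huge number, not -1

    // lineNum is likewise converted to unsigned for the comparison,
    // so 2 <= SIZE_MAX holds and the whole condition is true.
    std::cout << (lineNum >= startLineNum && lineNum <= startLineNum + size - 1) << '\n'; // prints 1
}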
The rule of thumb: never use a negative when working with unsigned types, unless you really know what you're doing.
In your case, restate the condition as
lineNum < startLineNum + lines.size()
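With no subtraction there is nothing to wrap around: when lines.size() is 0, the upper bound is simply startLineNum and the range is empty. As a sketch, the full check rewritten that way (assuming, as in your snippet, the function just returns the boolean) would be:

// No "- 1", so no unsigned wrap-around; the end of the range is exclusive.
return lineNum >= startLineNum
    && lineNum < startLineNum + lines.size();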