In another question, the topic of std::numeric_limits<int>::is_modulo came up. But the more I think about it, the more it seems like something is wrong with the spec, with GCC, or with both.
Let me start with some code:
#include <limits>
#include <iostream>

bool test(int x)
{
    return x+1 > x;
}

int main(int argc, char *argv[])
{
    int big = std::numeric_limits<int>::max();
    std::cout << std::numeric_limits<int>::is_modulo << " ";
    std::cout << big+1 << " ";
    std::cout << test(big) << "\n";
}
When I compile this with g++ -O3 -std=c++11
(x86_64 GCC 4.7.2), it produces the following output:
1 -2147483648 1
That is, is_modulo is true, one plus INT_MAX is negative, and one plus INT_MAX is greater than INT_MAX.
If you are the sort of person with any realistic chance of answering this question, you already know what happened here. The C++ specification says that signed integer overflow is Undefined Behavior; the compiler is allowed to assume you do not invoke it; therefore the x in x+1 cannot be INT_MAX; therefore the compiler may (and will) compile the test function to return true unconditionally. So far, so good.
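(As an aside, here is a minimal sketch of my own, not from the original question: if the comparison is written so that x+1 is never evaluated at the boundary, no overflow and hence no undefined behavior is possible, and the compiler has nothing to fold away. The name test_safe is hypothetical.)

#include <limits>

// Overflow-safe variant: compare against max() instead of computing x+1,
// so no signed overflow (and hence no undefined behavior) can occur.
bool test_safe(int x)
{
    return x < std::numeric_limits<int>::max();
}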
However, the C++11 spec also says (18.3.2.4 paragraphs 60-61):

static constexpr bool is_modulo;

True if the type is modulo. A type is modulo if, for any operation involving +, -, or * on values of that type whose result would fall outside the range [min(), max()], the value returned differs from the true value by an integer multiple of max() - min() + 1. On most machines, this is false for floating types, true for unsigned integers, and true for signed integers.
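For contrast, unsigned arithmetic really is modulo in this sense. A small sketch of my own (assuming a 32-bit unsigned type) shows the stored result differing from the true result by exactly one multiple of max() - min() + 1:

#include <cstdint>
#include <iostream>
#include <limits>

int main()
{
    // For std::uint32_t, max() - min() + 1 is 2^32.
    std::uint32_t big = std::numeric_limits<std::uint32_t>::max();
    // The true mathematical result of big + 1 is 4294967296; the stored
    // result wraps to 0, differing from it by exactly one multiple of 2^32.
    std::uint32_t sum = big + 1u;
    std::cout << sum << "\n";   // prints 0
}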
Note that section 5 paragraph (4) still reads, "If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined." There is no mention of is_modulo == true making an exception to this.
So it looks to me like the standard is logically contradictory, because integer overflow cannot simultaneously be defined and undefined. Or at the very least, GCC is non-conforming, because it reports is_modulo as true even though signed arithmetic most certainly does not wrap around.
Is the standard buggy? Is GCC non-conforming? Am I missing something?
If is_modulo is true for a signed type that is unchanged by the usual arithmetic conversions (e.g. int), then for any arithmetic operation other than division by zero there is a single correct result in the (mathematical) integers, which maps modulo max() - min() + 1 to a single value in the range of the type, and so the implementation is constrained to behave as if the actual result is the true result reduced modulo the range of the type. So if an implementation wants to keep overflow arithmetic undefined, it must set is_modulo to false.
This was discussed ad nauseam on the gcc mailing list and then under PR 22200, with the eventual conclusion that the value of is_modulo should be false for signed types; the change was made to libstdc++ in April of this year.
Note that in C++03 the language was significantly different (18.2.1.2 numeric_limits members [lib.numeric.limits.members], paragraph 56):

[...] A type is modulo if it is possible to add two positive numbers and have a result that wraps around to a third number that is less.
Given that with undefined behaviour anything is possible, it is arguable that the previous behaviour of libstdc++ (having is_modulo as true) was correct and consistent with the behaviour of g++; the earlier discussions on the linked PR should be read with this in mind.