Why are there differing definitions of INT64_MIN? And why do they behave differently?

Tags: c, gcc, 64-bit, min

The stdint.h header at my company reads:

#define INT64_MIN -9223372036854775808LL

But in some code in my project, a programmer wrote:

#undef INT64_MIN
#define INT64_MIN (-9223372036854775807LL -1)

He then uses this definition in the code.
The project compiles with no warnings/errors.
When I attempted to remove his definition and use the default one, I got:

error: integer constant is so large that it is unsigned

The two definitions appear to be equivalent.
Why does one compile fine while the other fails?

asked Jun 29 '12 by abelenky

1 Answer

-9223372036854775808LL is not a single literal. It's an expression consisting of a unary - operator applied to the constant 9223372036854775808LL.

That constant is (barely) outside the range of type long long, which causes the warning. (I'm assuming long long is 64 bits, which it almost certainly is.)

The expression (-9223372036854775807LL - 1), on the other hand, contains only literals that are within the range of long long, and is in fact a more valid definition for INT64_MIN, since it's of the correct type (as Steve Jessop points out in a comment).

answered Nov 02 '22 by Keith Thompson