I found the following definitions in /usr/include/limits.h:
# define INT_MIN (-INT_MAX - 1)
# define INT_MAX 2147483647
Also, it seems that all the XXX_MAX values in this header file are defined explicitly as numeric constants.
I wonder if there is a portable way (across platforms with different word sizes) to define INT_MAX?
I tried the following:
~((int)-1)
But this seems incorrect.
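(A minimal check, assuming the usual two's-complement representation: (int)-1 is the all-ones bit pattern, so complementing it gives 0 rather than the largest value.)

#include <stdio.h>

int main(void)
{
    /* On a two's-complement machine (int)-1 has every bit set,
       so ~((int)-1) flips them all back to 0. */
    printf("%d\n", ~((int)-1));   /* prints 0, not INT_MAX */
    return 0;
}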
A short explanation would also be greatly appreciated.
INT_MAX is a macro which represents the maximum value an int can hold. Similarly, INT_MIN represents the minimum value. Both macros are defined in the header file <limits.h>.
INT_MIN is the limit below which an integer variable cannot store any value. The values of INT_MAX and INT_MIN may vary from compiler to compiler. Typical values on an implementation where int is stored in 32 bits are +2147483647 for INT_MAX and -2147483648 for INT_MIN.
INT_MAX is the maximum number an int can have. Computing INT_MAX+1 is signed overflow, which is undefined behaviour, although on many implementations the value simply wraps around to the minimum (negative) number an int can have. The positive numbers (in your case) go from 0 to 2147483647, and the negative numbers go from -1 down to -2147483648.
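A minimal sketch of reading these limits directly from <limits.h> (the printed values are the typical 32-bit ones and may differ on other platforms):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* The implementation's own limits, whatever the width of int is. */
    printf("INT_MAX = %d\n", INT_MAX);   /* e.g.  2147483647 */
    printf("INT_MIN = %d\n", INT_MIN);   /* e.g. -2147483648 */
    return 0;
}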
For the INT_MAX in the standard header limits.h, the implementor's hands are tied by the fact that it's required to be usable in preprocessor #if directives. This rules out anything involving sizeof or casts.
If you just want a version that works in actual C expressions, perhaps this would work:
(int)-1U/2 == (int)(-1U/2) ? (int)-1U : (int)(-1U/2)
The concept here is that int may have the same number of value bits as unsigned, or one fewer value bit; the C standard allows either. In order to test which it is, check the result of the conversion (int)-1U. If -1U fits in int, its value must be unchanged by the cast, so the equality will be true. If -1U does not fit in int, then the cast results in an implementation-defined result of type int. No matter what the value is, though, the equality will be false merely by the range of possible values.
Note that, technically, the conversion to int could result in an implementation-defined signal being raised, rather than an implementation-defined value being obtained, but this is not going to happen when you're dealing with a constant expression which will be evaluated at compile time.
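A quick sketch of checking this expression against the implementation's own INT_MAX (the <limits.h> include is only for the comparison; the expression itself needs no header):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* If int and unsigned have the same number of value bits, -1U fits in
       int, the cast preserves it, the equality holds and the first branch
       is taken.  In the usual case (one fewer value bit) the equality
       fails and the second branch, (int)(-1U/2), is INT_MAX. */
    int computed = (int)-1U/2 == (int)(-1U/2) ? (int)-1U : (int)(-1U/2);

    printf("computed = %d\n", computed);
    printf("INT_MAX  = %d\n", INT_MAX);
    return 0;
}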
I like the definitions:
/* assumes two's complement and no padding bits; the shift is done on an
   unsigned value because shifting a 1 into the sign bit of a signed int is undefined behaviour */
#define INT_MIN ((int)(1U << (sizeof(int)*CHAR_BIT - 1)))
#define INT_MAX (-(INT_MIN + 1))
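A sketch of exercising these definitions, renamed MY_INT_MIN / MY_INT_MAX here (hypothetical names) so they don't collide with the real macros pulled in from <limits.h>:

#include <stdio.h>
#include <limits.h>   /* for CHAR_BIT and the reference INT_MIN/INT_MAX */

/* Renamed copies of the definitions above, to avoid redefining the
   standard macros. */
#define MY_INT_MIN ((int)(1U << (sizeof(int)*CHAR_BIT - 1)))
#define MY_INT_MAX (-(MY_INT_MIN + 1))

int main(void)
{
    printf("MY_INT_MIN = %d  (reference %d)\n", MY_INT_MIN, INT_MIN);
    printf("MY_INT_MAX = %d  (reference %d)\n", MY_INT_MAX, INT_MAX);
    return 0;
}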