
Is there a portable way to define INT_MAX?

Tags: c, int, gcc

I found the following definitions in /usr/include/limits.h:

# define INT_MIN (-INT_MAX - 1)

# define INT_MAX 2147483647

Also, it seems that every XXX_MAX in this header file is defined as an explicit numerical constant.

Is there a portable way (one that works across platforms with different word sizes) to define INT_MAX?

I tried the following:

~((int)-1)

But this seems incorrect.

A short explanation would also be highly appreciated.

Patrick Pan asked Jul 23 '13 02:07


People also ask

Is INT_MAX defined?

INT_MAX is a macro which represents the maximum integer value. Similarly, INT_MIN represents the minimum integer value. These macros are defined in the header file <limits.h>.

What does INT_MIN mean?

INT_MIN specifies that an integer variable cannot store any value below this limit. Values of INT_MAX and INT_MIN may vary from compiler to compiler. Following are typical values in a compiler where integers are stored using 32 bits. Value of INT_MAX is +2147483647. Value of INT_MIN is -2147483648.

What is INT_MAX +1?

INT_MAX is the maximum value an int can hold. In many cases, INT_MAX+1 wraps around to the minimum value an int can hold (which is negative). On a typical 32-bit int, the positive values go from 0 to 2147483647 and the negative values go from -1 down to -2147483648.
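For reference, a minimal sketch that simply prints the values <limits.h> supplies; the values in the comments assume a typical 32-bit int:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* INT_MAX and INT_MIN come straight from <limits.h> */
    printf("INT_MAX = %d\n", INT_MAX);   /* typically 2147483647 */
    printf("INT_MIN = %d\n", INT_MIN);   /* typically -2147483648 */
    return 0;
}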


2 Answers

For the INT_MAX in the standard header limits.h, the implementor's hands are tied by the fact that it's required to be usable in preprocessor #if directives. This rules out anything involving sizeof or casts.

If you just want a version that works in actual C expressions, perhaps this would work:

(int)-1U/2 == (int)(-1U/2) ? (int)-1U : (int)(-1U/2)

The concept here is that int may have the same number of value bits as unsigned, or one fewer value bit; the C standard allows either. To test which is the case, check the result of the conversion (int)-1U (note that, by operator precedence, (int)-1U/2 parses as ((int)-1U)/2). If -1U fits in int, its value is unchanged by the cast, so both sides of the equality evaluate to -1U/2 and the test is true. If -1U does not fit in int, the cast yields an implementation-defined value of type int; whatever that value is, dividing it by 2 cannot reach -1U/2, so the test is false.

Note that, technically, the conversion to int could result in an implementation-defined signal being raised, rather than an implementation-defined value being obtained, but this is not going to happen when you're dealing with a constant expression which will be evaluated at compile-time.
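To illustrate how the expression might be used, here is a minimal sketch; MY_INT_MAX is a made-up name, and the comparison against INT_MAX from <limits.h> is only there to check the result on a given platform:

#include <limits.h>
#include <stdio.h>

/* Works in ordinary C expressions, but (unlike the real INT_MAX) not in #if directives */
#define MY_INT_MAX ((int)-1U/2 == (int)(-1U/2) ? (int)-1U : (int)(-1U/2))

int main(void)
{
    printf("MY_INT_MAX = %d\n", MY_INT_MAX);
    printf("INT_MAX    = %d\n", INT_MAX);
    return 0;
}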

R.. GitHub STOP HELPING ICE answered Nov 01 '22 06:11


I like the definitions (shifting an unsigned 1 so the shift does not land in the sign bit of a signed int, with CHAR_BIT taken from <limits.h>):

#define INT_MIN ((int)(1U << (sizeof(int)*CHAR_BIT - 1)))
#define INT_MAX (-(INT_MIN + 1))
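A quick way to sanity-check these definitions is a small test program; hypothetical MY_ names are used here so they do not collide with the macros already provided by <limits.h>, which is needed anyway for CHAR_BIT:

#include <limits.h>   /* CHAR_BIT, plus the reference INT_MIN/INT_MAX */
#include <stdio.h>

/* Same shift-based idea as above, under hypothetical names */
#define MY_INT_MIN ((int)(1U << (sizeof(int)*CHAR_BIT - 1)))
#define MY_INT_MAX (-(MY_INT_MIN + 1))

int main(void)
{
    printf("MY_INT_MIN = %d (reference %d)\n", MY_INT_MIN, INT_MIN);
    printf("MY_INT_MAX = %d (reference %d)\n", MY_INT_MAX, INT_MAX);
    return 0;
}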
abelenky answered Nov 01 '22 08:11