I have read that the signed int range is [−32767, +32767], but I can write, for example:
#include <stdio.h>

int main(void) {
    int a = 70000;
    int b = 71000;
    int c = a + b;
    printf("%i\n", c);
    return 0;
}
The output is 141000 (correct). Shouldn't the debugger tell me "this operation is out of range" or something similar?
I suppose this has to do with me not knowing the basics of C programming, but none of the books I'm currently reading says anything about this "issue".
EDIT: 2147483647 seems to be the upper limit, thank you. If a sum exceeds that number, the result is negative, which is what I expected, BUT if it is a subtraction, for example 2147483649 - 2147483647 = 2, the result is still correct. I mean, why is the value 2147483649 correctly held for that subtraction (or at least it seems to be)?
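As a check (a minimal sketch, assuming a 32-bit int and a compiler where long or long long is 64 bits), printing the size of the constant suggests that 2147483649 is stored in a type wider than int, which would explain why the subtraction still comes out right:

#include <stdio.h>

int main(void) {
    /* 2147483649 does not fit in a 32-bit int, so the compiler gives the
       constant a wider type (long or long long); the subtraction is then
       carried out in that wider type. */
    printf("sizeof(int)             = %zu\n", sizeof(int));
    printf("sizeof 2147483649       = %zu\n", sizeof 2147483649);
    printf("2147483649 - 2147483647 = %lld\n",
           (long long)(2147483649 - 2147483647));
    return 0;
}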
The range [−32767, +32767] is the required minimum range. An implementation is allowed to provide a larger range.
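As an illustration (a minimal sketch, assuming a hosted implementation), the actual range your compiler provides can be checked with the INT_MIN and INT_MAX macros from <limits.h>:

#include <limits.h>
#include <stdio.h>

int main(void) {
    /* INT_MIN and INT_MAX describe the range this implementation actually
       provides, which may be wider than the required minimum. */
    printf("INT_MIN = %d\n", INT_MIN);
    printf("INT_MAX = %d\n", INT_MAX);
    return 0;
}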
All types are compiler-dependent. int used to be the "native word" of the underlying hardware, which on 16-bit systems meant that int was 16 bits (hence the -32k to +32k range). When 32-bit systems came along, int naturally followed and became 32 bits, which can store values of roughly -2 billion to +2 billion. However, this "native word" convention did not carry over to 64-bit systems; I know of no 64-bit system or compiler where int is 64 bits.
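If you want to see what your own system uses, here is a small sketch (an illustration; the sizes will vary by platform) that prints the width of the standard integer types:

#include <stdio.h>

int main(void) {
    /* Sizes are in bytes; on a typical 64-bit Linux system this prints
       2, 4, 8, 8, but only the minimum sizes are guaranteed. */
    printf("short     : %zu bytes\n", sizeof(short));
    printf("int       : %zu bytes\n", sizeof(int));
    printf("long      : %zu bytes\n", sizeof(long));
    printf("long long : %zu bytes\n", sizeof(long long));
    return 0;
}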
See e.g. this reference on integer types for more information.