All operations on "standard" signed integer types in C (short, int, long, etc) exhibit undefined behaviour if they yield a result outside of the [TYPE_MIN, TYPE_MAX] interval (where TYPE_MIN, TYPE_MAX are the minimum and the maximum integer value respectively. that can be stored by the specific integer type.
According to the C99 standard, however, all intN_t types are required to have a two's complement representation:
7.18.1.1 Exact-width integer types
1. The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.
Does this mean that intN_t types in C99 exhibit well-defined behaviour in case of an integer overflow? For example, is this code well-defined?
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    /* INT32_MAX + 1 is not representable in 32 bits -- does it wrap? */
    printf("Minimum 32-bit representable number: %" PRId32 "\n", INT32_MAX + 1);
    return 0;
}
No, it doesn't.
The requirement for a 2's-complement representation for values within the range of the type does not imply anything about the behavior on overflow.
The types in <stdint.h> are simply typedefs (aliases) for existing types. Adding a typedef doesn't change a type's behavior.
Section 6.5 paragraph 5 of the C standard (both C99 and C11) still applies:
If an exceptional condition occurs during the evaluation of an expression (that is, if the result is not mathematically defined or not in the range of representable values for its type), the behavior is undefined.
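Because overflow stays undefined no matter which typedef is used, code that needs the full int32_t range has to prevent the overflow rather than rely on wrap-around. Here is a minimal sketch of the usual pre-condition check (the name checked_add32 is invented for illustration; it is not a standard function):

#include <stdbool.h>
#include <stdint.h>

/* Stores a + b in *result and returns true if the sum is representable;
   returns false, without touching *result, if it would overflow. */
bool checked_add32(int32_t a, int32_t b, int32_t *result)
{
    if ((b > 0 && a > INT32_MAX - b) ||
        (b < 0 && a < INT32_MIN - b))
        return false;           /* a + b would overflow */
    *result = a + b;
    return true;
}

The subtractions INT32_MAX - b and INT32_MIN - b cannot themselves overflow given the sign tests, so the function never performs an undefined operation.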
None of this affects unsigned types, because unsigned operations do not overflow; they're defined to yield the wrapped result, reduced modulo TYPE_MAX + 1. Except that unsigned types narrower than int are promoted to (signed) int, and can therefore run into the same problems. For example, this:
unsigned short x = USHRT_MAX;
unsigned short y = USHRT_MAX;
unsigned short z = x * y;
causes undefined behavior if short is narrower than int. (If short and int are 16 and 32 bits, respectively, then 65535 * 65535 yields 4294836225, which exceeds INT_MAX.)
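One common way to sidestep the promotion, sketched here under the assumption that unsigned short is 16 bits, is to force the multiplication into unsigned arithmetic, which is defined to wrap:

unsigned short x = USHRT_MAX;
unsigned short y = USHRT_MAX;
/* 1u * x converts x to unsigned int before the multiplication, so the
   product is computed in unsigned arithmetic and wraps instead of
   overflowing; the cast then reduces it modulo USHRT_MAX + 1. */
unsigned short z = (unsigned short)(1u * x * y);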
Although storing an out-of-range value to a signed type in memory will generally store the bottom bits of the value, and reloading the value from memory will sign-extend it, many compilers' optimizations may assume that signed arithmetic won't overflow, and the effects of overflow may be unpredictable in many real scenarios. As a simple example, consider a 16-bit DSP which uses its one 32-bit accumulator for return values (e.g. TMS3205X) and this function:

int16_t foo(int16_t bar) { return bar + 1; }

A compiler would be free to load bar, sign-extended, into the accumulator, add one to it, and return. If the calling code were e.g. long z = foo(32767);, the code might very well set z to 32768 rather than -32768.
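If wrap-around semantics are genuinely wanted, they have to be requested explicitly. One portable way to get a two's-complement wrapping add for int16_t, sketched with an invented helper name wrap_add16, is to do the arithmetic in a type wide enough that it cannot overflow:

#include <stdint.h>

/* Adds two int16_t values, wrapping modulo 2^16. The sum is computed in
   uint32_t, which can never overflow here, so no operation is undefined. */
int16_t wrap_add16(int16_t a, int16_t b)
{
    uint32_t u = ((uint32_t)(uint16_t)a + (uint32_t)(uint16_t)b) & 0xFFFFu;
    /* Map [0x8000, 0xFFFF] back onto the negative range [-0x8000, -1]. */
    return (u >= 0x8000u) ? (int16_t)((int32_t)u - 0x10000) : (int16_t)u;
}

With this helper, wrap_add16(32767, 1) returns -32768 on every conforming implementation, regardless of how aggressively the compiler optimizes.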