I am working on a 16-bit processor, so most of my data is 16-bit except where necessary.
If I have two 16-bit variables x and y and sum them together into a 32-bit variable, what will the compiler do?
uint16_t x, y;
uint32_t z;
x = 65504;
y = 65503;
z = x + y;
Will the result in z be identical to z = (uint32_t)x + (uint32_t)y, or do I need to add the casts myself?
I have tried this on my compiler and the casts don't seem to make any difference, but this might just be a compiler oddity for this little embedded processor.
In C99 onwards,* operands of arithmetic operators are implicitly promoted to be at least as big as int, as part of the usual arithmetic conversions. So the behaviour of your code depends on the native size of int on your platform.
If your int is 32-bit, then your code is equivalent to:
z = (int)x + (int)y;
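With these particular values, the promoted 32-bit addition yields 131007, which fits in the promoted type and is then converted to uint32_t unchanged. A minimal check (a sketch assuming a hosted compiler with a 32-bit int and printf available) would be:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t x = 65504;
    uint16_t y = 65503;
    uint32_t z = x + y;  /* x and y are promoted to 32-bit int before the addition */

    printf("%lu\n", (unsigned long)z);  /* prints 131007 on a 32-bit-int platform */
    return 0;
}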
If your int is 16-bit, then no widening conversion occurs: the addition is performed in 16 bits and wraps around, so 65504 + 65503 = 131007 wraps to 131007 - 65536 = 65471 before being assigned to z, giving an incorrect result. In that case you do need the explicit casts, as shown below.
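A portable way to write this (a sketch, not specific to any compiler) is to cast at least one operand before the addition, which forces the other operand, and the arithmetic itself, up to 32 bits:

z = (uint32_t)x + y;  /* y is converted to uint32_t to match x, so the sum is computed in 32 bits */

Casting both operands, as in your z = (uint32_t)x + (uint32_t)y, is equally correct; the second cast is simply redundant.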
* Prior to C99, the promotion rules were less well-defined (although I forget the details).