I'm trying to figure out whether the C Standard (C90, though I'm working from Derek Jones' annotated C99 book) guarantees that I will not lose precision multiplying two unsigned 8-bit values and storing the result in a 16-bit variable. An example statement is as follows:
unsigned char foo;
unsigned int foo_u16 = foo * 10;
Our Keil 8051 compiler (v7.50 at present) will generate a MUL AB instruction, which stores the MSB in the B register and the LSB in the accumulator. If I cast foo to an unsigned int first:
unsigned int foo_u16 = (unsigned int)foo * 10;
then the compiler correctly decides I want an unsigned int there and generates an expensive call to a 16x16-bit integer multiply routine. I would like to argue beyond reasonable doubt that this defensive measure is not necessary. As I read the integer promotions described in 6.3.1.1, the effect of the first line shall be as if foo and 10 were promoted to unsigned int, the multiplication performed, and the result stored as unsigned int in foo_u16. If the compiler knows an instruction that does 8x8->16-bit multiplications without loss of precision, so much the better; but the precision is guaranteed. Am I reading this correctly?
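If my reading is right, a minimal sketch like the following (ordinary hosted C, not the Keil code in question) should print the full product rather than a value truncated to 8 bits:

#include <stdio.h>

int main(void)
{
    unsigned char foo = 255;

    /* Per 6.3.1.1, foo (and the constant 10) are promoted to full
     * integer width before the multiply, so the product is computed
     * at (at least) 16 bits, not truncated to 8 bits. */
    unsigned int foo_u16 = foo * 10;

    printf("%u\n", foo_u16); /* 2550, not 2550 % 256 == 246 */
    return 0;
}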
Best regards, Craig Blome
The promotion is guaranteed, but the promotion is made to signed int type if the range of unsigned char fits into the range of signed int. So (assuming it fits), from the language point of view your
unsigned int foo_u16 = foo * 10;
is equivalent to
unsigned int foo_u16 = (signed) foo * 10;
while what you apparently want is
unsigned int foo_u16 = (unsigned) foo * 10;
The result of the multiplication can be different if it (the result) doesn't fit into the signed int range.
If your compiler interprets it differently, it could be a bug in the compiler (again, under the assumption that the range of unsigned char fits into the range of signed int).
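To make the difference concrete, here is a minimal sketch assuming a target where int is 16 bits (INT_MAX == 32767), as on the 8051; the function names are mine, for illustration only:

/* Hypothetical 16-bit-int target, e.g. an 8051 toolchain. */
unsigned int mul_promoted(unsigned char a, unsigned char b)
{
    /* a and b promote to signed int; for a == b == 200 the product
     * 40000 exceeds INT_MAX, so the signed multiplication overflows:
     * undefined behavior. */
    return a * b;
}

unsigned int mul_cast(unsigned char a, unsigned char b)
{
    /* Casting one operand to unsigned int converts the other one to
     * unsigned int as well (usual arithmetic conversions); unsigned
     * arithmetic is defined to wrap, so the result is well defined. */
    return (unsigned int)a * b;
}

Note that in your specific foo * 10 case the largest possible product is 255 * 10 == 2550, which fits comfortably in a 16-bit signed int, so even the promotion to signed int cannot overflow there.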