
Signed & unsigned integer multiplication

In fixed-point math I use a lot of 16-bit signals and perform multiplication with 32-bit intermediate results. For example:

int16_t a = 16384;  // 1.0 in q14 (1.0 * 2^14)
int16_t b = -24576; // -1.5 in q14 (-1.5 * 2^14)
int16_t c; // result will be q14

c = (int16_t)(((int32_t)a * (int32_t)b)>>14);

Let's say a is a q14 number; then c will have the same scaling as b.
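To make the scaling concrete with the values above: 16384 * -24576 = -402653184; shifting right by 14 gives -24576, which read as q14 is -24576 / 2^14 = -1.5, i.e. exactly 1.0 * -1.5.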

This is fine and works for unsigned as well as signed arithmetic.

The question is: What happens if I were to mix types? For example if I know the multiplier "a" is always going to range from 0.0 to 1.0, it is tempting to make it an unsigned int q15 to get the extra bit of precision (and change the shift count to 15). However, I never understood what happens if you try to multiply signed and unsigned numbers in C and have avoided it. In ASM I don't recall there being a multiply instruction that would work with mixed types on any architecture, so even if C does the right thing I'm not sure it would generate efficient code.
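To illustrate, here is a minimal sketch of that tempting mixed-type variant (the variable names are only for illustration, and whether this is safe is exactly what is being asked):

uint16_t gain_q15 = 32768u; // 1.0 in q15 (note: 32768 does not fit in int16_t)
int16_t  b        = -24576; // -1.5 in q14
int16_t  c;                 // intended to stay q14

// Cast both operands to int32_t so the whole multiply is done in signed
// arithmetic; 32768 * 24576 is well inside the int32_t range. As in the
// snippet above, the right shift of a negative intermediate relies on the
// compiler using an arithmetic shift.
c = (int16_t)(((int32_t)gain_q15 * (int32_t)b) >> 15);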

Should I continue my practice of not mixing signed and unsigned types in fixed-point code? Or can this work nicely?

asked Jun 06 '13 by phkahler


1 Answer

This post talks about what happens when multiplying signed and unsigned integers. The short answer is that, as long as they have the same rank (size), the signed operand is implicitly converted to unsigned.

As long as you understand the conversion rules (of whatever language you are programming in), or use explicit casts, and you also understand the implications of converting from signed to unsigned (a negative number will produce what may appear to be gibberish once it is read as an unsigned value), then there should be no issue mixing signed and unsigned types.
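A minimal, self-contained sketch of those rules in C (assuming a typical platform where int is 32 bits; the variable names are only for illustration):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Same rank: the signed operand is converted to unsigned before the
       multiply, so a negative value wraps around to a huge unsigned one. */
    int32_t  s = -2;
    uint32_t u = 3;
    printf("%u\n", s * u);     /* prints 4294967290, i.e. (uint32_t)-6 */

    /* Types narrower than int are first promoted to (signed) int, so this
       multiply is an ordinary signed multiply and prints -6. */
    int16_t  s16 = -2;
    uint16_t u16 = 3;
    printf("%d\n", s16 * u16);

    return 0;
}

For the fixed-point case in the question, casting both 16-bit operands to int32_t before the multiply, as the original snippet already does, keeps the whole computation signed and sidesteps the issue entirely.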

answered Sep 30 '22 by Benjamin Leinweber