I find myself needing to compute a 16-bit unsigned integer divided by a power of 2, with the result as a 32-bit float (standard IEEE format). This is on an embedded system and the routine is used repeatedly, so I am looking for something better than (float)x/(float)(1<<n). In addition, the C compiler is pretty limited (no math library, bit fields, reinterpret_cast, etc.).
If you don't mind some bit twiddling then the obvious way to go is to convert the integer to float and then subtract n from the exponent bits to achieve the division by 2^n:
y = (float)x;                        // convert to float
uint32_t yi = *(uint32_t *)&y;       // get float value as bits
uint32_t exponent = yi & 0x7f800000; // extract exponent bits 30..23
exponent -= (n << 23);               // subtract n from exponent
yi = (yi & ~0x7f800000) | exponent;  // insert modified exponent back into bits 30..23
y = *(float *)&yi;                   // copy bits back to float
Note that this fails for x = 0, so you should check x > 0 before conversion.
Total cost is one int-to-float conversion plus a handful of integer bitwise/arithmetic operations. If you use a union you can avoid having separate int/float variables and just work directly on the float (see the sketch below).
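For illustration, here is one way the union variant might look as a self-contained function. The name div_pow2 is made up for the example, and it assumes IEEE-754 single-precision floats and that n is small enough that the biased exponent does not underflow:

#include <stdint.h>

/* Sketch: compute x / 2^n by adjusting the float exponent directly.
   Assumes IEEE-754 single precision and that the exponent stays >= 1
   after subtracting n (i.e. the result does not go subnormal). */
float div_pow2(uint16_t x, unsigned n)
{
    union { float f; uint32_t u; } v;

    if (x == 0)                                /* exponent trick fails for zero */
        return 0.0f;

    v.f = (float)x;                            /* convert to float */
    uint32_t exponent = v.u & 0x7f800000u;     /* extract exponent bits 30..23 */
    exponent -= (uint32_t)n << 23;             /* subtract n from the exponent */
    v.u = (v.u & ~0x7f800000u) | exponent;     /* put the modified exponent back */
    return v.f;                                /* reinterpret bits as float */
}

Type-punning through a union is well-defined in C (unlike the pointer cast, which can run afoul of strict aliasing), which is why this form is often preferred on embedded compilers.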