I am working on a fixed-point platform (floating-point arithmetic not supported).
I represent any rational number q as the floor of q * (1 << precision). I need an efficient method for calculating log base 2 of x, where 1 < x < 2.
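For concreteness, here is a minimal sketch of moving values in and out of this representation (ToFixed and FromFixed are hypothetical helper names; the double arithmetic is only for host-side testing, since the target has no floating point):

#include <stdint.h>

// Hypothetical helpers, for illustration only. The double math is meant
// for testing on a PC, not for the fixed-point target.
static uint64_t ToFixed(double q, uint8_t precision)
{
    // floor(q * 2^precision), assuming q >= 0
    return (uint64_t)(q * (double)((uint64_t)1 << precision));
}

static double FromFixed(uint64_t f, uint8_t precision)
{
    return (double)f / (double)((uint64_t)1 << precision);
}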
Here is what I've done so far:
uint64_t Log2(uint64_t x, uint8_t precision)
{
    uint64_t res = 0;
    uint64_t one = (uint64_t)1 << precision; // 1.0 in this representation
    uint64_t two = (uint64_t)2 << precision; // 2.0 in this representation

    for (uint8_t i = precision; i > 0; i--)
    {
        // Squaring doubles the logarithm; note x * x needs 2 * precision + 2
        // bits, so this only fits in 64 bits while precision <= 31.
        x = (x * x) / one; // now 1 < x < 4
        if (x >= two)
        {
            x >>= 1; // renormalize: now 1 < x < 2
            res += (uint64_t)1 << (i - 1); // record a 1 bit of the result
        }
    }
    return res;
}
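As a quick host-side sanity check (assuming a PC with math.h available; the test value 1.5 is an arbitrary choice), the result can be compared against the floating-point reference:

#include <stdint.h>
#include <stdio.h>
#include <math.h>

int main(void)
{
    const uint8_t precision = 31;
    const uint64_t one = (uint64_t)1 << precision;
    uint64_t x = one + one / 2; // 1.5 in fixed point
    double approx = (double)Log2(x, precision) / (double)one;
    printf("Log2(1.5) ~ %.9f (reference %.9f)\n", approx, log2(1.5));
    return 0;
}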
This works well; however, it takes a toll on the overall performance of my program, which has to execute it for a large number of input values. For what it's worth, the precision used is 31, but this may change, so I need to keep it as a variable. Are there any optimizations that I can apply here?
I was thinking of something along the lines of "multiply first, sum up last". But that would imply calculating x ^ (2 ^ precision), which would overflow very quickly: each squaring doubles the number of significant bits, so the exact product would need on the order of 2 ^ precision bits.
Thank you.
UPDATE:
I have previously tried to get rid of the branch, but it just made things worse:
for (uint8_t i = precision; i > 0; i--)
{
    x = (x * x) / one;    // now 1 < x < 4
    uint64_t n = x / two; // 1 if x >= 2, else 0
    x >>= n;              // now 1 < x < 2
    res += n << (i - 1);
}
return res;
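Another branchless shape that might be worth measuring (just a sketch; whether it beats the branch depends on the target's branch predictor and shifter) derives an all-ones mask from the comparison instead of dividing:

for (uint8_t i = precision; i > 0; i--)
{
    x = (x * x) / one;                  // now 1 < x < 4
    uint64_t m = -(uint64_t)(x >= two); // all ones if x >= 2, else zero
    x >>= (m & 1);                      // conditional halving, no branch
    res += m & ((uint64_t)1 << (i - 1));
}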
ANSWER:
The only things I can think of are doing the loop with a right shift instead of a decrement and changing a few operations to their equivalent binary ops. That may or may not be relevant to your platform, but on my x64 PC they yield an improvement of about 2%:
uint64_t Log2(uint64_t x, uint8_t precision)
{
uint64_t res = 0;
uint64_t two = (uint64_t)2 << precision;
for (uint64_t b = (uint64_t)1 << (precision - 1); b; b >>= 1)
{
x = (x * x) >> precision; // now 1 < x < 4
if (x & two) // tests the bit at position precision + 1, i.e. x >= 2
{
x >>= 1; // now 1 < x < 2
res |= b;
}
}
return res;
}
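A minimal timing harness, assuming a host with time.h available (the input sweep and step size are arbitrary choices), to reproduce this kind of comparison:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    const uint8_t precision = 31;
    const uint64_t one = (uint64_t)1 << precision;
    volatile uint64_t sink = 0; // keeps the compiler from discarding the loop

    clock_t t0 = clock();
    for (uint64_t x = one + 1; x < 2 * one; x += 4096)
        sink += Log2(x, precision);
    clock_t t1 = clock();

    printf("%.3f s (checksum %llu)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (unsigned long long)sink);
    return 0;
}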