The following C function is from the fastapprox project.
/* requires <stdint.h> for uint32_t */
static inline float
fasterlog2 (float x)
{
  /* Reinterpret the IEEE-754 bits of x as an unsigned integer. */
  union { float f; uint32_t i; } vx = { x };
  /* vx.i / 2^23 == biased exponent + fractional mantissa. */
  float y = vx.i;
  y *= 1.1920928955078125e-7f;   /* 1 / 2^23 */
  /* Subtract the (adjusted) exponent bias. */
  return y - 126.94269504f;
}
Could some experts here explain why the exponent bias used in the above code is 126.94269504 instead of 127? Is it a more accurate bias value?
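For reference, a quick way to see the effect of the constant is to compare the approximation against log2f, once with the plain bias 127 and once with 126.94269504. Here is a small test harness I put together (mine, not part of fastapprox):

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Same bit trick as above, with the bias left as a parameter
   so both variants can be compared side by side. */
static inline float
log2_bits (float x, float bias)
{
  union { float f; uint32_t i; } vx = { x };
  return (float) vx.i * 1.1920928955078125e-7f - bias;
}

int
main (void)
{
  for (float x = 1.0f; x <= 8.0f; x += 0.5f)
    printf ("x=%4.1f  log2f=%8.5f  bias 127: %8.5f  bias 126.94269504: %8.5f\n",
            x, log2f (x), log2_bits (x, 127.0f), log2_bits (x, 126.94269504f));
  return 0;
}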
In the project you linked, they included a Mathematica notebook with an explanation of their algorithms, which includes the "mysterious" -126.94269 value.
If you need a viewer, you can get one from the Mathematica website for free.
Edit: Since I'm feeling generous, here's the relevant section in screenshot form.
Simply put, they explain that the value is "simpler, faster, and less accurate".
They're not using -126.94269 in place of -127; they're using it in place of the result of the following calculation (values rounded for brevity):
-124.2255 - 1.498 * mx - (1.72588 / (0.35201 + mx))
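To make the relationship concrete, here is a sketch of the more accurate variant, where the correction is computed from mx, the mantissa re-biased into [0.5, 1). The constants are the rounded ones quoted above, so treat this as illustrative rather than a verbatim copy of the library's fastlog2:

#include <stdint.h>

static inline float
fastlog2_sketch (float x)
{
  union { float f; uint32_t i; } vx = { x };
  /* Keep the mantissa bits and force the exponent field to that of 0.5,
     so mx.f lands in [0.5, 1). */
  union { uint32_t i; float f; } mx = { (vx.i & 0x007FFFFFu) | 0x3f000000u };
  float y = vx.i;
  y *= 1.1920928955078125e-7f;   /* biased exponent + fractional mantissa */
  return y - 124.2255f - 1.498f * mx.f - 1.72588f / (0.35201f + mx.f);
}

With these rounded constants the correction term works out to roughly -127 at mx = 0.5 and mx = 1, and to about -126.91 near the middle of the range; fasterlog2 simply freezes it at the fixed value -126.94269504 inside that band, trading some accuracy for fewer arithmetic operations and no division.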