Is the Carmack/Welsh inverse square root algorithm biased?

Tags: c, algorithm, math

When implementing "Carmack's Inverse Square Root" algorithm, I noticed that the results seem biased. The following code seems to give better results:

float InvSqrtF(float x)
{
    // Initial approximation by Greg Walsh.
    int i  = * ( int* ) &x;
    i  = 0x5f3759df - ( i >> 1 );
    float y  = * ( float * ) &i;
    // Two iterations of Newton-Raphson's method to refine the initial estimate.
    x *= 0.5f;
    float f = 1.5F;
    y  = y * ( f - ( x * y * y ) );
    y  = y * ( f - ( x * y * y ) );
    * ( int * )(&y) += 0x13; // More magic.
    return y;
}

The key difference is in the penultimate "more magic" line. Since the initial results were too low by a fairly constant factor, this adds 19 * 2^(exponent(y)-bias) to the result with just a single instruction. It seems to give me about 3 extra bits, but am I overlooking something?

asked Feb 07 '13 by MSalters

2 Answers

Newton's method produces a bias. The function whose zero is to be found,

f(y) = x - 1/y²

is concave, so - unless you start with a y ≥ √(3/x) - the Newton method only produces approximations ≤ 1/√x (and strictly smaller, unless you start with the exact result) with exact arithmetic.

Floating point arithmetic occasionally produces too large approximations, but typically not in the first two iterations (since the initial guess usually isn't close enough).

So yes, there is a bias, and adding a small quantity generally improves the result. But not always: in the regions around 1.25 and 0.85, for example, the results without the adjustment are better than with it. In other regions the adjustment yields one bit of additional precision, in yet others more.

In any case, for the best results the magic constant to add should be tuned to the region from which x is most often taken.

answered Nov 07 '22 by Daniel Fischer

As this method is an approximation, the result is sometimes overestimated and sometimes underestimated. You can find in McEniry's paper some nice figures showing how this error is distributed for different configurations, along with the math behind them.

So, unless you have solid evidence that in your domain of application the result is clearly biased, I would rather tune the magic constant as suggested in Lomont's document :-)

answered Nov 06 '22 by dunadar