I'm reading this document: http://software.intel.com/en-us/articles/interactive-ray-tracing
and I stumbled upon these three lines of code:
The SIMD version is already quite a bit faster, but we can do better. Intel has added a fast 1/sqrt(x) function to the SSE2 instruction set. The only drawback is that its precision is limited. We need the precision, so we refine it using Newton-Raphson:
__m128 nr = _mm_rsqrt_ps( x );
__m128 muls = _mm_mul_ps( _mm_mul_ps( x, nr ), nr );
result = _mm_mul_ps( _mm_mul_ps( half, nr ), _mm_sub_ps( three, muls ) );
This code assumes the existence of a __m128 variable named 'half' (four times 0.5f) and a variable 'three' (four times 3.0f).
I know how to use Newton-Raphson to find a function's zero, and I know how to use it to compute the square root of a number, but I just can't see how this code performs it.
Can someone explain it to me please?
The Newton-Raphson method (also known as Newton's method) is a way to quickly find a good approximation for the root of a real-valued function, i.e. a solution of f(x) = 0. It uses the idea that a continuous and differentiable function can be approximated by a straight line tangent to it.
If you provide a guess that is sufficiently close to a simple root, Newton's method will converge quadratically to the nearby root. However, if your guess is near a critical point of the function, Newton's method will produce a "next guess" that is far away from the initial guess.
Given the Newton iteration y_(n+1) = y_n/2 * (3 - x*(y_n)^2), it should be quite straightforward to see this in the source code.
__m128 nr = _mm_rsqrt_ps( x );                       // the initial approximation y_0
__m128 muls = _mm_mul_ps( _mm_mul_ps( x, nr ), nr ); // muls = x*nr*nr == x*(y_n)^2
result = _mm_mul_ps(
             _mm_mul_ps( half, nr ),                 // y_n * 0.5
             _mm_sub_ps( three, muls )               // 3.0 - muls
         );
And to be precise, this algorithm is for the inverse square root.
Note that this still doesn't give a fully accurate result. rsqrtps with a NR iteration gives almost 23 bits of accuracy, vs. sqrtps's 24 bits with correct rounding for the last bit.
The limited accuracy is an issue if you want to truncate the result to integer: (int)4.99999 is 4. Also, watch out for the x == 0.0 case if using sqrt(x) ~= x * rsqrt(x), because 0 * +Inf = NaN.
To compute the inverse square root of a, Newton's method is applied to the equation 0 = f(x) = a - x^(-2) with derivative f'(x) = 2*x^(-3), and thus the iteration step
N(x) = x - f(x)/f'(x) = x - (a*x^3 - x)/2 = x/2 * (3 - a*x^2)
This division-free method has -- in contrast to the globally converging Heron's method -- a limited region of convergence, so you need an already good approximation of the inverse square root to get a better approximation.