Recently I was profiling a program in which the hotspot is definitely this:
double d = somevalue();
double d2 = d*d;
double c = 1.0/d2; // HOT SPOT
The value d2 is not used afterwards because I only need the value c. Some time ago I read about the Carmack method of fast inverse square root; this is obviously not the same case, but I'm wondering whether a similar algorithm can help me compute 1/x^2.
I need quite accurate precision: I've checked that my program doesn't give correct results with gcc's -ffast-math option. (g++-4.5)
As shown below, the SSE_InvSqrt function is the fastest way to compute 1 / sqrt(x) with reasonable precision. However, the standard sqrt function can provide more or less the same performance on a SISD architecture, with definitely better portability and maintainability.
A single Newton-Raphson iteration is performed to obtain a more accurate approximation of the inverse square root of the input, and the result of that iteration is the function's return value. The result is accurate to within a maximum error of 0.175%.
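SSE_InvSqrt itself is not reproduced here, so the following is only a sketch of the same idea under my own assumptions: the _mm_rsqrt_ss intrinsic supplies the hardware estimate and a single Newton-Raphson step refines it (the wrapper and the name sse_inv_sqrt are mine).
#include <xmmintrin.h>

static inline float sse_inv_sqrt(float x)
{
    /* Hardware estimate of 1/sqrt(x), roughly 12 bits of precision. */
    float y = _mm_cvtss_f32(_mm_rsqrt_ss(_mm_set_ss(x)));
    /* One Newton-Raphson step: y <- y * (1.5 - 0.5 * x * y * y). */
    return y * (1.5f - 0.5f * x * y * y);
}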
The tricks for doing fast square roots and the like get their performance by sacrificing precision. (Well, most of them.)
Are you sure you need double precision? You can sacrifice precision easily enough:
double d = somevalue();
float c = 1.0f / ((float) d * (float) d);
The 1.0f is absolutely mandatory in this case; if you use 1.0 instead, you will get double precision.
Have you tried enabling "sloppy" math on your compiler? On GCC you can use -ffast-math; there are similar options for other compilers. The sloppy math may be more than good enough for your application. (Edit: I did not see any difference in the resulting assembly.)
If you are using GCC, have you considered using -mrecip? There is a "reciprocal estimate" function which only has about 12 bits of precision, but it is much faster. You can use the Newton-Raphson method to increase the precision of the result. The -mrecip option will cause the compiler to generate the reciprocal estimate and Newton-Raphson steps for you automatically, although you can always write the assembly yourself if you want to fine-tune the performance-precision trade-off. (Newton-Raphson converges very quickly.) (Edit: I was unable to get GCC to generate RCPSS. See below.)
I found a blog post (source) discussing the exact problem you are going through, and the author's conclusion is that techniques like the Carmack method are not competitive with the RCPSS instruction (which the -mrecip flag on GCC uses).
The reason division can be so slow is that processors generally have only one division unit, and it is often not pipelined. So you can have a few multiplications in the pipeline all executing simultaneously, but no division can be issued until the previous division finishes.
Carmack's method: it is obsolete on modern processors, which have reciprocal estimation opcodes. For reciprocals, the best version I've seen only gives one bit of precision -- nothing compared to the 12 bits of RCPSS. I think it is a coincidence that the trick works so well for reciprocal square roots; a coincidence that is unlikely to be repeated.
Relabeling variables: as far as the compiler is concerned, there is very little difference between 1.0/(x*x) and double x2 = x*x; 1.0/x2. I would be surprised if you found a compiler that generates different code for the two versions with optimizations turned on, even at the lowest level.
Using pow: the pow library function is a total monster. With GCC's -ffast-math turned off, the library call is fairly expensive. With GCC's -ffast-math turned on, you get the exact same assembly code for pow(x, -2) as you do for 1.0/(x*x), so there is no benefit.
Here is an example of a Newton-Raphson approximation for the inverse square of a double-precision floating-point value.
#define RECIP_ITER 1   /* number of Newton-Raphson refinement steps */

static double invsq(double x)
{
    double y;
    int i;
    /* Rough (~12-bit) estimate of 1/x: convert to single precision,
       apply RCPSS, then convert back to double. */
    __asm__ (
        "cvtpd2ps %1, %0\n\t"
        "rcpss %0, %0\n\t"
        "cvtps2pd %0, %0"
        : "=x"(y)
        : "x"(x));
    /* Newton-Raphson refinement of the reciprocal: y <- y * (2 - x*y). */
    for (i = 0; i < RECIP_ITER; ++i)
        y *= 2 - x * y;
    return y * y;   /* (1/x)^2 == 1/x^2 */
}
Unfortunately, with RECIP_ITER=1, benchmarks on my computer put it slightly slower (~5%) than the simple version 1.0/(x*x). It's faster (2x as fast) with zero iterations, but then you only get 12 bits of precision. I don't know if 12 bits is enough for you.
I think one of the problems here is that this is too small a micro-optimization; at this scale the compiler writers are on nearly equal footing with the assembly hackers. Maybe if we had the bigger picture we could see a way to make it faster.
For example, you said that -ffast-math caused an undesirable loss of precision; this may indicate a numerical stability problem in the algorithm you are using. With the right choice of algorithm, many problems can be solved with float instead of double. (Of course, you may just need more than 24 bits. I don't know.)
I suspect the RCPSS method shines if you want to compute several of these in parallel.
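To illustrate that, here is a speculative sketch (the name invsq4 and the choice of a single refinement step are my own, not taken from any benchmark above) that computes 1/x^2 for four floats at once using the packed RCPPS estimate:
#include <xmmintrin.h>

static void invsq4(const float in[4], float out[4])
{
    __m128 x   = _mm_loadu_ps(in);
    __m128 xx  = _mm_mul_ps(x, x);      /* x^2 in all four lanes       */
    __m128 y   = _mm_rcp_ps(xx);        /* ~12-bit estimate of 1/x^2   */
    __m128 two = _mm_set1_ps(2.0f);
    /* One Newton-Raphson step per lane: y <- y * (2 - x^2 * y). */
    y = _mm_mul_ps(y, _mm_sub_ps(two, _mm_mul_ps(xx, y)));
    _mm_storeu_ps(out, y);
}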
For your current program you have identified the hotspot - good. As an alternative to speeding up 1/d^2, you have the option of changing the program so that it does not compute 1/d^2 so often. Can you hoist it out of an inner loop (see the sketch below)? For how many different values of d do you compute 1/d^2? Could you pre-compute all the values you need and then look up the results? This is a bit cumbersome for 1/d^2 alone, but if 1/d^2 is part of some larger chunk of code, it might be worthwhile applying this trick to that. You say that if you lower the precision, you don't get good enough answers. Is there any way you can rephrase the code so that it behaves better numerically? Numerical analysis is subtle enough that it might be worth trying a few things and seeing what happens.
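To make the hoisting suggestion concrete, here is a minimal sketch; the helpers somevalue and consume and the loop structure are placeholders of mine, not anything from the question:
double somevalue(int j);        /* assumed to exist in the real program  */
void consume(double c, int k);  /* placeholder for the work that uses c  */

void process(int nj, int nk)
{
    for (int j = 0; j < nj; ++j) {
        double d = somevalue(j);
        double c = 1.0 / (d * d);   /* computed once per outer iteration */
        for (int k = 0; k < nk; ++k)
            consume(c, k);          /* the inner loop only reuses c      */
    }
}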
Ideally, of course, you would find some optimised routine that draws on years of research - is there anything in lapack or linpack that you could link to?
Yes, you can certainly try and work something out. Let me just give you some general ideas; you can fill in the details.
First, let's see why Carmack's trick works:
We write x = M × 2^E in the usual way. Now recall that the IEEE float stores the exponent offset by a bias: if e denotes the exponent field, we have e = Bias + E ≥ 0. Rearranging, we get E = e − Bias.
Now for the inverse square root: x^(−1/2) = M^(−1/2) × 2^(−E/2). The new exponent field is:
e' = Bias − E/2 = 3/2 Bias − e/2
With bit fiddling, we can get the value e/2 from e by shifting, and 3/2 Bias is just a constant.
Moreover, the mantissa M is stored as 1.0 + x with x < 1, and we can approximate M^(−1/2) as 1 − x/2. Again, the fact that only x is stored in binary means that we get the division by two by simple bit shifting.
Now we look at x^(−2): this is equal to M^(−2) × 2^(−2E), and we are looking for an exponent field:
e' = Bias − 2E = 3 Bias − 2e
Again, 3 Bias is just a constant, and you can get 2e from e by bit shifting. As for the mantissa, you can approximate (1 + x)^(−2) by 1 − 2x, and so the problem reduces to obtaining 2x from x. (A rough sketch of the resulting bit trick is given below.)
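Here is a rough, untuned sketch of where that derivation leads (the constant 0xBE800000 is just 3·Bias shifted into the exponent field, with no tuning, and the function name is mine); the crude estimate is then refined with Newton-Raphson steps on the reciprocal of x*x:
#include <stdint.h>
#include <string.h>

/* Untuned bit-trick estimate of 1/x^2 for a positive, not-too-large float. */
static float approx_inv_sq(float x)
{
    float xx = x * x;
    uint32_t i;
    memcpy(&i, &x, sizeof i);      /* reinterpret the float's bits            */
    i = 0xBE800000u - 2u * i;      /* e' = 3*Bias - 2*e; the mantissa bits
                                      come along for the ride                 */
    float y;
    memcpy(&y, &i, sizeof y);      /* crude estimate of 1/x^2                 */
    y = y * (2.0f - xx * y);       /* Newton-Raphson step: y <- y * (2-xx*y)  */
    y = y * (2.0f - xx * y);       /* second step for a few more bits         */
    return y;
}
As with the inverse square root trick, nudging the constant (the way 0x5f3759df does for 1/sqrt(x)) should reduce the initial error further.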
Note that Carmack's magic floating point fiddling doesn't actually compute the result right away: rather, it produces a remarkably accurate estimate, which is used as the starting point for a traditional, iterative computation. But because the estimate is so good, you only need a few rounds of subsequent iteration to get an acceptable result.