I've been warned by numerous programmers not to use the square root function, and instead to raise numbers to the half power. My question is twofold:
What is the perceived or real performance benefit of doing this? Why would it be faster?
If it really is faster, why does the square root function even exist?
sqrt() is "slow" only relative to elementary operations, not slow in absolute terms. Fundamentally, the operation is more complex than an addition or a division.
On the growth rates of functions: any root function grows asymptotically faster than any power of the natural log function.
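In symbols (my notation, not the answerer's): for any fixed power $k > 0$ and any root degree $n > 0$,

$$\lim_{x \to \infty} \frac{(\ln x)^k}{x^{1/n}} = 0,$$

so $x^{1/n}$ eventually dominates $(\ln x)^k$ no matter how large $k$ is.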
The square root has a major use in the formula for the roots of a quadratic equation; quadratic fields and rings of quadratic integers, which are based on square roots, are important in algebra and have uses in geometry. Square roots frequently appear in mathematical formulas elsewhere, as well as in many physical laws.
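For instance, here is a minimal sketch of the quadratic formula built on Math.Sqrt (the SolveQuadratic helper and its names are illustrative, not from the original post):

using System;

class QuadraticDemo
{
    // Real roots of a*x^2 + b*x + c = 0, or null if the discriminant is negative.
    static (double, double)? SolveQuadratic(double a, double b, double c)
    {
        double discriminant = b * b - 4 * a * c;
        if (discriminant < 0) return null;      // no real roots
        double root = Math.Sqrt(discriminant);  // the square root in the formula
        return ((-b + root) / (2 * a), (-b - root) / (2 * a));
    }

    static void Main()
    {
        Console.WriteLine(SolveQuadratic(1, -5, 6)); // x^2 - 5x + 6 = 0 -> (3, 2)
    }
}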
It may be a bit hard to picture, but the square root is one of the most useful functions around. Square roots appear in physics equations of all kinds, and they're valuable in statistics too; statisticians use square roots all the time when analyzing the correlation between data points.
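As one concrete illustration (my example, assuming the usual sample standard deviation formula), the standard deviation that underlies correlation coefficients ends with a square root:

using System;
using System.Linq;

class StatsDemo
{
    // Sample standard deviation: the square root of the sample variance.
    static double StdDev(double[] data)
    {
        double mean = data.Average();
        double variance = data.Sum(x => (x - mean) * (x - mean)) / (data.Length - 1);
        return Math.Sqrt(variance);
    }

    static void Main()
    {
        Console.WriteLine(StdDev(new[] { 2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0 }));
    }
}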
I've performed a simple test:
Stopwatch sw = new Stopwatch(); // requires using System.Diagnostics;
sw.Start();
Double s = 0.0;
// compute 1e8 times either Sqrt(x) or its emulation as Pow(x, 0.5)
for (Double d = 0; d < 1e8; d += 1)
// s += Math.Sqrt(d);   // <- uncomment this line (and comment out the next) to test Sqrt
s += Math.Pow(d, 0.5);  // <- leave this line active to test Pow
sw.Stop();
Console.Out.Write(sw.ElapsedMilliseconds);
The (averaged) outcome at my workstation (x64) is:
Sqrt: 950 ms
Pow: 5500 ms
As you can see, the specialized Sqrt(x) is roughly 5.8 times faster than its emulation Pow(x, 0.5). So it's just one more legend (at least in C#) that Sqrt is so slow one should prefer the Pow substitution.
You would have to know something about how each function is implemented to answer the question.
A typical software implementation of the square root function uses Newton's method to calculate the square root iteratively. It converges quadratically, meaning the number of correct digits roughly doubles with each iteration. Nothing you hand-roll is likely to speed that up.
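Here is a minimal sketch of that iteration (my code, assuming a starting guess on the high side; real library or hardware implementations are far more sophisticated):

using System;

class NewtonSqrt
{
    // Newton's method for f(x) = x^2 - s: x_next = (x + s/x) / 2.
    // Quadratic convergence: correct digits roughly double each iteration.
    static double Sqrt(double s)
    {
        if (s < 0) return double.NaN;
        if (s == 0) return 0;
        double x = s > 1 ? s : 1.0;          // guess >= sqrt(s), so the iteration descends
        for (int i = 0; i < 1000; i++)       // cap; converges in far fewer steps
        {
            double next = 0.5 * (x + s / x); // one Newton step
            if (next >= x) break;            // stopped decreasing => converged
            x = next;
        }
        return x;
    }

    static void Main()
    {
        Console.WriteLine(Sqrt(2.0));      // ~1.4142135623730951
        Console.WriteLine(Math.Sqrt(2.0)); // reference
    }
}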
The other functions involved, exp() and ln(x), have implementations with their own convergence/complexity issues: pow(x, y) is typically computed as exp(y · ln(x)), and both exp and ln can be implemented as series sums, where a certain number of terms is required to maintain sufficient accuracy.
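To see why the emulation costs more, note that Pow(x, 0.5) then needs two transcendental evaluations instead of one root-finding loop. A sketch with an illustrative Taylor-series exp (this is not how Math.Exp is actually implemented):

using System;

class PowDemo
{
    // Illustrative series sum: e^z = 1 + z + z^2/2! + z^3/3! + ...
    static double ExpSeries(double z)
    {
        double sum = 1.0, term = 1.0;
        for (int n = 1; n <= 30; n++) // enough terms for the small |z| used here
        {
            term *= z / n;            // term is now z^n / n!
            sum += term;
        }
        return sum;
    }

    // pow(x, y) emulated as exp(y * ln x): two transcendental steps.
    static double Pow(double x, double y) => ExpSeries(y * Math.Log(x));

    static void Main()
    {
        Console.WriteLine(Pow(2.0, 0.5));      // ~1.4142135...
        Console.WriteLine(Math.Pow(2.0, 0.5)); // reference
    }
}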
All bets are off if those functions happen to be implemented in native code. Those might be faster than anything you'll write.
Knowing those details would let you make an informed decision. I wouldn't take it on faith just because those programmers "know" the answer.
Unless you're doing intensive numerical work, I'd say that the choice won't affect your overall program performance. It's micro-optimization that's best avoided, unless you're doing serious large-scale scientific programming.