When I use power functions like Math.Pow(double x, double y) in C# or the pow function from math.h in C++, do these run in constant time?
The reason I am asking is that I want to know whether a "precalculated" Bézier function of the form (1-t)^n * p0 + ... + t^n * pN could run in linear time, which could then be faster than an implementation of De Casteljau's algorithm taking the control points and t as parameters.
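To make the comparison concrete, here is roughly what I mean, as a sketch in C# (scalar control points for brevity; BernsteinEval and DeCasteljauEval are just names I made up for this example):

using System;

static class BezierSketch
{
    // Bernstein form: one pass over the control points, so O(n) terms,
    // which is linear overall *if* Math.Pow is (near) constant time.
    public static double BernsteinEval(double[] p, double t)
    {
        int n = p.Length - 1;
        double sum = 0.0;
        double binom = 1.0;                          // C(n, 0)
        for (int i = 0; i <= n; i++)
        {
            sum += binom * Math.Pow(1 - t, n - i) * Math.Pow(t, i) * p[i];
            binom = binom * (n - i) / (i + 1);       // C(n, i+1) from C(n, i)
        }
        return sum;
    }

    // De Casteljau: repeated linear interpolation, roughly n^2 / 2 lerps.
    public static double DeCasteljauEval(double[] p, double t)
    {
        double[] q = (double[])p.Clone();
        for (int k = 1; k < q.Length; k++)
            for (int i = 0; i < q.Length - k; i++)
                q[i] = (1 - t) * q[i] + t * q[i + 1];
        return q[0];
    }
}

So if Math.Pow is (near) constant time, BernsteinEval does O(n) work per value of t, while DeCasteljauEval does on the order of n^2 interpolations.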
I think that these methods use iterative processing to get the result, and only stop when the difference between the values of two successive iterations falls below a given error constant.
There are iterative methods that converge very quickly to the result of a power operation, so I think they are close to constant time.
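To illustrate the kind of scheme I mean (this is just a sketch, not how Math.Pow is actually implemented): a series that keeps adding terms until the change between two successive partial sums falls below a fixed error constant. Since pow(a, b) can be written as exp(b * ln(a)) for a > 0, exp is a reasonable stand-in here:

using System;

static class IterativePowSketch
{
    // Illustration only -- NOT the real Math.Pow. It just shows a scheme
    // that iterates until the change per iteration drops below a tolerance.
    public static double ExpBySeries(double x, double epsilon = 1e-15)
    {
        double term = 1.0;   // current Taylor term x^k / k!
        double sum = 1.0;    // partial sum so far (the k = 0 term)
        int k = 1;
        while (Math.Abs(term) > epsilon)
        {
            term *= x / k;   // next term from the previous one
            sum += term;     // the change per iteration is exactly this term
            k++;
        }
        // The number of iterations depends on x and epsilon, but for a fixed
        // epsilon and a bounded argument range it has a fixed maximum.
        return sum;
    }
}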
This question has a lot of great explanations: How is Math.Pow() implemented in .NET Framework?
EDIT
I have found a lot of good material to work with on http://math.stackexchange.com.
This one is very interesting, as it explains a way of calculating exponentiation in plain language:
Thoughts
I'm not a math genius, but as far as I can see, the time it takes does not depend much on the values you choose, but rather on the number of correct digits you want. In other words, it does depend on the arguments, but there is an upper bound.
Also, to support this theory, take a look at this algorithm (implemented by Sun): http://pastebin.com/LDjS5mAR. There are no loops, only ifs. I think that is because the people who implemented it chose a fixed precision they wanted, and then expanded all the iterations needed to guarantee that precision.
For instance, a loop with a fixed number of iterations can easily be expanded like this:
for (int it = 0; it < 5; it++)
    a *= a;
Is the same as:
a *= a; a *= a; a *= a; a *= a; a *= a;
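(Either way, that snippet just squares a five times, i.e. it computes a^32 with only five multiplications; with a fixed target precision an implementer can unroll everything like this and end up with straight-line, loop-free code.)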