In this Code Review answer:
https://codereview.stackexchange.com/a/59405/11633
I found the following (nested quote ahead!):
Let me quote the wonderful book Numerical Recipes in C++ (but the advice applies in any language):
We assume that you know enough never to evaluate a polynomial this way:
p=c[0]+c[1]*x+c[2]*x*x+c[3]*x*x*x+c[4]*x*x*x*x;
or (even worse!),
p=c[0]+c[1]*x+c[2]*pow(x,2.0)+c[3]*pow(x,3.0)+c[4]*pow(x,4.0);
Come the (computer) revolution, all persons found guilty of such criminal behavior will be summarily executed, and their programs won’t be!
(You can find the page in your edition via the analytical index, under the entry "puns, particularly bad". I love this book.)
There are two reasons not to do that: accuracy and performance. The correct way to evaluate the polynomial is like this:
-t * (0.319381530 + t * (-0.356563782 + t * (1.781477937 + t * (-1.821255978 + 1.330274429 * t))))
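For concreteness, the nested expression above and the expanded term-by-term form describe the same polynomial (the quoted expression has no constant term); a quick sketch, written out only to confirm the two agree:

```cpp
#include <cassert>
#include <cmath>

// The quoted nested (Horner) form.
double nested(double t) {
    return -t * (0.319381530 + t * (-0.356563782
              + t * (1.781477937 + t * (-1.821255978
              + 1.330274429 * t))));
}

// The same polynomial written out term by term (the discouraged
// style); shown only to check that the two forms agree.
double expanded(double t) {
    return -(0.319381530 * t
           - 0.356563782 * t * t
           + 1.781477937 * t * t * t
           - 1.821255978 * t * t * t * t
           + 1.330274429 * t * t * t * t * t);
}
```

The two differ only in rounding: for moderate t the results agree to roughly machine precision.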
I can see the severe performance penalty of implementing it in any of the discouraged ways, but not the accuracy penalty. How is it bad for accuracy?
I found the book, but not this information anywhere around the quoted bit.
Each floating-point operation is only an approximation. This method uses fewer operations, so fewer rounding errors accumulate and the result is more accurate.
It has another advantage when you deal with very large or very small numbers. Assuming the coefficients are all of the same order of magnitude, the intermediate terms in the nested form stay of comparable size too. If you evaluate a degree-5 polynomial with coefficients around 1 at x = 0.1, the straightforward way ends up adding 0.1 to 10^-5, losing accuracy.
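To see the kind of loss being described, here is a deliberately exaggerated single-precision sketch (the constants are illustrative, not from the answer): when a term is far enough below the running sum, it is rounded away completely.

```cpp
// If two addends differ by more than the precision of the type
// (about 7 decimal digits for float), the smaller one is absorbed:
// the sum rounds back to the larger addend and the small term's
// contribution is lost entirely.
bool absorbed(float sum, float term) {
    return sum + term == sum;
}
```

For example, `absorbed(0.1f, 1e-9f)` is true: a term nine orders of magnitude below the sum contributes nothing, while a term of comparable size survives.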
By the way, this is called Horner's scheme.
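A generic Horner loop might look like this (a minimal sketch, not from the original answer; the convention that c[0] is the constant term is an assumption):

```cpp
#include <cstddef>
#include <vector>

// Horner's scheme: evaluate c[0] + c[1]*x + ... + c[n]*x^n
// using only n multiplications and n additions, folding in the
// coefficients from the highest order down.
double horner(const std::vector<double>& c, double x) {
    double p = 0.0;
    for (std::size_t i = c.size(); i-- > 0; )
        p = p * x + c[i];
    return p;
}
```

For instance, `horner({1.0, 2.0, 3.0}, 2.0)` evaluates 1 + 2x + 3x² at x = 2.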