 

Are there compilers that optimise floating point operations for accuracy (as opposed to speed)?

We know that compilers are getting better and better at optimising our code and making it run faster, but my question is: are there compilers that can optimise floating point operations to ensure greater accuracy?

For example, a basic rule is to perform multiplications before additions. This is because multiplication and division of floating point numbers introduce smaller inaccuracies than addition and subtraction, but can magnify inaccuracies already introduced by addition and subtraction, so in many cases the multiplications should be done first.

So a floating point operation like

y = x*(a + b); // faster but less accurate

Should be changed to

y = x*a + x*b; // slower but more accurate

Are there any compilers that will optimise for improved floating point accuracy at the expense of speed like I showed above? Or is the main concern of compilers speed, without looking at the accuracy of floating point operations?

Thanks

Update: The selected answer showed a very good example where this type of optimisation would not work, so it wouldn't be possible for the compiler to know beforehand which is the more accurate way to evaluate y. Thanks for the counterexample.

asked Jan 14 '10 by hhafez


2 Answers

Your premise is faulty. x*(a + b) is (in general) no less accurate than x*a + x*b. In fact, it will often be more accurate, because it performs only two floating point operations (and therefore incurs only two rounding errors), whereas the latter performs three.

If you know something about the expected distribution of values for x, a, and b a priori, then you could make an informed decision, but compilers almost never have access to that type of information.

That aside, what if the person writing the program actually meant x*(a+b) and specifically wanted the exact roundings produced by that particular sequence of operations? This sort of thing is actually pretty common in high-quality numerical algorithms.
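For instance, Kahan's compensated summation depends on each floating point operation being performed exactly as written. A minimal sketch in C (my own illustration, not from the original post):

#include <stddef.h>

/* Kahan compensated summation: c accumulates the low-order bits
   lost when adding each term into sum. A compiler that simplified
   (t - sum) - y algebraically to zero would silently destroy the
   compensation. */
double kahan_sum(const double *a, size_t n) {
    double sum = 0.0, c = 0.0;
    for (size_t i = 0; i < n; i++) {
        double y = a[i] - c;   /* apply the correction from the last step */
        double t = sum + y;    /* low-order bits of y are lost here */
        c = (t - sum) - y;     /* recover exactly what was lost */
        sum = t;
    }
    return sum;
}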

Better to do what the programmer wrote, not what you think he might have intended.

Edit -- An example to illustrate a case where the transformation you suggested results in a catastrophic loss of accuracy: suppose

x = 3.1415926535897931
a = 1.0e15
b = -(1.0e15 - 1.0)

Then, evaluating in double we get:

x*(a + b) = 3.1415926535897931

but

x*a + x*b = 3.0
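This is easy to reproduce; here is a small C program (my own sketch, assuming strict IEEE 754 double evaluation with no FMA contraction):

#include <stdio.h>

int main(void) {
    double x = 3.1415926535897931;
    double a = 1.0e15;
    double b = -(1.0e15 - 1.0);                   /* a + b is exactly 1.0 */

    printf("x*(a + b) = %.17g\n", x * (a + b));   /* 3.1415926535897931 */
    printf("x*a + x*b = %.17g\n", x * a + x * b); /* 3 */
    return 0;
}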
answered Oct 06 '22 by Stephen Canon


Compilers typically "optimize" for accuracy over speed, accuracy defined as exact implementation of the IEEE 754 standard. Whereas integer operations can be reordered in any way that doesn't cause overflow, FP operations need to be performed exactly as the programmer specifies. This may sacrifice numerical accuracy (ordinary C compilers are not equipped to optimize for that) but faithfully implements what the programmer asked for.
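For example, a conforming C compiler must not reassociate floating point additions, because addition of doubles is not associative. A small sketch of my own to illustrate:

#include <stdio.h>

int main(void) {
    double a = 1.0e16, b = -1.0e16, c = 1.0;
    /* The compiler must evaluate (a + b) + c as written; rewriting it
       as a + (b + c) changes the result, because b + c rounds to -1e16. */
    printf("%g\n", (a + b) + c);  /* prints 1 */
    printf("%g\n", a + (b + c));  /* prints 0 */
    return 0;
}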

A programmer who is sure he hasn't manually optimized for accuracy may enable compiler features like GCC's -funsafe-math-optimizations and -ffinite-math-only to possibly extract extra speed. But usually there isn't much gain.
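For example, with GCC one might compile a translation unit as follows (foo.c is a placeholder filename, and exact flag behavior varies by GCC version):

gcc -O2 -funsafe-math-optimizations -ffinite-math-only foo.c

This permits reassociation and other value-changing FP transformations that the default IEEE-conforming mode forbids.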

answered Oct 06 '22 by Potatoswatter