Floating-point optimizations - guideline

The majority of scientific computing problems that we need to solve by implementing a particular algorithm in C/C++ demand accuracy much lower than double precision. For example, 1e-6 or 1e-7 accuracy covers 99% of the cases for ODE solvers or numerical integration. Even in the rare cases when we do need higher accuracy, the numerical method itself usually fails before we can dream of reaching an accuracy near double precision. Example: we can't expect 1e-16 accuracy from a simple Runge–Kutta method, even when solving a standard non-stiff ordinary differential equation, because of roundoff errors. In this case, demanding double precision is analogous to asking for a better approximation of the wrong answer.

Aggressive floating-point optimizations therefore seem like a win-win in most cases: they make your code faster (a lot faster!) without affecting the target accuracy of your particular problem. That said, it seems remarkably difficult to make sure that a particular implementation/code is stable against fp optimizations. A classical (and somewhat disturbing) example: GSL, the GNU Scientific Library, is not only the standard numerical library in the field but also a very well written one (I can't imagine myself doing a better job). However, GSL is not stable against fp optimizations. In fact, if you compile GSL with the Intel compiler, for example, its internal tests will fail unless you turn on the -fp-model strict flag, which turns off fp optimizations.
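To make the fragility concrete, here is a minimal sketch (my own illustration, nothing to do with GSL's actual test suite) of two checks whose observable results can change under value-unsafe optimizations such as -ffast-math:

    /* fragile.c -- compile with and without -ffast-math and compare. */
    #include <stdio.h>

    int main(void) {
        double big = 1e16, small = 1.0;
        /* Under strict IEEE 754, 1e16 + 1.0 rounds back to 1e16, so the
           expression below is exactly 0.0. A compiler allowed to
           reassociate may rewrite it as (big - big) + small = 1.0. */
        printf("(big + small) - big = %g\n", (big + small) - big);

        /* x != x is the classic portable NaN test. With -ffast-math
           (which implies -ffinite-math-only) the compiler may assume
           NaNs never occur and fold the test to 0. */
        double x = 0.0 / 0.0;
        printf("x != x: %d\n", x != x);
        return 0;
    }

A test suite full of assertions like these can pass under one set of flags and fail under another, even though the library code itself has not changed.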

Thus, my question is: are there general guidelines for writing code that is stable against aggressive floating-point optimizations? Are these guidelines language (compiler) specific? If so, what are the C/C++ (gcc/icc) best practices?

Note 1: This question is not asking what the fp optimization flags in gcc/icc are.

Note 2: This question is not asking about general guidelines for C/C++ optimization (like don't use virtual functions for small functions that are called a lot).

Note 3: This question is not asking for the list of the most standard fp optimizations (like x/x -> 1).

Note 4: I strongly believe this is NOT a subjective/off-topic question similar to the classical "The Coolest Server Names". If you disagree (because I am not providing a concrete example/code/problem), please flag it as community wiki. I am much more interested in the answer than in gaining a few status points (not that they are not important - you get the point!).

asked Oct 27 '13 by Vivian Miranda

1 Answer

Compiler makers justify the -ffast-math kind of optimizations with the assertion that these optimizations' influence on numerically stable algorithms is minimal.

Therefore, if you want to write code that is robust against these optimizations, a sufficient condition is to write only numerically stable code.

Now your question may be, "How do I write numerically stable code?". This is where your question may be a bit broad: there are entire books dedicated to the subject. The Wikipedia page I already linked to has a good example, and here is another good one. I cannot recommend a book in particular; this is not my area of expertise.
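To show what numerical stability buys you, here is a sketch of the classic quadratic-formula example (my own illustration; the coefficients are made up, and this is not code from the references above). The textbook formula destroys the small root through catastrophic cancellation, while an algebraically equivalent rearrangement computes it accurately:

    /* quadratic.c -- small root of x^2 + b*x + c with b^2 >> 4c.
       Compile with: cc quadratic.c -lm */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double b = 1e8, c = 1.0;
        double disc = sqrt(b * b - 4.0 * c);

        /* Textbook formula: -b + disc subtracts two nearly equal
           numbers and wipes out the significant digits. */
        double naive = (-b + disc) / 2.0;

        /* Stable rearrangement: compute the large-magnitude root
           without cancellation, then recover the small root from
           the product of the roots (x1 * x2 = c). */
        double q = -0.5 * (b + copysign(disc, b));
        double stable = c / q;

        printf("naive : %.17g\n", naive);   /* off by ~25% here */
        printf("stable: %.17g\n", stable);  /* ~ -1e-8, accurate */
        return 0;
    }

Both formulas are algebraically identical; the second simply avoids the subtraction that amplifies roundoff.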

Note 1: The desirability of numerical stability goes beyond compiler optimization. If you have a choice, write numerically stable code even if you do not plan to use -ffast-math-style optimizations. Numerically unstable code may produce wrong results even when compiled with strict IEEE 754 floating-point semantics.

Note 2: You cannot expect external libraries to work when compiled with -ffast-math-style flags. These libraries, written by floating-point experts, may need to play subtle tricks with the properties of IEEE 754 computations. -ffast-math optimizations can break this kind of trick, but the trick improves performance more than you could ever expect the compiler to, even if you let it: for floating-point computations, an expert with domain knowledge beats the compiler every time. One example amongst many is the triple-double implementation found in CRlibm; this code breaks if it is not compiled with strict IEEE 754 semantics. Another, more elementary algorithm that compiler optimizations break is Kahan summation: when compiled with unsafe optimizations, c = (t - sum) - y is optimized to c = 0. This, of course, defeats the purpose of the algorithm completely.
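For reference, here is a minimal sketch of Kahan summation in C (my own version of the textbook algorithm, not code from CRlibm), with a comment marking the step that unsafe optimizations destroy:

    /* kahan.c -- compensated summation; relies on exact IEEE 754
       rounding. Under -ffast-math the compiler may simplify
       c = (t - sum) - y to c = 0, silently degrading this to
       naive summation. */
    #include <stdio.h>

    double kahan_sum(const double *a, int n) {
        double sum = 0.0, c = 0.0;   /* c carries the lost low-order bits */
        for (int i = 0; i < n; i++) {
            double y = a[i] - c;     /* apply the previous correction */
            double t = sum + y;      /* low-order bits of y are lost here */
            c = (t - sum) - y;       /* recover exactly what was lost */
            sum = t;
        }
        return sum;
    }

    int main(void) {
        /* 1.0 followed by 1000 tiny values that naive summation drops:
           each 1e-16 is below half an ulp of 1.0, so naive addition
           never moves the total. */
        double data[1001];
        data[0] = 1.0;
        for (int i = 1; i <= 1000; i++) data[i] = 1e-16;

        double naive = 0.0;
        for (int i = 0; i <= 1000; i++) naive += data[i];

        printf("naive: %.17g\n", naive);                  /* stays 1.0 */
        printf("kahan: %.17g\n", kahan_sum(data, 1001));  /* ~1 + 1e-13 */
        return 0;
    }

Compiled with strict semantics (e.g. gcc -O2) the two results differ as the comments indicate; with -ffast-math the compensated version can collapse to the naive one.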

answered Oct 14 '22 by Pascal Cuoq