
Is floating point arithmetic stable? [duplicate]

I know that floating point numbers have limited precision, and that the digits beyond that precision are not reliable.

But what if the equation used to calculate the number is the same? Can I assume the outcome would be the same too?

For example, say we have two floating point numbers x and y. Can we assume that the result of x/y on machine 1 is exactly the same as the result on machine 2, i.e. that an == comparison of the two results would return true?
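A minimal sketch of what I mean (the operand values are arbitrary, just for illustration): two machines agree exactly only if they print the same bit pattern here.

    using System;

    class Program
    {
        static void Main()
        {
            // Arbitrary example operands; the question is whether another
            // machine running this same program computes the same quotient.
            double x = 0.1;
            double y = 0.3;

            double result = x / y;

            // Print the exact 64-bit pattern of the result; two machines
            // produce "the same" result only if these hex strings match.
            Console.WriteLine(BitConverter.DoubleToInt64Bits(result).ToString("X16"));
        }
    }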

asked Jan 22 '18 by Steve


3 Answers

But what if the equation used to calculate the number is the same? Can I assume the outcome would be the same too?

No, not necessarily.

In particular, in some situations the JIT is permitted to use a more accurate intermediate representation - e.g. 80 bits when your original data is 64 bits - whereas in other situations it won't. That can result in seeing different results when any of the following is true:

  • You have slightly different code, e.g. using a local variable instead of a field, which can change whether the value is stored in a register or not; see the sketch after this list. (That's one relatively obvious example; there are other much more subtle ones which can affect things, such as the existence of a try block in the method...)
  • You are executing on a different processor (I used to observe differences between AMD and Intel; there can be differences between different CPUs from the same manufacturer too)
  • You are executing with different optimization levels (e.g. under a debugger or not)
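As a hedged illustration of the first bullet, here is a small program where the same sum, kept inline versus stored in a field, is not guaranteed to compare equal against a third value. The actual output depends on the CPU, the JIT (e.g. the legacy 32-bit JIT vs. RyuJIT), and the optimization level, which is exactly the point:

    using System;

    class Program
    {
        // A store to a static field must truncate to a true 32-bit float.
        static float stored;

        static void Main()
        {
            float a = 0.1f, b = 0.2f, c = 0.3f;

            stored = a + b;

            // The inline expression below may be evaluated at higher
            // intermediate precision (e.g. in 80-bit x87 registers under
            // the legacy 32-bit JIT), while 'stored' has been truncated
            // to 32 bits, so these two lines can print different values
            // on some CPU/JIT/optimization combinations.
            Console.WriteLine(a + b == c);
            Console.WriteLine(stored == c);
        }
    }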

From the C# 5 specification section 4.1.6:

Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an "extended" or "long double" floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type. Only at excessive cost in performance can such hardware architectures be made to perform floating-point operations with less precision, and rather than require an implementation to forfeit both performance and precision, C# allows a higher precision type to be used for all floating-point operations. Other than delivering more precise results, this rarely has any measurable effects. However, in expressions of the form x * y / z, where the multiplication produces a result that is outside the double range, but the subsequent division brings the temporary result back into the double range, the fact that the expression is evaluated in a higher range format may cause a finite result to be produced instead of an infinity.
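A small sketch of the spec's x * y / z scenario (the values are chosen only to force the intermediate overflow): depending on whether the intermediate product is held in an extended-precision register, this may print 1E+308 or Infinity, and the spec permits either.

    using System;

    class Program
    {
        static void Main()
        {
            // Locals, not constants, so the compiler can't fold this away.
            double x = 1e308, y = 10.0, z = 10.0;

            // Mathematically this is 1e308, but the intermediate x * y
            // (1e309) overflows the double range. With extended-precision
            // intermediates the division can bring the value back into
            // range; with strict 64-bit evaluation the product is already
            // Infinity and stays Infinity.
            Console.WriteLine(x * y / z);
        }
    }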

answered Nov 19 '22 by Jon Skeet


Jon's answer is of course correct. None of the answers, however, has said how you can ensure that floating point arithmetic is done with exactly the amount of precision guaranteed by the specification and no more.

C# automatically truncates any float back to its canonical 32 or 64 bit representation under the following circumstances:

  • You put in a redundant explicit cast: x + y might have x and y as higher-precision numbers that are then added. But (double)((double)x+(double)y) ensures that everything is truncated to 64 bit precision before and after the math happens; a sketch follows this list.
  • Any store to an instance field of a class, static field, array element, or dereferenced pointer always truncates. (Stores to locals, parameters and temporaries are not guaranteed to truncate; they can be enregistered. Fields of a struct might be on the short-term pool which can also be enregistered.)
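A minimal sketch of the cast trick from the first bullet (the operand values are arbitrary): comparing the raw bit patterns shows whether the loose and the explicitly truncated computations agreed on the machine you ran it on.

    using System;

    class Program
    {
        static void Main()
        {
            double x = 0.1, y = 0.2;

            // A plain store to a local is allowed to stay at higher
            // intermediate precision.
            double loose = x + y;

            // Redundant casts force truncation to 64 bits before and
            // after the addition, per the rules above.
            double strict = (double)((double)x + (double)y);

            // On most modern JITs these print the same pattern, but only
            // 'strict' is guaranteed to be a true 64-bit result.
            Console.WriteLine(BitConverter.DoubleToInt64Bits(loose).ToString("X16"));
            Console.WriteLine(BitConverter.DoubleToInt64Bits(strict).ToString("X16"));
        }
    }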

These guarantees are not made by the language specification, but implementations should respect these rules. The Microsoft implementations of C# and the CLR do.

It is a pain to write the code to ensure that floating point arithmetic is predictable in C# but it can be done. Note that doing so will likely slow down your arithmetic.

Complaints about this awful situation should be addressed to Intel, not Microsoft; they're the ones who designed chips that make doing predictable arithmetic slower.

Also, note that this is a frequently asked question. You might consider closing this as a duplicate of:

Why differs floating-point precision in C# when separated by parantheses and when separated by statements?

Why does this floating-point calculation give different results on different machines?

Casting a result to float in method returning float changes result

(.1f+.2f==.3f) != (.1f+.2f).Equals(.3f) Why?

Coercing floating-point to be deterministic in .NET?

C# XNA Visual Studio: Difference between "release" and "debug" modes?

C# - Inconsistent math operation result on 32-bit and 64-bit

Rounding Error in C#: Different results on different PCs

Strange compiler behavior with float literals vs float variables

answered Nov 19 '22 by Eric Lippert


No, it is not. The outcome of the calculation can differ per CPU, because the implementation of floating point arithmetic can differ per CPU manufacturer or CPU design. I even remember a bug in the floating point arithmetic in some Intel processors, which screwed up our calculations.

And then there are differences in how the code is evaluated by the JIT compiler.

answered Nov 19 '22 by Patrick Hofman