 

Floating point inconsistency between expression and assigned object

This surprised me: the same arithmetic gives different results depending on how it's executed:

> 0.1f+0.2f==0.3f
False

> var z = 0.3f;
> 0.1f+0.2f==z
True

> 0.1f+0.2f==(dynamic)0.3f
True

(Tested in Linqpad)

What's going on?


Edit: I understand why floating point arithmetic is imprecise, but not why it would be inconsistent.

The venerable C reliably confirms that 0.1f + 0.2f == 0.3f holds for single-precision floats, but 0.1 + 0.2 == 0.3 does not hold for double precision.

Colonel Panic asked Nov 08 '12



1 Answer

I strongly suspect you may find that you get different results running this code with and without the debugger, and in release configuration vs in debug configuration.

In the first version, you're comparing two expressions. The C# language allows those expressions to be evaluated in higher precision arithmetic than the source types.

In the second version, you're assigning the addition result to a local variable. In some scenarios, that will force the result to be truncated down to 32 bits - leading to a different result. In other scenarios, the CLR or C# compiler will realize that it can optimize away the local variable.

From section 4.1.6 of the C# 4 spec:

Floating point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an "extended" or "long double" floating point type with greater range and precision than the double type, and implicitly perform all floating point operations with the higher precision type. Only at excessive cost in performance can such hardware architectures be made to perform floating point operations with less precision. Rather than require an implementation to forfeit both performance and precision, C# allows a higher precision type to be used for all floating point operations. Other than delivering more precise results, this rarely has any measurable effects.

EDIT: I haven't tried compiling this, but in the comments, Chris says the first form isn't being evaluated at execution time at all. The above can still apply (I've tweaked my wording slightly) - it's just shifted the evaluation time of a constant from execution time to compile-time. So long as it behaves the same way as a valid evaluation, that seems okay to me - so the compiler's own constant expression evaluation can use higher-precision arithmetic too.

Jon Skeet answered Oct 23 '22