 

Wasn't the double type supposed to have 15 digits of precision in C#?

I was testing this code from Brainteasers:

        double d1 = 1.000001;

        double d2 = 0.000001;

        Console.WriteLine((d1 - d2) == 1.0);

And the result is "False". When I change the data type:

        decimal d1 = 1.000001M;

        decimal d2 = 0.000001M;

        decimal d3 = d1-d2;

        Console.WriteLine(d3 == 1);

The program writes the correct answer: "True".

This example only uses six digits after the decimal point. What happened to the 15 digits of precision?

asked Sep 08 '09 by Broken_Window

2 Answers

This has nothing to do with precision - it has to do with representational rounding errors.

System.Decimal is capable of representing large floating point numbers with a significantly reduced risk of incurring any rounding errors like the one you are seeing. System.Single and System.Double are not capable of this and will round these numbers off and create issues like the one you are seeing in your example.

System.Decimal uses a scaling factor to hold the position of the decimal place thus allowing for exact representation of the given floating-point number, whereas System.Single and System.Double only approximate your value as best they can.
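
To make the difference visible, here is a small illustrative snippet (a sketch added here, not part of the original answer); formatting a double with "G17" round-trips its exact stored value, so it exposes the binary approximation:

double d1 = 1.000001;
double d2 = 0.000001;

Console.WriteLine(d2.ToString("G17"));        // e.g. 9.9999999999999995E-07 - not exactly one millionth
Console.WriteLine((d1 - d2).ToString("G17")); // e.g. 0.99999999999999989   - one ulp below 1.0
Console.WriteLine((d1 - d2) == 1.0);          // False

decimal e1 = 1.000001M;
decimal e2 = 0.000001M;

Console.WriteLine(e1 - e2);        // 1.000000 - the base-10 digits are stored exactly
Console.WriteLine((e1 - e2) == 1); // True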

For more information, please see System.Double:

Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately that number approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. The precision of a floating-point number has several consequences:

  • Two floating-point numbers that appear equal for a particular precision might not compare equal because their least significant digits are different.

  • A mathematical or comparison operation that uses a floating-point number might not yield the same result if a decimal number is used because the floating-point number might not exactly approximate the decimal number.
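
As a quick illustration of the first bullet (a sketch added here, not part of the quoted documentation or the original answer): two doubles can look identical when printed at 15 significant digits yet still compare unequal, because their last binary digits differ.

double a = 0.1 + 0.2;
double b = 0.3;

Console.WriteLine(a.ToString("G15")); // 0.3 - both look the same at 15 digits
Console.WriteLine(b.ToString("G15")); // 0.3
Console.WriteLine(a == b);            // False
Console.WriteLine(a.ToString("G17")); // 0.30000000000000004
Console.WriteLine(b.ToString("G17")); // 0.29999999999999999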

answered by Andrew Hare


Generally, the way to check floating-point values for equality is to check for near-equality, i.e., check whether their difference is no larger than a very small value (called epsilon) for that data type. For example:

if (Math.Abs(d1 - d2) <= Double.Epsilon) ...

This tests whether d1 and d2 are represented by the same bit pattern, give or take the least significant bit.

Correction (Added 2 Mar 2015)

Upon further examination, the code should be more like this:

// Assumes that d1 and d2 are not both zero
if (Math.Abs(d1 - d2) / Math.Max(Math.Abs(d1), Math.Abs(d2)) <= Double.Epsilon) ...

In other words, take the absolute difference between d1 and d2, divide it by the larger of their absolute values, and then compare that relative difference to Epsilon.
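
For a reusable form of the same scaled comparison, here is a hedged sketch; the helper name and the 1e-9 tolerance are illustrative assumptions rather than something from this answer (Double.Epsilon itself is the smallest positive double, roughly 5E-324, so an explicit tolerance is a common alternative):

// Hedged sketch of a relative-tolerance comparison.
// NearlyEqual and the 1e-9 default are illustrative choices, not from the answer.
static bool NearlyEqual(double a, double b, double relativeTolerance = 1e-9)
{
    if (a == b)
        return true; // covers exact matches, including both values being zero

    double scale = Math.Max(Math.Abs(a), Math.Abs(b));
    return Math.Abs(a - b) <= relativeTolerance * scale;
}

// Example: NearlyEqual(1.000001 - 0.000001, 1.0) returns true.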

References
• http://msdn.microsoft.com/en-us/library/system.double.epsilon.aspx
• http://msdn.microsoft.com/en-us/library/system.double.aspx#Precision

answered by David R Tribble