Output of the following code:
var a = 0.1;
var count = 1;
while (a > 0)
{
    if (count == 323)
    {
        var isZeroA = (a * 0.1) == 0;
        var b = a * 0.1;
        var isZeroB = b == 0;
        Console.WriteLine("IsZeroA: {0}, IsZeroB: {1}", isZeroA, isZeroB);
    }
    a *= 0.1;
    ++count;
}
is

IsZeroA: False, IsZeroB: True
Strangely, when I put a breakpoint after the if (count == 323) line while debugging and evaluate the expression (a * 0.1) == 0 in the Visual Studio Watch window, it reports that the expression is true.

Does anyone know why the expression a * 0.1 is not zero, but when it is assigned to the variable b, b is zero?
This does not happen with my particular hardware and CLR version. Edit: Oh yes, it happens to me too, if I use "x86" (or "Any CPU" with "Prefer 32-bit" enabled) and "Debug" mode.

The reason why things like this may sometimes happen is that the system may hold the value in an 80-bit CPU register, where it has "extra" precision. But when it is put into a real 64-bit Double, the value changes.
If you change the line to:

var isZeroA = (double)(a * 0.1) == 0;

then formally you change nothing (a cast from double to double!), but in reality that may force the run-time to convert from 80-bit to 64-bit. Does it change the output for you? Edit: This "no-op" cast changes something for me! For more on such cast-to-self tricks with floating-point types in C#, see the thread Casting a result to float in method returning float changes result.
Note that Double arithmetic is not deterministic (i.e. the same calculation can give different results when repeated) because of these 64-bit/80-bit issues. See the thread Is floating-point math consistent in C#? Can it be?
The following simpler program also shows the issue in cases where it is present (at least on my system):
double j = 9.88131291682493E-324;
Console.WriteLine(j * 0.1 == 0); // "False"
double k = j * 0.1;
Console.WriteLine(k == 0); // "True"
Console.WriteLine((double)(j * 0.1) == 0); // "True", double-to-double cast!
You can even start with j = 1E-323 in that code; it parses to the same Double value.
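To see why those two literals behave identically, here is a small standalone sketch (my addition, not part of the original answer) that uses BitConverter.DoubleToInt64Bits to dump the raw IEEE 754 bit patterns. Both decimal strings round to the same subnormal Double, exactly two units of double.Epsilon above zero:

```csharp
using System;

class SubnormalDemo
{
    static void Main()
    {
        // Both literals round to the same subnormal double:
        // exactly 2 * double.Epsilon, i.e. raw bit pattern 2.
        double j = 9.88131291682493E-324;

        Console.WriteLine(BitConverter.DoubleToInt64Bits(j));              // 2
        Console.WriteLine(BitConverter.DoubleToInt64Bits(1E-323));         // 2, the same Double
        Console.WriteLine(BitConverter.DoubleToInt64Bits(double.Epsilon)); // 1, smallest positive double

        // In strict 64-bit arithmetic, j * 0.1 is below double.Epsilon / 2,
        // so it underflows to exactly zero once stored in a double variable.
        double k = j * 0.1;
        Console.WriteLine(k == 0);
    }
}
```

This also shows why underflow is the crux: the true product is a positive number smaller than half of the smallest representable double, so rounding it to 64 bits yields zero, while an 80-bit register can still hold it as non-zero.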
Reference: The often-cited document What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg appears on the internet with an added section, Differences Among IEEE 754 Implementations, by an anonymous author (who is not Goldberg). That added section explains the issue you see here in technical detail.
Also see x86 Extended Precision Format (Wikipedia page section) about this 80-bit format.