Here is a sample piece of code with output from .NET Core 2.2 and 3.1. It shows different computational results for a basic floating point expression, a^b.
In this example we calculate 1.9 to the power of 3. Previous .NET frameworks yielded the expected result, but .NET Core 3.0 and 3.1 yield a different one.
Is this an intended change, and how can we migrate financial calculation code to the new version with a guarantee that numerical calculations will still yield the same results? (It would be nice if .NET had a decimal Math library too.)
using System;

public static class Program
{
    public static void Main(string[] args)
    {
        Console.WriteLine("--- Decimal ---------");
        ComputeWithDecimalType();
        Console.WriteLine("--- Double ----------");
        ComputeWithDoubleType();
        Console.ReadLine();
    }

    private static void ComputeWithDecimalType()
    {
        decimal a = 1.9M;
        decimal b = 3M;
        decimal c = a * a * a;
        decimal d = (decimal) Math.Pow((double) a, (double) b);

        Console.WriteLine($"a * a * a = {c}");
        Console.WriteLine($"Math.Pow((double) a, (double) b) = {d}");
    }

    private static void ComputeWithDoubleType()
    {
        double a = 1.9;
        double b = 3;
        double c = a * a * a;
        double d = Math.Pow(a, b);

        Console.WriteLine($"a * a * a = {c}");
        Console.WriteLine($"Math.Pow(a, b) = {d}");
    }
}
.NET Core 2.2
--- Decimal ---------
a * a * a = 6.859
Math.Pow((double) a, (double) b) = 6.859
--- Double ----------
a * a * a = 6.859
Math.Pow(a, b) = 6.859
.NET Core 3.1
--- Decimal ---------
a * a * a = 6.859
Math.Pow((double) a, (double) b) = 6.859
--- Double ----------
a * a * a = 6.858999999999999
Math.Pow(a, b) = 6.858999999999999
.NET Core 3.0 introduced a lot of floating point parsing and formatting improvements for IEEE floating point compliance. One of them is IEEE 754-2008 formatting compliance.
Before .NET Core 3.0, ToString() internally limited precision to "just" 15 digits, producing a string that couldn't be parsed back to the original value. The question's values differ by a single bit.
In both .NET 4.7 and .NET Core 3.0, the actual bytes remain the same. In both cases, calling
BitConverter.GetBytes(d * d * d)
produces
85, 14, 45, 178, 157, 111, 27, 64
On the other hand, BitConverter.GetBytes(6.859) produces:
86, 14, 45, 178, 157, 111, 27, 64
Even in .NET Core 3.0, parsing "6.859" produces the second byte sequence:
BitConverter.GetBytes(double.Parse("6.859"))
The difference is a single bit. The old behavior produced a string that couldn't be parsed back to the original value.
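That single-bit gap can also be confirmed by comparing the raw bit patterns with BitConverter.DoubleToInt64Bits instead of eyeballing byte arrays (a minimal sketch of the same check):

```csharp
using System;

public static class BitDiffDemo
{
    public static void Main()
    {
        double a = 1.9;
        long computed = BitConverter.DoubleToInt64Bits(a * a * a);
        long literal  = BitConverter.DoubleToInt64Bits(6.859);

        // The parsed literal sits exactly one unit in the last place
        // above the computed product, matching the byte dumps above
        // (first byte 85 vs 86).
        Console.WriteLine(literal - computed); // 1
    }
}
```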
The difference is explained by this change :
ToString(), ToString("G"), and ToString("R") will now return the shortest roundtrippable string. This ensures that users end up with something that just works by default.
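In other words, on .NET Core 3.0 and later the default ToString() output always parses back to the exact same value, which the old 15-digit default could not guarantee. A small sketch of that round trip:

```csharp
using System;

public static class RoundTripDemo
{
    public static void Main()
    {
        double a = 1.9;
        double cube = a * a * a;

        // Shortest roundtrippable string: parsing it back yields
        // exactly the same double, bit for bit.
        string s = cube.ToString();
        Console.WriteLine(double.Parse(s) == cube); // True
    }
}
```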
That's why we always need to specify a precision when dealing with floating point numbers. There were improvements in this case too :
For the "G" format specifier that takes a precision (e.g. G3), the precision specifier is now always respected. For double with precisions less than 15 (inclusive) and for float with precisions less than 6 (inclusive) this means you get the same string as before. For precisions greater than that, you will get up to that many significant digits
Using ToString("G15") produces 6.859, while ToString("G16") produces 6.858999999999999, which has 16 significant digits.
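The same behavior can be reproduced directly (output as observed on .NET Core 3.0+ with an invariant-style culture):

```csharp
using System;

public static class FormatDemo
{
    public static void Main()
    {
        double a = 1.9;
        double cube = a * a * a;

        // 15 significant digits hides the trailing noise...
        Console.WriteLine(cube.ToString("G15")); // 6.859
        // ...while 16 significant digits reveals it.
        Console.WriteLine(cube.ToString("G16")); // 6.858999999999999
    }
}
```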
That's a reminder that we always need to specify a precision when working with floating point numbers, whether comparing or formatting.
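For comparisons specifically, the robust options are an explicit tolerance for double, or decimal for base-10 financial amounts. A sketch (the 1e-12 tolerance is an arbitrary choice for this example, not a universal constant):

```csharp
using System;

public static class ComparisonDemo
{
    public static void Main()
    {
        double a = 1.9;
        double computed = a * a * a;

        // Exact equality fails: computed is one ULP away from the literal.
        Console.WriteLine(computed == 6.859);                  // False

        // Compare with an explicit tolerance instead.
        Console.WriteLine(Math.Abs(computed - 6.859) < 1e-12); // True

        // decimal stores base-10 values exactly, so the product is exact.
        decimal m = 1.9m;
        Console.WriteLine(m * m * m == 6.859m);                // True
    }
}
```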