
-0.1.ToString("0") is "-0" in .NET Core and "0" in .NET Framework [duplicate]

Here is a sample piece of code with outputs from .NET Core 2.2 and 3.1. It shows different computational results for a basic floating point expression a^b.

In this example we calculate 1.9 to the power of 3. Earlier .NET Framework versions yielded the correct result, but .NET Core 3.0 and 3.1 yield a different one.

Is this an intended change, and how can we migrate financial calculation code to the new version with a guarantee that numerical calculations will still yield the same results? (It would be nice if .NET had a decimal Math library too; a rough sketch of what such a helper could look like follows the outputs below.)

    using System;

    public static class Program
    {
        public static void Main(string[] args)
        {
            Console.WriteLine("--- Decimal ---------");
            ComputeWithDecimalType();
            Console.WriteLine("--- Double ----------");
            ComputeWithDoubleType();

            Console.ReadLine();
        }

        // decimal arithmetic keeps the exact value 6.859
        private static void ComputeWithDecimalType()
        {
            decimal a = 1.9M;
            decimal b = 3M;
            decimal c = a * a * a;
            decimal d = (decimal) Math.Pow((double) a, (double) b);

            Console.WriteLine($"a * a * a                        = {c}");
            Console.WriteLine($"Math.Pow((double) a, (double) b) = {d}");
        }

        // double arithmetic shows the discrepancy between runtimes
        private static void ComputeWithDoubleType()
        {
            double a = 1.9;
            double b = 3;
            double c = a * a * a;
            double d = Math.Pow(a, b);

            Console.WriteLine($"a * a * a      = {c}");
            Console.WriteLine($"Math.Pow(a, b) = {d}");
        }
    }

.NET Core 2.2

--- Decimal ---------

a * a * a                        = 6.859
Math.Pow((double) a, (double) b) = 6.859

--- Double ----------

a * a * a      = 6.859
Math.Pow(a, b) = 6.859

.NET Core 3.1

--- Decimal ---------

a * a * a                        = 6.859
Math.Pow((double) a, (double) b) = 6.859

--- Double ----------

a * a * a      = 6.858999999999999
Math.Pow(a, b) = 6.858999999999999
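
If the exponents involved are whole numbers, as in this example, a naive decimal helper along the following lines would avoid going through double at all. This is only a rough sketch; the class and method names are made up for illustration:

    using System;

    public static class DecimalMathSketch
    {
        // Hypothetical helper: repeated multiplication in decimal.
        // Only valid for non-negative whole-number exponents.
        public static decimal Pow(decimal value, int exponent)
        {
            if (exponent < 0)
                throw new ArgumentOutOfRangeException(nameof(exponent));

            decimal result = 1M;
            for (int i = 0; i < exponent; i++)
            {
                result *= value;
            }
            return result;
        }
    }

DecimalMathSketch.Pow(1.9M, 3) returns exactly 6.859M, the same value as a * a * a in the decimal branch above.
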
asked Jan 17 '20 by Francois Malan


1 Answer

.NET Core 3.0 introduced a number of floating point parsing and formatting improvements for IEEE floating point compliance. One of them is compliance with the IEEE 754-2008 formatting requirements.

Before .NET Core 3.0, ToString() internally limited precision to "just" 15 digits, producing a string that couldn't be parsed back to the original value. The question's values differ by a single bit.

In both .NET 4.7 and .NET Core 3, the actual bytes remain the same. In both cases, calling

BitConverter.GetBytes(a * a * a)

produces

85, 14, 45, 178, 157, 111, 27, 64

On the other hand, BitConverter.GetBytes(6.859) produces:

86, 14, 45, 178, 157, 111, 27, 64

Even in .NET Core 3, parsing "6.859" produces the second byte sequence:

BitConverter.GetBytes(double.Parse("6.859"))

This is a single bit difference. The old behavior produced a string that couldn't be parsed back to the original value.
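
To see the single-bit difference directly, the raw bit patterns can be compared with BitConverter.DoubleToInt64Bits. This is just a sketch of the checks above and should behave the same on either runtime:

    using System;

    public static class BitCheck
    {
        public static void Main()
        {
            double computed = 1.9 * 1.9 * 1.9;      // what the question's code computes
            double parsed = double.Parse("6.859");  // the closest double to the decimal literal

            long computedBits = BitConverter.DoubleToInt64Bits(computed);
            long parsedBits = BitConverter.DoubleToInt64Bits(parsed);

            Console.WriteLine(computedBits.ToString("X")); // ends in ...0E55 (bytes 85, 14, ... above)
            Console.WriteLine(parsedBits.ToString("X"));   // ends in ...0E56 (bytes 86, 14, ... above)
            Console.WriteLine(parsedBits - computedBits);  // 1: one unit in the last place apart
        }
    }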

The difference is explained by this change:

ToString(), ToString("G"), and ToString("R") will now return the shortest roundtrippable string. This ensures that users end up with something that just works by default.

That's why we always need to specify a precision when dealing with floating point numbers. There were improvements in this case too:

For the "G" format specifier that takes a precision (e.g. G3), the precision specifier is now always respected. For double with precisions less than 15 (inclusive) and for float with precisions less than 6 (inclusive) this means you get the same string as before. For precisions greater than that, you will get up to that many significant digits

Using ToString("G15") produces 6.859 while ToString("G16") produces 6.858999999999999, which has 16 fractional digits.

That's a reminder that we always need to specify a precision when working with floating point numbers, whether we're comparing or formatting them.
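
For the comparison side, one common approach is an explicit tolerance instead of ==. The helper below is only a sketch, with an arbitrarily chosen default tolerance:

    using System;

    public static class CompareSketch
    {
        // Hypothetical helper: treats two doubles as equal when they are
        // within an absolute tolerance chosen by the caller.
        public static bool NearlyEqual(double x, double y, double tolerance = 1e-12)
        {
            return Math.Abs(x - y) <= tolerance;
        }
    }

With this, CompareSketch.NearlyEqual(1.9 * 1.9 * 1.9, 6.859) is true even though 1.9 * 1.9 * 1.9 == 6.859 is false on .NET Core 3.x and .NET Framework alike.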

answered Oct 02 '22 by Panagiotis Kanavos