
Why is decimal more precise than double if it has a shorter range? C#

I'm reading the C# in a Nutshell book and it shows a table comparing the numeric types.

I'm having a hard time understanding the table. It says that double takes 64 bits of space and ranges from 10^-324 to 10^308. Decimal takes 128 bits of space, but it also says that it ranges from 10^-28 to 10^28. So what I'm understanding here is that decimal takes more space but provides a shorter range? That doesn't make much sense in my head, since everyone agrees that decimal should be used when precision is required. Also, when doing a calculation like (1/3)*3, the desired result is 1, but only float and double give me 1; decimal gives me 0.9999... So why is decimal more precise? I don't really understand.

asked Nov 01 '18 by Teler



2 Answers

what I'm understanding here is that decimal takes more space but provides a shorter range?

Correct. It provides higher precision and smaller range. Plainly if you have a limited number of bits, you can increase precision only by decreasing range!
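
A quick way to see that trade-off (my own illustration, not part of the answer; the class name is arbitrary) is to print the largest value each type can hold:

    using System;

    class RangeDemo
    {
        static void Main()
        {
            // double spends bits on a binary exponent, which buys enormous range.
            Console.WriteLine(double.MaxValue);   // roughly 1.8 x 10^308
            // decimal spends its bits on a 96-bit integer mantissa instead,
            // so its range tops out near 7.9 x 10^28.
            Console.WriteLine(decimal.MaxValue);  // 79228162514264337593543950335
        }
    }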

everyone agrees that decimal should be used when precision is required

Since that statement is false -- in particular, I do not agree with it -- any conclusion you draw from it is not sound.

The purpose of using decimal is not higher precision. It is smaller representation error. Higher precision is one way to achieve smaller representation error, but decimal does not achieve its smaller representation error by being higher precision. It achieves its smaller representation error by exactly representing decimal fractions.

Decimal is for those scenarios where the representation error of a decimal fraction must be zero, such as a financial computation.
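
A small sketch of that point (my example, not from the answer): adding 0.1 ten times drifts away from 1 as a double, because 0.1 has no exact binary representation, but stays exact as a decimal.

    using System;

    class Accumulate
    {
        static void Main()
        {
            double d = 0.0;
            decimal m = 0.0m;
            for (int i = 0; i < 10; i++)
            {
                d += 0.1;   // each 0.1 is already slightly off in binary
                m += 0.1m;  // each 0.1m is represented exactly
            }
            Console.WriteLine(d == 1.0);  // False: the binary rounding errors add up
            Console.WriteLine(m == 1.0m); // True: zero representation error
        }
    }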

Also when doing a calculation like (1/3)*3, the desired result is 1, but only float and double give me 1

You got lucky. There are lots of fractions where the representation error of that computation is non-zero for both floats and doubles.

Let's do a quick check to see how many there are. We'll just make a million rationals and see:

    // For every pair (x, y) of small whole numbers, check whether x/y,
    // computed as a double, survives the round trip back to x when
    // multiplied by y again.
    var q = from x in Enumerable.Range(1, 1000)
            from y in Enumerable.Range(1, 1000)
            where ((double)x) / y * y != x
            select x + " " + y;
    Console.WriteLine(q.Count()); // 101791

Over 10% of all small-number rationals are represented as doubles with sufficiently large representation error that they do not turn back into whole numbers when multiplied by their denominator!

If your desire is to do exact arithmetic on arbitrary rationals then neither double nor decimal are the appropriate type to use. Use a big-rational library if you need to exactly represent rationals.
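
For illustration, here is a minimal sketch of what exact rational arithmetic can look like on top of System.Numerics.BigInteger; the Rational type below is hypothetical, written just for this example, and not something shipped in the BCL (a real big-rational library would do far more):

    using System;
    using System.Numerics;

    // Hypothetical illustration type: an exact numerator/denominator pair,
    // always stored in lowest terms with the sign kept in the numerator.
    readonly struct Rational
    {
        public BigInteger Numerator { get; }
        public BigInteger Denominator { get; }

        public Rational(BigInteger numerator, BigInteger denominator)
        {
            if (denominator.IsZero) throw new DivideByZeroException();
            if (denominator.Sign < 0) { numerator = -numerator; denominator = -denominator; }
            BigInteger gcd = BigInteger.GreatestCommonDivisor(numerator, denominator);
            Numerator = numerator / gcd;
            Denominator = denominator / gcd;
        }

        public static Rational operator *(Rational a, Rational b) =>
            new Rational(a.Numerator * b.Numerator, a.Denominator * b.Denominator);

        public override string ToString() => $"{Numerator}/{Denominator}";
    }

    class Program
    {
        static void Main()
        {
            var oneThird = new Rational(1, 3);
            var three = new Rational(3, 1);
            Console.WriteLine(oneThird * three); // 1/1 -- exact, unlike double or decimal
        }
    }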

why is decimal more precise?

Decimal is more precise than double because it has more bits of precision.

But again, precision is not actually that relevant. What is relevant is that decimal has smaller representation error than double for many common fractions.

It has smaller representation error than double for representing fractions with a small power of ten in the denominator because it was designed specifically to have zero representation error for all fractions with a small power of ten in the denominator.

That's why it is called "decimal", because it represents fractions with powers of ten. It represents the decimal system, which is the system we commonly use for arithmetic.
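
You can inspect that power-of-ten representation directly with decimal.GetBits, which exposes the value as a 96-bit integer plus a power-of-ten scale (a small demo of mine, not part of the answer):

    using System;

    class DecimalBits
    {
        static void Main()
        {
            // 1.23m is stored as the integer 123 with a scale of 2, i.e. 123 x 10^-2.
            int[] parts = decimal.GetBits(1.23m);
            Console.WriteLine(parts[0]);                // 123 (low 32 bits of the 96-bit integer)
            Console.WriteLine((parts[3] >> 16) & 0xFF); // 2   (the power-of-ten scale)
        }
    }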

Double, in contrast, was explicitly not designed to have small representation error. Double was designed to have the range, precision, representation error and performance that is appropriate for physics computations.

There is no bias towards exact decimal quantities in physics. There is such a bias in finance. Use decimals for finance. Use doubles for physics.

answered Sep 24 '22 by Eric Lippert


Decimal does not use an exponent to increase the range: it stores all of its significant digits (about 28-29). Double uses a binary exponent but keeps only about 15-16 significant digits; the rest are thrown away. The exponent increases the range, but not the accuracy.
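
A quick illustration of those digit counts (my own example; the exact digits printed can vary slightly between runtimes):

    using System;

    class Digits
    {
        static void Main()
        {
            // double keeps roughly 15-16 significant digits...
            Console.WriteLine(1.0 / 3);   // 0.3333333333333333 (about 16 threes)
            // ...while decimal keeps roughly 28-29.
            Console.WriteLine(1.0m / 3);  // 0.3333333333333333333333333333
        }
    }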

answered Sep 21 '22 by Poul Bak