
Inaccuracy of decimal in .NET

Yesterday during debugging something strange happened to me and I can't really explain it:

[Screenshot: Decimal calculation]

[Screenshot: Decimal calculations with brackets]

So maybe I am not seeing the obvious here or I misunderstood something about decimals in .NET but shouldn't the results be the same?

asked Aug 25 '15 07:08 by 10rotator01


People also ask

Can I use decimal in C#?

In C#, the Decimal struct is used to represent a decimal floating-point number. The range of decimal values is -79,228,162,514,264,337,593,543,950,335 to +79,228,162,514,264,337,593,543,950,335.
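For reference, those bounds are exposed as constants on the type. A quick sketch (the class name is just for the example):

    using System;

    class DecimalRange
    {
        static void Main()
        {
            // The decimal struct exposes its bounds as constants.
            Console.WriteLine(decimal.MinValue); // -79228162514264337593543950335
            Console.WriteLine(decimal.MaxValue); //  79228162514264337593543950335
        }
    }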

Should I use decimal or double C#?

Use double for non-integer math where the most precise answer isn't necessary. Use decimal for non-integer math where precision is needed (e.g. money and currency). Use int by default for any integer-based operations that can use that type, as it will be more performant than short or long.
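A minimal sketch of why money code favours decimal, assuming nothing beyond the standard double and decimal types (the loop just adds a tenth ten times with each type):

    using System;

    class MoneyDrift
    {
        static void Main()
        {
            double d = 0.0;
            decimal m = 0.0m;

            // Add 0.1 ten times with each type.
            for (int i = 0; i < 10; i++)
            {
                d += 0.1;
                m += 0.1m;
            }

            Console.WriteLine(d);         // roughly 0.9999999999999999 - binary 0.1 is inexact
            Console.WriteLine(m);         // 1.0 - decimal 0.1 is exact
            Console.WriteLine(d == 1.0);  // False
            Console.WriteLine(m == 1.0m); // True
        }
    }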

How does .NET store decimal data?

Variables of the Decimal type are stored internally as integers in 16 bytes and are scaled by a power of 10. The scaling power determines the number of decimal digits to the right of the decimal point, and it's an integer value from 0 to 28.
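You can peek at that layout with decimal.GetBits, which returns four ints: the first three hold the 96-bit unscaled integer, the fourth packs the sign and the power-of-10 scale. A rough sketch (the class name and value are just for illustration):

    using System;

    class DecimalLayout
    {
        static void Main()
        {
            decimal price = 1.50m;

            // [0..2] hold the 96-bit integer, [3] holds the sign bit and the scale.
            int[] bits = decimal.GetBits(price);

            int scale = (bits[3] >> 16) & 0xFF;           // power-of-10 scaling factor (0..28)
            bool negative = (bits[3] & int.MinValue) != 0;

            Console.WriteLine(bits[0]);  // 150  - the unscaled integer (low 32 bits)
            Console.WriteLine(scale);    // 2    - so the value is 150 / 10^2 = 1.50
            Console.WriteLine(negative); // False
        }
    }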


1 Answer

decimal is not a magical "do all the maths for me" type. It's still a floating-point number - the main difference from float is that it's a decimal floating-point number rather than a binary one. So you can easily represent 0.3 exactly as a decimal (it's impossible as a finite binary fraction), but you don't have infinite precision.
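A minimal sketch of both points, using nothing beyond the built-in double and decimal types:

    using System;

    class Representation
    {
        static void Main()
        {
            // 0.1 + 0.2 is not exactly 0.3 in binary floating point...
            Console.WriteLine(0.1 + 0.2 == 0.3);    // False
            // ...but it is in decimal floating point, because these are exact base-10 fractions.
            Console.WriteLine(0.1m + 0.2m == 0.3m); // True

            // Precision is still finite: 1/3 can't be represented exactly in base 10 either.
            Console.WriteLine(1m / 3m);             // 0.3333333333333333333333333333
        }
    }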

This makes it work much closer to how a human would do the same calculations, but you still have to imagine someone doing each operation individually. It's specifically designed for financial calculations, where you don't manipulate expressions the way you do in maths - you simply go step by step, rounding each result according to pretty specific rules.
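A sketch of that step-by-step style - the price, quantity and tax rate here are made up purely for illustration:

    using System;

    class InvoiceLine
    {
        static void Main()
        {
            decimal unitPrice = 19.99m; // hypothetical values, just for the example
            decimal quantity  = 3m;
            decimal taxRate   = 0.07m;

            // Financial style: round each intermediate result to cents before using it further.
            decimal net   = Math.Round(unitPrice * quantity, 2, MidpointRounding.ToEven); // 59.97
            decimal tax   = Math.Round(net * taxRate, 2, MidpointRounding.ToEven);        // 4.20
            decimal gross = net + tax;                                                    // 64.17

            Console.WriteLine($"{net} + {tax} = {gross}");
        }
    }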

In fact, for many cases, decimal might work much worse than float (or better yet, double). This is because decimal doesn't do any automatic rounding at all. Doing the same calculation with double gives you 22 as expected, because double effectively assumes that the tiny difference doesn't matter - in decimal, it stays visible - and that's one of the important points about decimal. You can emulate double's behaviour by inserting manual Math.Round calls, of course, but it doesn't make much sense.
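Since the original screenshots aren't reproduced above, here's a stand-in calculation (divide by 3, then multiply back) that shows the same effect:

    using System;

    class DivideThenMultiply
    {
        static void Main()
        {
            // Stand-in for the calculation in the question (the original screenshots aren't shown here).
            double viaDouble = 1.0 / 3.0 * 3.0;
            decimal viaDecimal = 1.0m / 3.0m * 3.0m;

            Console.WriteLine(viaDouble);  // 1 - double's rounding lands back on the expected value
            Console.WriteLine(viaDecimal); // 0.9999999999999999999999999999 - decimal keeps the error visible

            // Reordering (or bracketing) so the exact operation happens first removes the error:
            Console.WriteLine(1.0m * 3.0m / 3.0m == 1m);        // True
            // Or emulate double's "close enough" behaviour with a manual round:
            Console.WriteLine(Math.Round(viaDecimal, 10) == 1m); // True
        }
    }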

answered Sep 25 '22 19:09 by Luaan