
In .NET, how do I choose between a Decimal and a Double?

We were discussing this the other day at work, and I wish there were a Stack Overflow question I could point people at, so here goes.

  • What is the difference between a Double and a Decimal?
  • When (in what cases) should you always use a Double?
  • When (in what cases) should you always use a Decimal?
  • What are the driving factors to consider in cases that don’t fall into one of the two camps above?

There are a lot of questions that overlap this question, but they tend to be asking what someone should do in a given case, not how to decide in the general case.

Asked Mar 30 '10 by Ian Ringrose

People also ask

Should I use decimal or double in C#?

Use double for non-integer math where the most precise answer isn't necessary. Use decimal for non-integer math where precision is needed (e.g. money and currency). Use int by default for any integer-based operations that can use that type, as it will be more performant than short or long.
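As a minimal sketch of that guideline (the class and values below are just placeholders): double is fine where a tiny representation error doesn't matter, decimal where the result must come out exact, and int for plain integer work.

```csharp
using System;

class ChooseType
{
    static void Main()
    {
        // Binary floating point can't represent 0.1 or 0.2 exactly...
        Console.WriteLine(0.1 + 0.2 == 0.3);    // False
        // ...but decimal stores base-10 digits, so this sum is exact.
        Console.WriteLine(0.1m + 0.2m == 0.3m); // True

        // Plain integer work defaults to int.
        int count = 42;
        Console.WriteLine(count * 2); // 84
    }
}
```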

What is the difference between decimal float and double in C#?

The Decimal, Double, and Float types differ in how they store values. Precision is the main difference: float is a single-precision (32-bit) floating point type, double is a double-precision (64-bit) floating point type, and decimal is a 128-bit floating point type.

What is the difference between decimal and double data type?

Double uses 64 bits to represent data. Decimal uses 128 bits to represent data.

Should I use float double or decimal?

Use float or double? The precision of a floating point value indicates how many significant digits it can hold. float offers only six or seven decimal digits of precision, while double variables have a precision of about 15 digits. Therefore it is safer to use double for most calculations.
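A small console sketch (hypothetical values; the exact printed text varies slightly by runtime) showing roughly how many significant digits survive in each type:

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        float f = 1.23456789012345678f;
        double d = 1.23456789012345678;
        decimal m = 1.2345678901234567890123456789m;

        Console.WriteLine(f); // ~7 significant digits survive (e.g. 1.2345679)
        Console.WriteLine(d); // ~15-16 digits (e.g. 1.2345678901234568)
        Console.WriteLine(m); // 1.2345678901234567890123456789 (28-29 digits)
    }
}
```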


1 Answer

I usually think about natural vs artificial quantities.

Natural quantities are things like weight, height and time. These will never be measured absolutely accurately, and there's rarely any idea of absolutely exact arithmetic on them: you shouldn't generally be adding up heights and then making sure that the result is exactly as expected. Use double for this sort of quantity. Doubles have a huge range, but limited precision; they're also extremely fast.
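As a rough sketch of that advice (the measurements and names below are made up): measured values go in double, and comparisons allow a tolerance rather than demanding exact equality.

```csharp
using System;

class NaturalQuantities
{
    static void Main()
    {
        double heightMetres = 1.83; // a measurement, inherently approximate
        double massKg = 74.2;

        double bmi = massKg / (heightMetres * heightMetres);

        // Don't demand exact equality from doubles; compare with a tolerance.
        bool aboutTwentyTwo = Math.Abs(bmi - 22.16) < 0.01;
        Console.WriteLine($"BMI = {bmi:F2}, roughly 22.16? {aboutTwentyTwo}");
    }
}
```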

The dominant artificial quantity is money. There is such a thing as "exactly $10.52", and if you add 48 cents to it you expect to have exactly $11. Use decimal for this sort of quantity. Justification: given that it's artificial to start with, the numbers involved are artificial too, designed to meet human needs - which means they're naturally expressed in base 10. Make the storage representation match the human representation. decimal doesn't have the range of double, but most artificial quantities don't need that extra range either. It's also slower than double, but I'd personally rather have a bank account which gave me the right answer slowly than a wrong answer quickly :)
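A short sketch of that money example (amounts are illustrative): with decimal the arithmetic is exact, and even the number of displayed decimal places is preserved.

```csharp
using System;

class MoneyDemo
{
    static void Main()
    {
        decimal balance = 10.52m;
        balance += 0.48m;
        Console.WriteLine(balance);        // 11.00 - exact, and the scale is kept
        Console.WriteLine(balance == 11m); // True

        // Summing a hundred one-cent payments is still exact.
        decimal total = 0m;
        for (int i = 0; i < 100; i++) total += 0.01m;
        Console.WriteLine(total == 1.00m); // True
    }
}
```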

For a bit more information, I have articles on .NET binary floating point types and the .NET decimal type. (Note that decimal is a floating point type too - but the "point" in question is a decimal point, not a binary point.)
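To illustrate that parenthetical point, decimal really is a floating point type with a base-10 exponent (its "scale"); a sketch using decimal.GetBits, which exposes the 96-bit integer and the scale:

```csharp
using System;

class DecimalIsFloatingPoint
{
    static void Main()
    {
        decimal value = 1.50m;
        int[] bits = decimal.GetBits(value);

        int mantissaLow = bits[0];          // low 32 bits of the 96-bit integer (150 here)
        int scale = (bits[3] >> 16) & 0xFF; // base-10 exponent: value = 150 * 10^-2

        Console.WriteLine($"mantissa (low word): {mantissaLow}"); // 150
        Console.WriteLine($"scale: {scale}");                     // 2
    }
}
```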

Answered Oct 30 '22 by Jon Skeet