 

When should I use double instead of decimal?



I think you've summarised the advantages quite well. You are, however, missing one point. The decimal type is only more accurate at representing base-10 numbers (e.g. those used in currency/financial calculations). In general, the double type is going to offer at least as great precision (someone correct me if I'm wrong) and definitely greater speed for arbitrary real numbers. The simple conclusion is: when considering which to use, always use double unless you need the base-10 accuracy that decimal offers.
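
To illustrate the base-10 point, here is a rough sketch (the exact digits printed can vary slightly by runtime): 0.1 has no exact binary representation, so double stores only an approximation, while decimal stores it exactly.

double d = 0.1;
decimal m = 0.1M;
Console.WriteLine(d.ToString("G17"));  // roughly 0.10000000000000001
Console.WriteLine(m);                  // 0.1 exactly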

Edit:

Regarding your additional question about the decrease in accuracy of floating-point numbers after operations, this is a slightly more subtle issue. Indeed, precision (I use the term interchangeably with accuracy here) will steadily decrease as each operation is performed. This is due to two reasons:

  1. the fact that certain numbers (most obviously decimals) can't be truly represented in floating point form
  2. rounding errors occur, just as if you were doing the calculation by hand. Whether these errors are significant enough to warrant much thought, however, depends greatly on the context (how many operations you're performing).

In all cases, if you want to compare two floating-point numbers that should in theory be equivalent (but were arrived at using different calculations), you need to allow a certain degree of tolerance (how much varies, but is typically very small).
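
A minimal sketch of such a tolerance check (the epsilon here is arbitrary; pick one suited to the magnitude of the values you're comparing, or use a relative tolerance for values that vary widely in scale):

static bool NearlyEqual(double a, double b, double epsilon)
{
    return Math.Abs(a - b) < epsilon;
}

// The classic example: 0.1 + 0.2 is not exactly 0.3 in binary floating point.
Console.WriteLine(0.1 + 0.2 == 0.3);                   // False
Console.WriteLine(NearlyEqual(0.1 + 0.2, 0.3, 1e-9));  // True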

For a more detailed overview of the particular cases where inaccuracies can be introduced, see the Accuracy section of the Wikipedia article. Finally, if you want a seriously in-depth (and mathematical) discussion of floating-point numbers/operations at machine level, try reading the oft-quoted article What Every Computer Scientist Should Know About Floating-Point Arithmetic.


You seem spot on with the benefits of using a floating-point type. I tend to design for decimals in all cases, and rely on a profiler to let me know if operations on decimal are causing bottlenecks or slow-downs. In those cases, I will "down cast" to double or float, but only do it internally, and carefully try to manage precision loss by limiting the number of significant digits in the mathematical operation being performed.

In general, if your value is transient (not reused), you're safe to use a floating-point type. The real problems with floating-point types arise in the following three scenarios:

  1. You are aggregating floating point values (in which case the precision errors compound)
  2. You build values based on the floating point value (for example in a recursive algorithm)
  3. You are doing math that requires a very large number of significant digits (for example, 123456789.1 * .000000000000000987654321; see the sketch below)
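
A rough illustration of that third scenario, using the same two numbers: their exact product needs 19 significant digits, which is more than double (15-16) can hold, while decimal (28-29) can represent it exactly.

double doubleProduct = 123456789.1 * .000000000000000987654321;
decimal decimalProduct = 123456789.1M * .000000000000000987654321M;

Console.WriteLine(doubleProduct);   // accurate to roughly 15-16 significant digits only
Console.WriteLine(decimalProduct);  // should print 0.0000001219326312114007011 (the exact product)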

EDIT

According to the reference documentation on C# decimals:

The decimal keyword denotes a 128-bit data type. Compared to floating-point types, the decimal type has a greater precision and a smaller range, which makes it suitable for financial and monetary calculations.

So to clarify my above statement:

I tend to design for decimals in all cases, and rely on a profiler to let me know if operations on decimal are causing bottlenecks or slow-downs.

I have only ever worked in industries where decimals are favorable. If you're working on physics or graphics engines, it's probably much more beneficial to design for a floating-point type (float or double).

Decimal is not infinitely precise (it is impossible to represent infinite precision for non-integral values in a primitive data type), but it is far more precise than double (a quick demonstration follows the list):

  • decimal = 28-29 significant digits
  • double = 15-16 significant digits
  • float = 7 significant digits
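
A quick way to see those digit counts in practice (a sketch; the exact formatting can vary slightly by runtime):

Console.WriteLine((1f / 3f).ToString("G9"));   // float:   ~7 accurate digits, e.g. 0.333333343
Console.WriteLine((1d / 3d).ToString("G17"));  // double:  ~15-16 accurate digits, e.g. 0.33333333333333331
Console.WriteLine(1m / 3m);                    // decimal: 0.3333333333333333333333333333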

EDIT 2

In response to Konrad Rudolph's comment, item #1 (above) is definitely correct. Aggregation of imprecision does indeed compound. See the code below for an example:

// Accumulate the same float constant one million times in float, double and decimal
// to show how rounding error compounds differently in each type.
private const float THREE_FIFTHS = 3f / 5f;
private const int ONE_MILLION = 1000000;

public static void Main(string[] args)
{
    Console.WriteLine("Three Fifths: {0}", THREE_FIFTHS.ToString("F10"));
    float asSingle = 0f;
    double asDouble = 0d;
    decimal asDecimal = 0M;

    for (int i = 0; i < ONE_MILLION; i++)
    {
        asSingle += THREE_FIFTHS;
        asDouble += THREE_FIFTHS;
        asDecimal += (decimal) THREE_FIFTHS;
    }
    // Print the expected total, then each accumulated result.
    Console.WriteLine("Six Hundred Thousand: {0:F10}", THREE_FIFTHS * ONE_MILLION);
    Console.WriteLine("Single: {0}", asSingle.ToString("F10"));
    Console.WriteLine("Double: {0}", asDouble.ToString("F10"));
    Console.WriteLine("Decimal: {0}", asDecimal.ToString("F10"));
    Console.ReadLine();
}

This outputs the following:

Three Fifths: 0.6000000000
Six Hundred Thousand: 600000.0000000000
Single: 599093.4000000000
Double: 599999.9999886850
Decimal: 600000.0000000000

As you can see, even though we are adding from the same source constant, the double result is less precise (although it will probably round correctly), and the float result is far less precise, to the point where it has been reduced to only two significant digits.


Use decimal for base 10 values, e.g. financial calculations, as others have suggested.

But double is generally more accurate for arbitrary calculated values.

For example, if you want to calculate the weight of each line in a portfolio, use double, as the results will come closer to adding up to 100%.

In the following example, doubleResult is closer to 1 than decimalResult:

// Add one third + one third + one third with decimal
decimal decimalValue = 1M / 3M;
decimal decimalResult = decimalValue + decimalValue + decimalValue;
// Add one third + one third + one third with double
double doubleValue = 1D / 3D;
double doubleResult = doubleValue + doubleValue + doubleValue;
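
Printing the two results shows the difference (in this case the double sum happens to round back to exactly 1, while the decimal sum falls just short):

Console.WriteLine(decimalResult);  // 0.9999999999999999999999999999
Console.WriteLine(doubleResult);   // 1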

So again taking the example of a portfolio:

  • The market value of each line in the portfolio is a monetary value and would probably be best represented as decimal.

  • The weight of each line in the portfolio (= Market Value / SUM(Market Value)) is usually better represented as double (see the sketch below).
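
A rough sketch of that split (the market values here are made up for illustration):

decimal[] marketValues = { 1234.56M, 789.01M, 555.55M };  // money -> decimal

decimal totalValue = 0M;
foreach (decimal mv in marketValues)
    totalValue += mv;

double weightSum = 0d;
foreach (decimal mv in marketValues)
    weightSum += (double)(mv / totalValue);               // each weight -> double

Console.WriteLine(weightSum);  // very close to 1 (though perhaps not exactly)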


Use a double or a float when you don't need precision. For example, in a platformer game I wrote, I used a float to store the player velocities. Obviously I don't need super precision here because I eventually round to an int for drawing on the screen.


In some accounting contexts, consider the possibility of using integral types instead, or in conjunction. For example, let's say that the rules you operate under require every calculation result to be carried forward with at least 6 decimal places, and the final result to be rounded to the nearest penny.

A calculation of 1/6th of $100 yields $16.66666666666666..., so the value carried forward in a worksheet will be $16.666667. Both double and decimal should yield that result accurately to 6 decimal places. However, we can avoid any cumulative error by carrying the result forward as the integer 16666667. Each subsequent calculation can be made with the same precision and carried forward similarly. Continuing the example, I calculate Texas sales tax on that amount (16666667 * .0825 = 1375000, after rounding). Adding the two (it's a short worksheet): 16666667 + 1375000 = 18041667. Moving the decimal point back in gives us 18.041667, or $18.04.

While this short example wouldn't yield a cumulative error using double or decimal, it's fairly easy to show cases where simply calculating with double or decimal and carrying the result forward would accumulate significant error. If the rules you operate under require a limited number of decimal places, storing each value as an integer (multiplied by 10^(required # of decimal places)), and only dividing by 10^(required # of decimal places) at the end to get the actual value, will avoid any cumulative error.
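
A minimal sketch of that scaled-integer approach, using the $100 / 6 example and the 8.25% Texas sales tax rate from above. Everything is carried as a long in millionths of a dollar (6 decimal places):

const long Scale = 1_000_000;

// Round p / q to the nearest integer.
static long DivRound(long p, long q) => (p + q / 2) / q;

long oneSixth = DivRound(100 * Scale, 6);         // 16666667  (= $16.666667)
long tax      = DivRound(oneSixth * 825, 10_000); // 1375000   (= $1.375000)
long total    = oneSixth + tax;                   // 18041667  (= $18.041667)

// Only at the very end is the decimal point moved back and the result rounded to pennies.
Console.WriteLine(Math.Round((decimal)total / Scale, 2));  // 18.04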

In situations where fractions of pennies do not occur (for example, a vending machine), there is no reason to use non-integral types at all. Simply think of it as counting pennies, not dollars. I have seen code where every calculation involved only whole pennies, yet the use of double led to errors! Integer-only math removed the issue. So my unconventional answer is: when possible, forgo both double and decimal.