Decimal vs Double Speed

I write financial applications where I constantly battle the decision to use double versus decimal.

All of my math works on numbers that have no more than 5 decimal places and are no larger than ~100,000. I have a feeling that all of these can be represented as doubles without rounding error anyway, but I have never been sure.

I would go ahead and make the switch from decimals to doubles for the obvious speed advantage, except that at the end of the day I still use the ToString method to transmit prices to exchanges, and I need to make sure it always outputs the number I expect (89.99 instead of 89.99000000001).

Questions:

  1. Is the speed advantage really as large as naive tests suggest? (~100 times)
  2. Is there a way to guarantee the output from ToString to be what I want? Is this assured by the fact that my number is always representable?

UPDATE: I have to process ~10 billion price updates before my app can run. I have implemented this with decimal right now for the obvious protective reasons, but it takes ~3 hours just to start up; doubles would dramatically reduce that start-up time. Is there a safe way to do it with doubles?
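For illustration (assuming C#/.NET, since decimal and ToString suggest it), here is a minimal sketch of the concern: repeated double arithmetic drifts away from the exact decimal total, but rounding to the known 5 decimal places before formatting recovers the intended string as long as the accumulated error stays far below 0.000005. The tick size and loop count are made up for the example.

```csharp
using System;
using System.Globalization;

class TickAccumulationSketch
{
    static void Main()
    {
        // Add 8,999 ticks of 0.01; the exact total is 89.99, but 0.01 has no
        // exact binary representation, so the double typically drifts slightly.
        double sum = 0.0;
        for (int i = 0; i < 8999; i++)
            sum += 0.01;

        // G17 prints enough digits to expose the drift (usually not exactly 89.99).
        Console.WriteLine(sum.ToString("G17", CultureInfo.InvariantCulture));

        // Prices never have more than 5 decimal places, so rounding to 5 places
        // before formatting recovers the intended wire string.
        string wire = Math.Round(sum, 5).ToString("0.#####", CultureInfo.InvariantCulture);
        Console.WriteLine(wire); // 89.99
    }
}
```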

asked Nov 30 '08 by Superman

People also ask

Should I use double or decimal?

Use double for non-integer math where the most precise answer isn't necessary. Use decimal for non-integer math where precision is needed (e.g. money and currency). Use int by default for any integer-based operations that can use that type, as it will be more performant than short or long.
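The classic illustration of that advice, sketched in C#: sums that are exact in decimal are only approximate in binary floating point.

```csharp
using System;

class PrecisionSketch
{
    static void Main()
    {
        // 0.1 and 0.2 have no exact binary representation, so double drifts;
        // decimal stores base-10 digits, so the same sum is exact.
        double d = 0.1 + 0.2;
        decimal m = 0.1m + 0.2m;

        Console.WriteLine(d == 0.3);           // False
        Console.WriteLine(d.ToString("G17"));  // 0.30000000000000004
        Console.WriteLine(m == 0.3m);          // True
        Console.WriteLine(m);                  // 0.3
    }
}
```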

What is the difference between decimal and double?

Double uses 64 bits to represent data. Decimal uses 128 bits to represent data.

Is decimal slower than float?

Decimal is used in financial applications that require a high degree of accuracy and need to avoid rounding errors, whereas float is used for scientific numbers and for better performance. Decimal arithmetic is slower than float and double.

What is the difference between float double and decimal?

A float has 7 decimal digits of precision and occupies 32 bits. A double is a 64-bit IEEE 754 double-precision floating-point number: 1 bit for the sign, 11 bits for the exponent, and 52 bits for the significand. A double has about 15 decimal digits of precision and occupies a total of 64 bits.
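For concreteness, a short C# sketch that extracts those three fields from a double via BitConverter.DoubleToInt64Bits (the example value is arbitrary):

```csharp
using System;

class DoubleLayoutSketch
{
    static void Main()
    {
        // Reinterpret the 64 bits of a double: 1 sign bit, 11 exponent bits,
        // 52 significand (fraction) bits.
        double value = 89.99;
        long bits = BitConverter.DoubleToInt64Bits(value);

        int sign = (int)((bits >> 63) & 0x1);
        int biasedExponent = (int)((bits >> 52) & 0x7FF);   // exponent bias is 1023
        long significand = bits & 0xFFFFFFFFFFFFFL;         // low 52 bits

        Console.WriteLine($"sign={sign} exponent={biasedExponent - 1023} significand=0x{significand:X13}");
    }
}
```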


1 Answer

  1. Floating point arithmetic will almost always be significantly faster because it is supported directly by the hardware. So far almost no widely used hardware supports decimal arithmetic (although this is changing, see comments).
  2. Financial applications should always use decimal numbers; the number of horror stories stemming from using floating point in financial applications is endless, and you should be able to find many such examples with a Google search.
  3. While decimal arithmetic may be significantly slower than floating point arithmetic, unless you are spending a significant amount of time processing decimal data, the impact on your program is likely to be negligible. As always, do the appropriate profiling (see the sketch after this list) before you start worrying about the difference.
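As a starting point for that profiling, here is a rough micro-benchmark sketch in C# using Stopwatch; the loop body and iteration count are arbitrary choices for illustration, and a serious measurement would use BenchmarkDotNet or, better, the real price-update workload.

```csharp
using System;
using System.Diagnostics;

class DecimalVsDoubleBenchmark
{
    const int Iterations = 10_000_000;

    static void Main()
    {
        // Warm-up pass so JIT compilation does not skew the first timing.
        SumDouble();
        SumDecimal();

        var sw = Stopwatch.StartNew();
        double d = SumDouble();
        sw.Stop();
        Console.WriteLine($"double : {sw.ElapsedMilliseconds,5} ms (result {d})");

        sw.Restart();
        decimal m = SumDecimal();
        sw.Stop();
        Console.WriteLine($"decimal: {sw.ElapsedMilliseconds,5} ms (result {m})");
    }

    static double SumDouble()
    {
        double acc = 0.0;
        for (int i = 1; i <= Iterations; i++)
            acc += 1.0 / i;        // hardware floating-point divide and add
        return acc;
    }

    static decimal SumDecimal()
    {
        decimal acc = 0.0m;
        for (int i = 1; i <= Iterations; i++)
            acc += 1.0m / i;       // software (library) decimal divide and add
        return acc;
    }
}
```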
answered Oct 13 '22 by Robert Gamble