
How to decide what to use - double or decimal? [duplicate]

Tags:

c#

.net

Possible Duplicate:
decimal vs double! - Which one should I use and when?

I'm using the double type for prices in my trading software. I've noticed that sometimes there are odd errors. They occur when the price has 4 digits after the decimal point, like 2.1234.

When I send "2.1234" from my program, the order appears on the market at a price of "2.1235".

I don't use decimal because I don't need "extreme" precision. I don't need to distinguish, for example, "2.00000000003" from "2.00000000002". I need at most 6 digits after the decimal point.

The question is: where is the line? When should I use decimal?

Should I use decimal for any financial operations, even if I need just one digit after the decimal point? (1.1, 1.2, etc.)

I know decimal is pretty slow, so I would prefer to use double unless decimal is absolutely required.
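The "2.1234 becomes 2.1235" symptom is exactly the kind of drift binary floating point produces: 2.1234 has no exact binary representation, so the nearest double is stored, and later arithmetic or rounding can shift the last digit. A minimal sketch (my own snippet, not from the original post) that makes the difference visible:

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        // Sum 0.1 ten times. 0.1 is not exact in binary floating point,
        // so the error accumulates and the total is not exactly 1.0.
        double dSum = 0.0;
        for (int i = 0; i < 10; i++) dSum += 0.1;
        Console.WriteLine(dSum == 1.0);   // False

        // decimal stores base-10 digits exactly, so the same sum is exact.
        decimal mSum = 0.0m;
        for (int i = 0; i < 10; i++) mSum += 0.1m;
        Console.WriteLine(mSum == 1.0m);  // True

        // The price from the question, printed with full round-trip
        // precision: the double is only the nearest representable value,
        // so trailing noise digits appear after "2.1234".
        Console.WriteLine(2.1234.ToString("G17"));
        Console.WriteLine(2.1234m);       // exactly "2.1234"
    }
}
```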

Asked Jun 14 '11 by Oleg Vazhnev

People also ask

Should I use double or decimal?

Use double for non-integer math where the most precise answer isn't necessary. Use decimal for non-integer math where precision is needed (e.g. money and currency). Use int by default for any integer-based operations that can use that type, as it will be more performant than short or long.

What is the difference between decimal and double?

Double uses 64 bits to represent data. Decimal uses 128 bits to represent data.

What is the difference between float double and decimal?

A float has 7 decimal digits of precision and occupies 32 bits. A double is a 64-bit IEEE 754 double-precision floating-point number: 1 bit for the sign, 11 bits for the exponent, and 52 bits for the value. A double has 15 decimal digits of precision and occupies a total of 64 bits.

Should I use double or float C#?

Use float or double? The precision of a floating-point value indicates how many significant digits the value can represent accurately. The precision of float is only six or seven decimal digits, while double variables have a precision of about 15 digits. Therefore it is safer to use double for most calculations.
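The precision figures quoted above can be checked directly. A hedged sketch (the digit counts in the comments are approximate) comparing one third in all three types:

```csharp
using System;

class DigitsDemo
{
    static void Main()
    {
        float f = 1.0f / 3.0f;    // ~7 significant digits
        double d = 1.0 / 3.0;     // ~15-16 significant digits
        decimal m = 1.0m / 3.0m;  // 28 significant digits

        Console.WriteLine(f);
        Console.WriteLine(d);
        Console.WriteLine(m);     // 0.3333333333333333333333333333

        // The float result, widened to double, already disagrees with
        // the double result at around the 8th digit.
        Console.WriteLine(Math.Abs(f - d) > 1e-9);  // True
    }
}
```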


2 Answers

Use decimal whenever you're dealing with quantities that you want to (and can) be represented exactly in base-10. That includes monetary values, because you want 2.1234 to be represented exactly as 2.1234.

Use double when you don't need an exact representation in base-10. This is usually good for handling measurements, because those are already approximations, not exact quantities.

Of course, if an exact base-10 representation is not important to you, other factors come into consideration, which may or may not matter depending on the specific situation:

  • double has a larger range (it can handle very large and very small magnitudes);
  • decimal has more precision (has more significant digits);
  • you may need to use double to interact with some older APIs that are not aware of decimal;
  • double is faster than decimal;
  • decimal has a larger memory footprint (128 bits vs 64).
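The range-versus-precision trade-off in the list above can be sketched like this (a standalone illustration, not from the answer; the specific constants are mine):

```csharp
using System;

class RangeDemo
{
    static void Main()
    {
        // double covers an enormous range: 1e300 is routine.
        double big = 1e300;
        Console.WriteLine(big * 10);  // 1e301, no problem

        // decimal tops out near 7.9e28 and overflows loudly instead
        // of silently losing accuracy.
        try
        {
            decimal overflow = decimal.MaxValue * 2m;
        }
        catch (OverflowException)
        {
            Console.WriteLine("decimal overflowed");
        }

        // Conversely, decimal keeps more significant digits: this
        // 17-digit integer is above 2^53, so double silently rounds it.
        double dv = 12345678901234567.0;
        decimal mv = 12345678901234567m;
        Console.WriteLine((decimal)dv == mv);  // False
    }
}
```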
Answered Oct 17 '22 by R. Martinho Fernandes


When accuracy is needed and important, use decimal.

When accuracy is not that important, then you can use double.

In your case, you should be using decimal, as it's a financial matter.
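Following this advice, price arithmetic in decimal stays exact, and any rounding becomes an explicit, predictable step rather than an accident of the representation. A hedged sketch (the variable names and the 4-digit tick size are my own assumptions):

```csharp
using System;

class PriceDemo
{
    static void Main()
    {
        // Prices stored as decimal stay exactly what was entered.
        decimal price = 2.1234m;
        decimal qty = 3m;
        decimal total = price * qty;
        Console.WriteLine(total);  // 6.3702, exactly

        // Rounding to a fixed number of digits is explicit; the
        // midpoint rule is chosen, not left to the binary format.
        decimal rounded = Math.Round(2.12345m, 4, MidpointRounding.AwayFromZero);
        Console.WriteLine(rounded);  // 2.1235
    }
}
```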

Answered Oct 17 '22 by Nawaz