 

Use float or decimal for accounting application dollar amount?

We are rewriting our legacy accounting system in VB.NET and SQL Server. We brought in a new team of .NET/SQL programmers to do the rewrite. Most of the system is already complete, with the dollar amounts stored as floats. The legacy system's language, which I programmed in, did not have a float type, so I probably would have used a decimal.

What is your recommendation?

Should the float or decimal data type be used for dollar amounts?

What are some of the pros and cons for either?

One con mentioned in our daily scrum was that you have to be careful when a calculation returns an amount with more than two decimal positions; it sounds like you have to round the amount back to two decimal positions.
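
For illustration, here is the kind of calculation I mean, as a small VB.NET sketch; the numbers and the MidpointRounding choice are just my assumptions, not something we have decided on:

    Module RoundingSketch
        Sub Main()
            ' Made-up example: an interest calculation whose result has
            ' more than two decimal positions, rounded back to a money amount.
            Dim principal As Decimal = 1234.56D
            Dim rate As Decimal = 0.0375D

            Dim rawInterest As Decimal = principal * rate   ' 46.296000
            Dim interest As Decimal = Math.Round(rawInterest, 2, MidpointRounding.AwayFromZero)

            Console.WriteLine(rawInterest)   ' 46.296000
            Console.WriteLine(interest)      ' 46.30
        End Sub
    End Module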

Another con is that every displayed and printed amount has to have a format statement that shows two decimal positions. I noticed a few places where this was not done and the amounts did not look correct (e.g. 10.2 or 10.2546).
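
A sketch of what that formatting looks like in VB.NET (the amount is invented):

    Module FormattingSketch
        Sub Main()
            Dim amount As Decimal = 10.2546D

            ' Without a format, the raw value leaks onto the screen or report.
            Console.WriteLine(amount)                          ' 10.2546

            ' With an explicit two-decimal format it looks like money.
            Console.WriteLine(amount.ToString("N2"))           ' 10.25
            Console.WriteLine(String.Format("{0:C}", amount))  ' $10.25 (culture-dependent)
        End Sub
    End Module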

A pro is that the float approach takes up eight bytes on disk, whereas the decimal would take up nine bytes (decimal(12,2)).

asked Sep 15 '08 by Gerhard Weiss

People also ask

Should I use float or double for currency?

Float and Double are bad for the financial world (even for military use); never use them for monetary calculations. If precision is one of your requirements, use BigDecimal instead.

Why should you not use float for money?

When doing any kind of calculation with currency, accuracy is extremely important. And floating point numbers (floats and doubles) don't have an accurate enough representation to prevent rounding errors from accumulating when doing arithmetic with monetary values.

Is decimal better than float?

Use decimals when precision matters, such as with financial calculations. Decimals can suffer from their own precision issues, but generally, decimals are more precise than floats.

What is the difference between a decimal and a float?

Float stores an approximate value and decimal stores an exact value. In summary, exact values like money should use decimal, and approximate values like scientific measurements should use float. When multiplying a non-integer and dividing by that same number, decimals lose precision while floats do not.
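
A quick VB.NET sketch of that last point, taking the "non-integer" to be one third (an assumed example):

    Module DivideMultiplySketch
        Sub Main()
            ' Decimal: 1 / 3 is cut off after 28-29 digits, so * 3 comes back as 0.999...
            Console.WriteLine(1D / 3D * 3D)    ' 0.9999999999999999999999999999

            ' Double: the binary rounding errors happen to cancel out here.
            Console.WriteLine(1.0 / 3.0 * 3.0) ' 1
        End Sub
    End Module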


3 Answers

Should Float or Decimal data type be used for dollar amounts?

The answer is easy. Never floats. NEVER!

Floats were, according to IEEE 754, always binary; only the newer IEEE 754R standard defined decimal formats. Many fractional decimal parts can never be matched exactly by any binary representation.

Any binary fraction can be written as m/2^n (m, n positive integers), and any decimal fraction as m/(2^n * 5^n). Since binary lacks the prime factor 5, all binary numbers can be represented exactly as decimals, but not vice versa.

0.3 = 3/(2^1 * 5^1) = 3/10

In binary, 0.3 can only be bracketed by the nearest representable fractions at each precision:

    [0.25 , 0.5]   [0.25 , 0.375]   [0.25 , 0.3125]   [0.28125 , 0.3125]   ...
         1/4            1/8              1/16                1/32

So you end up with a number either higher or lower than the given decimal number. Always.
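
To see it concretely, a small VB.NET sketch; the "G17" round-trip format is just one way to expose the digits actually stored:

    Module BinaryFractionSketch
        Sub Main()
            Dim d As Double = 0.3
            Dim m As Decimal = 0.3D

            ' Show every digit the Double actually carries.
            Console.WriteLine(d.ToString("G17"))   ' 0.29999999999999999 (slightly below 0.3)
            Console.WriteLine(m)                   ' 0.3 (stored exactly)
        End Sub
    End Module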

Why does that matter? Rounding.

Normal rounding means 0..4 down, 5..9 up. So it matters whether the result is 0.049999999999... or 0.0500000000...: you may know that it means 5 cents, but the computer does not know that, and it rounds 0.0499... down (wrong) and 0.0500... up (right).

Given that the results of floating point computations always contain small error terms, the decision is pure luck. It gets hopeless if you want decimal round-to-even handling with binary numbers.
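
For example, in VB.NET (2.675 is an arbitrary example value of my own, not one from any real ledger):

    Module RoundingLotterySketch
        Sub Main()
            ' As a Double, 2.675 is really stored as 2.67499999999999982...,
            ' so it rounds the "wrong" way.
            Dim asDouble As Double = 2.675
            Dim asDecimal As Decimal = 2.675D

            Console.WriteLine(Math.Round(asDouble, 2))    ' 2.67  (half a cent quietly gone)
            Console.WriteLine(Math.Round(asDecimal, 2))   ' 2.68
        End Sub
    End Module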

Unconvinced? You insist that in your accounting system everything is perfectly OK? Assets and liabilities equal? OK, then take each of the formatted numbers of each entry, parse them, and sum them with an independent decimal system!

Compare that with the formatted sum. Oops, there is something wrong, isn't there?
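
A minimal sketch of such an independent check in VB.NET, with an invented ledger of 10,000 ten-cent entries:

    Module IndependentCheckSketch
        Sub Main()
            ' Invented ledger: 10,000 entries of ten cents each; the exact total is 1000.00.
            Dim floatTotal As Double = 0.0
            Dim decimalTotal As Decimal = 0D

            For i As Integer = 1 To 10000
                floatTotal += 0.1       ' what the float-based system accumulates
                decimalTotal += 0.1D    ' the independent decimal re-sum
            Next

            Console.WriteLine(floatTotal.ToString("G17"))   ' roughly 1000.0000000001588 -- not 1000
            Console.WriteLine(decimalTotal)                 ' 1000.0
        End Sub
    End Module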

"For that calculation, extreme accuracy and fidelity was required (we used Oracle's FLOAT) so we could record the 'billionths of a penny' being accrued."

That doesn't help against this error, because everyone automatically assumes that the computer sums right, and practically no one checks independently.

answered Oct 21 '22 by TSK


This photo answers it:

[photo not reproduced]

And here is another situation: a man from Northampton got a letter stating his home would be seized if he didn't pay up zero dollars and zero cents!

[photo not reproduced]

answered Oct 21 '22 by Nakilon


First you should read What Every Computer Scientist Should Know About Floating-Point Arithmetic. Then you should really consider using some type of fixed point / arbitrary-precision number package (e.g., Java's BigDecimal or Python's decimal module). Otherwise, you'll be in for a world of hurt. Then figure out whether using the native SQL decimal type is enough.
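
In a VB.NET rewrite, the obvious built-in candidate is System.Decimal (a base-10 type with 28-29 significant digits), with something like DECIMAL(12,2) on the SQL Server side as the question mentions. A rough sketch of the pattern, with invented prices and a made-up tax rate, rounding once at a defined point:

    Module DecimalMoneySketch
        Sub Main()
            Dim unitPrice As Decimal = 19.99D
            Dim quantity As Decimal = 3D
            Dim taxRate As Decimal = 0.0825D   ' invented example rate

            ' Do the arithmetic in Decimal end to end; round once, at posting time.
            Dim net As Decimal = unitPrice * quantity                                         ' 59.97
            Dim tax As Decimal = Math.Round(net * taxRate, 2, MidpointRounding.AwayFromZero)  ' 4.95
            Dim gross As Decimal = net + tax                                                  ' 64.92

            Console.WriteLine(gross.ToString("N2"))   ' 64.92
        End Sub
    End Module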

Floats and doubles exist(ed) to expose the fast x87 floating-point coprocessor that is now pretty much obsolete. Don't use them if you care about the accuracy of the computations and/or don't fully compensate for their limitations.

answered Oct 21 '22 by Rich Schuler