Why don't applications typically use int to internally represent currency values?

Tags: types, currency

Why don't applications typically use an integer datatype (such as int or long in C++/Java/C#) to represent currency values internally, as opposed to using a floating-point datatype (float, double) or something like Java's BigDecimal?

For example, if I'm writing a Java application and I have a variable that I want to represent an actual value in U.S. dollars (no need to represent fractions of pennies), I could declare an int value that represents the number of cents. For example, a value of "$1.00" would be represented as 100. This seems like a good alternative to using a double (see question Why not use Double or Float to represent currency?) or a BigDecimal (which is a more heavyweight object than a simple primitive int).

Obviously, the integer value would need to be "translated" (i.e. from 100 to "$1" or "$1.00") before displaying it to a user, or upon user input of a currency value, but doing this doesn't seem significantly more burdensome than formatting a double or a BigDecimal for display.
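For example, a rough sketch of what I have in mind (the names are mine, just for illustration):

```java
// Hypothetical sketch: storing U.S. dollar amounts as whole cents in an int.
public class CentsExample {
    public static void main(String[] args) {
        int priceInCents = 100;   // represents $1.00
        int taxInCents = 8;       // represents $0.08
        int totalInCents = priceInCents + taxInCents;

        // "Translate" for display: 108 -> "$1.08" (assumes a non-negative amount)
        System.out.printf("$%d.%02d%n", totalInCents / 100, totalInCents % 100);
    }
}
```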

Why isn't this approach a best practice among applications that don't need to represent fractions of cents (or the equivalent in other currency types)?

asked Mar 18 '11 by Jon Schneider

2 Answers

Integer types

It's a bad idea to use most integer data types to represent currencies, because:

  • the representable value range is very limited with respect to common applications;
  • handling fractional values imposes an extra burden on the code.

Specifically, the limited value range can be a serious problem with shorter integer types. Let's consider a common 32-bit signed integer (usually an int):

  • the value range is from approx. -2.15 billion to +2.15 billion, which by itself rules it out for most accounting / banking / serious finance use;
  • when the last two digits are used to represent the fractional part (cents), the range shrinks to approx. -21.5 million to +21.5 million;
  • if multiplication must work without overflow (not to mention mixed-precision calculations), the usable range shrinks even further.

With a 64-bit signed integer (usually a long) you can count up to about 92 thousand trillion. When the global economy is measured in trillions, and digits must also be reserved for fractions and intermediate results, even this is not a reasonable option either.
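For illustration, a quick Java sketch of the overflow problem (the amounts are arbitrary):

```java
// Sketch: a 32-bit int holding cents overflows far below "serious finance" amounts.
public class IntRangeExample {
    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE);   // 2147483647 cents, i.e. about $21.5 million

        int balanceInCents = 2_000_000_000;      // about $20 million
        int doubled = balanceInCents * 2;        // silently overflows
        System.out.println(doubled);             // prints a negative number

        System.out.println(Long.MAX_VALUE);      // 9223372036854775807: roomier, but still finite
    }
}
```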

Floating-point types

It's a bad idea to use floating-point data types, because they are imprecise by nature, which is a fatal problem for the vast majority of monetary calculations.
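For illustration, a quick Java sketch of that imprecision:

```java
import java.math.BigDecimal;

// Sketch: binary floating point cannot represent most decimal fractions exactly.
public class FloatImprecision {
    public static void main(String[] args) {
        double total = 0.10 + 0.20;
        System.out.println(total);          // 0.30000000000000004
        System.out.println(total == 0.30);  // false

        // The same sum with a decimal type stays exact.
        BigDecimal exact = new BigDecimal("0.10").add(new BigDecimal("0.20"));
        System.out.println(exact);          // 0.30
    }
}
```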

Suitable data types

It's a very good idea to use fixed-point or decimal data types, because they usually don't share the negative properties of floating-point and integer data types:

  • the representable value range is broad enough;
  • precision can be adjusted by rounding to match the requirements of the calculation;
  • no confusion, thanks to natural handling of fractional values;
  • decimal numbers are represented exactly.

Last but not least, the most suitable data type depends heavily on the language and its capabilities.
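In Java, for instance, a minimal sketch using BigDecimal with explicit rounding (the price and rate are made up):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Sketch: decimal arithmetic with rounding controlled by the calculation.
public class DecimalExample {
    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("19.99");
        BigDecimal taxRate = new BigDecimal("0.04225");            // 4.225%

        BigDecimal tax = price.multiply(taxRate)                    // 0.8445775 (full precision)
                              .setScale(2, RoundingMode.HALF_UP);   // 0.84
        BigDecimal total = price.add(tax);                          // 20.83

        System.out.println(tax);
        System.out.println(total);
    }
}
```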

Other problems

Also, in many calculation scenarios it is necessary to use different precision for intermediate calculations than for the resulting values. While results usually have to be represented with the precision defined for the particular currency by the applicable law, intermediate calculations may require higher precision. Examples are percentage calculations in loan payments, insurance costs, etc., or currency conversions, where exchange rates are frequently quoted with higher precision.
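A small Java sketch of that pattern (the exchange rate and amount are invented for illustration): keep full precision for the intermediate result, and round once at the end to the currency's precision.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Sketch: keep full precision for the intermediate result,
// round once at the end to the currency's precision.
public class ConversionExample {
    public static void main(String[] args) {
        BigDecimal amountEur = new BigDecimal("123.45");
        BigDecimal eurToUsd  = new BigDecimal("1.083742");          // rate quoted to 6 places

        BigDecimal intermediate = amountEur.multiply(eurToUsd);     // 133.78794990
        BigDecimal amountUsd = intermediate.setScale(2, RoundingMode.HALF_EVEN);

        System.out.println(intermediate);   // full precision kept during the calculation
        System.out.println(amountUsd);      // 133.79, rounded only at the end
    }
}
```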

Multi-currency software also needs to deal with the fact that different currencies have different lawful precisions. Rounding rules may also be imposed by accounting standards.
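In Java, for example, java.util.Currency exposes each currency's default number of fraction digits, which can drive the final rounding (a rough sketch; the amount is arbitrary):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.Currency;

// Sketch: different currencies define different numbers of fraction digits.
public class CurrencyDigitsExample {
    public static void main(String[] args) {
        for (String code : new String[] {"USD", "JPY", "BHD"}) {
            int digits = Currency.getInstance(code).getDefaultFractionDigits(); // 2, 0, 3
            BigDecimal rounded = new BigDecimal("1234.56789")
                    .setScale(digits, RoundingMode.HALF_UP);
            System.out.println(code + ": " + rounded);  // 1234.57, 1235, 1234.568
        }
    }
}
```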

answered Oct 08 '22 by Ondrej Tucny

Why don't applications typically use [whole numbers] to internally represent currency values?

  1. It does not make for simple coding. $1.10 translates to 110¢. Okay, but what about when you need to calculate tax? For example, $1.10 * 4.225% (Missouri's tax rate) results in $0.046475. To keep all money in whole numbers you'd also have to convert the sales tax rate to a whole number (4225, i.e. the rate scaled by 100000), which in turn means scaling 110¢ up to 11000000. The math then becomes 11000000 * 4225 / 100000 = 464750. This is a problem, because those values (11000000 and 464750) now represent fractions of cents, all for the sake of storing money as whole numbers. (A sketch of this arithmetic follows the list.)

  2. Therefore, it's easier to think and code in terms of the native currency. In the United States, this would be dollars with the cents as a decimal fraction (i.e. $1.10). Coding in terms of 110¢ isn't as natural. Base-10 decimal types (such as Java's BigDecimal and .NET's Decimal) are usually precise enough for currency values, unlike base-2 floating-point types such as Float and Double.
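Here is a rough Java sketch of the arithmetic from point 1, next to the same calculation with BigDecimal (the scaling factor of 100000 is just one possible choice):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Sketch of the arithmetic from point 1: whole-number cents vs. a decimal type.
// The 4.225% rate is from the answer; the scaling factor (100000) is one possible choice.
public class SalesTaxExample {
    public static void main(String[] args) {
        // Whole-number approach: cents and rate both scaled by 100000.
        long priceScaled = 110L * 100_000;                      // $1.10 -> 11000000
        long rateScaled  = 4_225;                               // 4.225% -> 4225
        long taxScaled   = priceScaled * rateScaled / 100_000;  // 464750
        System.out.println(taxScaled);                          // 4.6475 cents, still scaled

        // Decimal approach: reads like the money it models.
        BigDecimal tax = new BigDecimal("1.10").multiply(new BigDecimal("0.04225"));
        System.out.println(tax);                                    // 0.0464750
        System.out.println(tax.setScale(2, RoundingMode.HALF_UP));  // 0.05
    }
}
```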

Why isn't this approach a best practice among applications that don't need to represent fractions of cents (or the equivalent in other currency types)?

I think point 1 above shows that it's hard to get away from needing to represent fractions of cents, at least when it comes to calculating sales tax, something common in business applications.

answered Oct 08 '22 by Matt