
Handling money values: is it safe to divide a number by 100?

In the repository code, in a module developed by another team, I discovered that a price is converted from cents to euros simply by dividing the number by 100.

The code is in JavaScript, so it uses the IEEE 754 standard.

I know that it is not safe to handle money values as floating-point numbers, but I was wondering whether this particular case is safe before sending the task back to the other team.

So far I haven't found any case where dividing an integer by 100 gives an inaccurate result. Let's go further: 100 is just 2*2*5*5.

We know that dividing a number by 2 is safe, since it amounts to nothing more than shifting the binary representation by one position.

So we can say that if there exists a number that cannot be accurately divided by 5, then division by 100 is not accurate either.

I ran many tests and didn't find any such number, but I'm far from a theoretical proof of the claim.

So, is dividing a number by 100 safe in the IEEE 754 standard?

asked Mar 12 '19 by Cristian Traìna


2 Answers

A floating-point decimal number with 15 significant digits of precision converts to a 64-bit binary floating-point number (Number in JavaScript) and back to decimal without loss of precision. Although the binary number may not store the decimal number exactly, it has more bits of precision (a minimum of 17 significant decimal digits is required to represent a 53-bit mantissa) and converts with rounding back to the original decimal exactly. These extra binary digits of the mantissa are there precisely to keep those 15 significant decimal digits exact in all results of CPU arithmetic. See Number of Digits Required For Round-Trip Conversions for full details.
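
For instance, the round trip can be checked directly in a console; the literal below is arbitrary, and toPrecision(15) converts the Number back to 15 significant decimal digits:

// A 15-significant-digit decimal survives the trip into a 64-bit binary Number and back.
const literal = "0.123456789012345";                  // 15 significant decimal digits
const roundTripped = Number(literal).toPrecision(15); // decimal -> binary -> decimal
console.log(roundTripped === literal);                // true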

When you divide by 100, the binary result still has 53 bits of precision, with a possible error in the unit of least precision (the lowest bit of the mantissa) unless the result underflows to 0 (see What Every Computer Scientist Should Know About Floating-Point Arithmetic for full details). That binary number still converts with rounding to a correct exact decimal number within 15 significant decimal digits of precision.

In other words, if your decimal numbers have no more than 15 significant digits, then dividing them by 100 keeps that precision.

E.g. try 123456789012345 / 100 and 0.000123456789012345 / 100 in your browser console (both numbers have 15 significant decimal digits of precision); these divisions return correct decimal numbers within 15 significant decimal digits:

123456789012345 / 100
1234567890123.45

0.000123456789012345 / 100
0.00000123456789012345
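
If a couple of hand-picked values is not convincing enough, a quick brute-force sketch along these lines can check random 15-digit integers (the helper name dividesExactlyBy100 is just for illustration); it compares the toPrecision(15) rendering of n / 100 with the decimal string obtained by inserting a point before the last two digits of n:

// Check: for a 15-digit integer n, (n / 100).toPrecision(15) should read back
// as exactly n with a decimal point inserted before its last two digits.
function dividesExactlyBy100(n) {
  const s = String(n);                                 // the 15 digits of n
  const expected = s.slice(0, 13) + "." + s.slice(13); // exact decimal value of n/100
  return (n / 100).toPrecision(15) === expected;
}

let failures = 0;
for (let i = 0; i < 1e6; i++) {
  // Random integer with exactly 15 decimal digits, i.e. in [1e14, 1e15).
  const n = 1e14 + Math.floor(Math.random() * 9e14);
  if (!dividesExactlyBy100(n)) failures++;
}
console.log(failures); // 0 expected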

answered Sep 21 '22 by Maxim Egorushkin


If x is a 15-decimal-digit integer, then converting x to a JavaScript Number, dividing by 100, and converting the result to a numeral with 15 significant decimal digits produces exactly x/100. A proof follows.

Notes:

  • Converting the result of the division to a numeral with 15 significant decimal digits yields exactly x/100. The actual result of the division, while it is in the Number format, generally will not be exactly x/100. For example, 73/100 yields 0.729999999999999982236431605997495353221893310546875 (a quick console illustration follows these notes).
  • Converting the result of the division to more than 15 significant decimal digits will also not generally yield x/100, as the extra digits may reveal the difference, as shown for .73 above. (And, of course, using fewer digits may be insufficient to represent x/100.) Thus, if it is desired to communicate exactly x/100 to another process, it must be done with exactly 15 significant decimal digits (or some other mitigation for error).
  • The proof below applies to 15-digit integers x, not to other 15-significant-decimal-digit numbers (such as numerals with 15 decimal digits followed by one or more zeros or numerals starting with a decimal point followed by some zeros followed by 15 significant digits).
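
To make the first note above concrete, here is what a console shows for 73 / 100 (toPrecision requests a given number of significant decimal digits):

const q = 73 / 100;             // the nearest Number to 0.73, not exactly 0.73
console.log(q.toPrecision(15)); // "0.730000000000000": 15 digits recover 0.73 exactly
console.log(q.toPrecision(17)); // "0.72999999999999998": extra digits expose the error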

Preliminaries

JavaScript is an implementation of ECMAScript, specified in Ecma-262 and ISO/IEC 16262. In clause 6.1.6, Ecma-262 specifies that the IEEE-754 basic 64-bit binary floating-point format is used for ECMAScript’s Number type, except that only a single NaN is used. Clause 6.1.6 further describes the arithmetic used, which is essentially IEEE-754 arithmetic with rounding-to-nearest, ties-to-even.

The IEEE-754 basic 64-bit binary floating-point format uses a 53-bit significand.

The Unit of Least Precision (ULP) of a binary floating-point number is the value attributed to the position of the least significant bit in its significand. (Thus, the ULP scales with the exponent.) Measured in ULP, all normal 53-bit significands are in [2^52 ULP, 2^53 ULP).

For a 15-significant-digit decimal number, its ULP herein will be the value attributed to the 15th digit position, counting down from the leading significant digit.
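
For reference, the binary ULP described above can be computed in JavaScript roughly as follows (a sketch assuming x is a normal, finite, nonzero Number; binaryUlp is just an illustrative name):

// Value of the least significant of the 53 significand bits of x.
function binaryUlp(x) {
  const e = Math.floor(Math.log2(Math.abs(x))); // exponent of the leading significand bit
  return 2 ** (e - 52);                         // the remaining 52 bits sit below it
}
// (Near exact powers of two, Math.log2 rounding could push e one too high;
// a bit-level method would be needed for full rigor.)
console.log(binaryUlp(1));                // 2.220446049250313e-16, i.e. Number.EPSILON
console.log(binaryUlp(1234567890123.45)); // 0.000244140625 = 2^-12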

Lemma

First, we establish the well-known fact that converting a 15-significant-decimal-digit number to Number and back to 15 significant decimal digits yields the original number, provided the number is within the normal range of the Number format.

If x is a number of 15 significant decimal digits (not necessarily an integer) within the normal range of the floating-point format (2^−1022 ≤ |x| < 2^1024), then converting x to the nearest value representable in the floating-point format and then converting the result to 15 significant decimal digits produces exactly x, when both conversions are performed with rounding-to-nearest, ties-to-even. To see this, let y be the result of the first conversion. If y differs from x by less than ½ of ULP of x, then x is the 15-significant-digit number nearest y and hence must be the result of the second conversion.

In the first conversion, the result y is at most ½ ULP from x, due to the rounding rule. This is a relative accuracy of at most ½ / 2^52 (that is, the potential ½ ULP error divided by the smallest the significand can be, measured in ULP). Thus, y differs from x by at most one part in 2^53. In the worst case, the digits of x may be 999999999999999 = 10^15 − 1, so the error relative to the ULP of x would be (10^15 − 1)/2^53, which is about .111 times the ULP of x. Thus, y always differs from x by less than ½ of its ULP, so converting y back to 15 significant decimal digits yields x.
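
As a quick numerical sanity check of that worst case (not a substitute for the argument), the ratio can be evaluated directly:

// Worst case in the lemma: an error of one part in 2^53 on a number whose
// 15 digits are all nines, measured against the decimal ULP of that number.
console.log((1e15 - 1) / 2 ** 53); // ≈ 0.111, well below the 0.5 ULP needed to change the 15-digit rounding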

Proof

If x is a 15-decimal-digit integer, it is exactly representable in the Number format, since the Number format has 53 bits in its significand and is therefore capable of exactly representing all integers up to 2^53, which is about 9.007e15, which is more than 10^15.

Thus, converting x to Number yields exactly x with no error.

Then, by the rules for rounding arithmetic results, dividing x by 100 yields the representable number closest to x/100. Call this y. Now note that x/100 is a number representable with 15 significant decimal digits. (It could be written in scientific notation as x·10^−2 or in source code as the digits of x suffixed by e-2.) Note that converting x/100 to Number also yields y, since the conversion yields, just as the division does, the number exactly representable in the Number format that is closest to x/100. By the lemma, the result of converting x/100 to Number and back to a 15-significant-decimal-digit number yields x/100, and so the result of converting x to Number, then dividing by 100, then converting to 15 significant decimal digits also yields x/100.
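
The chain of steps can be traced in a console for one concrete 15-digit integer (the value below is arbitrary):

const x = 999999999999999;             // any 15-decimal-digit integer; exactly representable as a Number
const y = x / 100;                     // the representable Number closest to x/100
console.log(y === 999999999999999e-2); // true: converting x/100 directly gives the same Number y
console.log(y.toPrecision(15));        // "9999999999999.99", i.e. exactly x/100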

answered Sep 22 '22 by Eric Postpischil