 

Floating point arithmetic not producing exact results [duplicate]


If you need exact decimal values, you should use java.math.BigDecimal. Then read "What Every Computer Scientist Should Know About Floating-Point Arithmetic" for the background of why you're getting those results.

(I have a .NET-centric article which you may find easier to read - and certainly shorter. The differences between Java and .NET are mostly irrelevant for the purposes of understanding this issue.)
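For example, here's a quick sketch of the difference, using BigDecimal's String constructor so you don't start from an already-inexact double:

import java.math.BigDecimal;

public class ExactDecimal {
    public static void main(String[] args) {
        // doubles store the nearest binary fractions, so the error shows up in the sum
        System.out.println(0.1 + 0.2);  // 0.30000000000000004

        // BigDecimal built from String literals keeps the decimal values exact
        BigDecimal a = new BigDecimal("0.1");
        BigDecimal b = new BigDecimal("0.2");
        System.out.println(a.add(b));   // 0.3
    }
}

Note that new BigDecimal(0.1) (the double constructor) would hand you the already-rounded binary value, so for exact decimal input you generally want the String constructor or BigDecimal.valueOf.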


Floating point numbers use binary fractions and not decimal fractions. That is, you're used to decimal fractions made up of a tenths digit, a hundredths digit, a thousandths digit, etc. d1/10 + d2/100 + d3/1000 ... But floating point numbers are in binary, so they have a half digit, a quarter digit, an eighth digit, etc. d1/2 + d2/4 + d3/8 ...

Many decimal fractions cannot be expressed exactly in any finite number of binary digits. For example, 1/2 is no problem: in decimal it's .5, in binary it's .1. 3/4 is decimal .75, binary .11. But 1/10, which is a clean .1 in decimal, is .0001100110011... in binary, with the "0011" repeating forever. Since the computer can store only a finite number of digits, at some point this has to get chopped off, so the stored value is not exact. When we convert back to decimal on output, we get a strange-looking number.
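You can see this from Java itself: the literal 0.1 actually stores the nearest binary fraction, and the error becomes visible once you do enough arithmetic with it. A quick illustration:

import java.math.BigDecimal;

public class TenthDemo {
    public static void main(String[] args) {
        // Passing the double 0.1 to BigDecimal reveals the value that is really stored:
        // 0.1000000000000000055511151231257827021181583404541015625
        System.out.println(new BigDecimal(0.1));

        // Adding that value ten times does not give exactly 1.0
        double sum = 0.0;
        for (int i = 0; i < 10; i++) {
            sum += 0.1;
        }
        System.out.println(sum);        // 0.9999999999999999
        System.out.println(sum == 1.0); // false
    }
}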

As Jon Skeet says, if you need exact decimal fractions, use BigDecimal. If performance is an issue, you could roll your own decimal fractions. For example, if you know you always want exactly 3 decimal places and that the numbers will not be more than a million or so, you could simply use ints with an assumed 3 decimal places, making adjustments as necessary when you do arithmetic and writing an output format function to insert the decimal point in the right place. But 99% of the time performance isn't a big enough issue to be worth the trouble.
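A rough sketch of that do-it-yourself idea (the class and method names here are made up for illustration): keep amounts as longs counting thousandths and only insert the decimal point when formatting.

public class Milli {
    // Store values as a whole number of thousandths, e.g. "1.250" -> 1250.
    // For 3 decimal places and values under a million or so, the rounding
    // step makes the brief trip through double harmless.
    static long fromString(String s) {
        return Math.round(Double.parseDouble(s) * 1000);
    }

    // Addition and subtraction work directly on the scaled longs.
    static long add(long a, long b) { return a + b; }

    // Multiplying two scaled values needs one rescale step (truncates here).
    static long multiply(long a, long b) { return (a * b) / 1000; }

    // Insert the decimal point only when printing.
    // (Sign handling for values between -1 and 0 is left out of this sketch.)
    static String format(long v) {
        return String.format("%d.%03d", v / 1000, Math.abs(v % 1000));
    }

    public static void main(String[] args) {
        long price = fromString("19.950");
        long tax   = fromString("1.600");
        System.out.println(format(add(price, tax))); // 21.550
    }
}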


Floating-point numbers are imprecise, especially since they work in binary fractions (1/2, 1/4, 1/8, 1/16, 1/32, ...) instead of decimal fractions (1/10, 1/100, 1/1000, ...). Just define what you feel is "close enough" and use something like Math.abs(a-b) < 0.000001.
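For example (the tolerance here is arbitrary; pick one that suits your data):

public class ApproxEquals {
    // Treat two doubles as equal if they differ by less than a chosen tolerance.
    static boolean approximatelyEqual(double a, double b, double epsilon) {
        return Math.abs(a - b) < epsilon;
    }

    public static void main(String[] args) {
        System.out.println(0.1 + 0.2 == 0.3);                          // false
        System.out.println(approximatelyEqual(0.1 + 0.2, 0.3, 1e-6));  // true
    }
}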


On a philosophical note, I wonder: Most computer CPUs today have built-in support for integer arithmetic and floating-point arithmetic, but no support for decimal arithmetic. Why not? I haven't written an application in years where floats were usable because of this rounding problem. You certainly can't use them for money amounts: No one wants to print a price on a sales receipt of "$42.3200003". No accountant is going to accept "we might be off by a penny here and there because we're using binary fractions and had rounding errors".

Floats are fine for measurements, like distance or temperature, where there's no such thing as an "exact answer" and you have to round off to the precision of your instruments at some point anyway. I suppose for people who are programming the computer in the chemistry lab, floats are used routinely. But for those of us in the business world, they're pretty much useless.

Back in those ancient days when I programmed on mainframes, the IBM 360 family of CPUs had built-in support for packed decimal arithmetic. They stored strings where each byte held two decimal digits, i.e. the first four bits had values from 0 to 9 and ditto the second four bits, and the CPU had arithmetic functions to manipulate them. Why can't Intel do something like that? Then Java could add a "decimal" data type and we wouldn't need all the extra junk.
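The packing scheme itself is easy to mimic in Java, just to illustrate the idea (nothing here is a real CPU or library facility):

public class PackedDigits {
    // Pack two decimal digits (each 0-9) into one byte: one per four-bit nibble.
    static byte pack(int high, int low) {
        return (byte) ((high << 4) | low);
    }

    static int highDigit(byte b) { return (b >> 4) & 0xF; }
    static int lowDigit(byte b)  { return b & 0xF; }

    public static void main(String[] args) {
        byte b = pack(4, 2);                                 // the digits "42" in one byte
        System.out.println("" + highDigit(b) + lowDigit(b)); // prints 42
    }
}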

I'm not saying to abolish floats, of course. Just add decimals.

Oh well, as great social movements go, I don't suppose this is one that is going to generate a lot of popular excitement or rioting in the streets.


You can make the output of your program look more like you expect by using formatted output.

http://java.sun.com/javase/6/docs/api/java/util/Formatter.html

Obviously the underlying floating point arithmetic still works the same, but at least the output will be more readable.

For example, to round your results to two decimal places:

System.out.print(String.format("%.2f", i) + ",");
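Putting it together, here's a small sketch of the raw value next to the formatted one (the variable name is just for illustration):

public class RoundedOutput {
    public static void main(String[] args) {
        double value = 0.1 + 0.2;
        System.out.println(value);                         // 0.30000000000000004
        System.out.println(String.format("%.2f", value));  // 0.30
        System.out.printf("%.2f%n", value);                // same result via printf
    }
}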