Are floating-point addition and multiplication associative?

I ran into a problem when adding three floating-point values and comparing their sum to 1:

cout << ((0.7 + 0.2 + 0.1) == 1) << endl;     // output is 0
cout << ((0.7 + 0.1 + 0.2) == 1) << endl;     // output is 1

Why would these values come out different?

asked Apr 29 '12 by Karen Tsirunyan



2 Answers

Floating point addition is not necessarily associative. If you change the order in which you add things up, this can change the result.

The standard paper on the subject is What Every Computer Scientist Should Know About Floating-Point Arithmetic. It gives the following example:

Another grey area concerns the interpretation of parentheses. Due to roundoff errors, the associative laws of algebra do not necessarily hold for floating-point numbers. For example, the expression (x+y)+z has a totally different answer than x+(y+z) when x = 1e30, y = -1e30 and z = 1 (it is 1 in the former case, 0 in the latter).
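Here is a minimal sketch of that example (variable names are mine; the outputs assume IEEE-754 doubles):

#include <iostream>

int main() {
    double x = 1e30, y = -1e30, z = 1;
    std::cout << (x + y) + z << std::endl;  // prints 1: x + y is exactly 0, and 0 + 1 is 1
    std::cout << x + (y + z) << std::endl;  // prints 0: y + z rounds back to -1e30, which then cancels x
    return 0;
}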

answered Oct 18 '22 by NPE


With currently popular machines and software, what likely happened is this:

The compiler encoded .7 as 0x1.6666666666666p-1 (this is the hexadecimal numeral 1.6666666666666 multiplied by 2 to the power of -1), .2 as 0x1.999999999999ap-3, and .1 as 0x1.999999999999ap-4. Each of these is the number representable in floating-point that is closest to the decimal numeral you wrote.

Observe that each of these hexadecimal floating-point constants has exactly 53 bits in its significand (the "fraction" part, often inaccurately called the mantissa). The hexadecimal numeral for the significand has a "1" and thirteen more hexadecimal digits (four bits each, 52 total, 53 including the "1"), which is what the IEEE-754 standard provides for 64-bit binary floating-point numbers.
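You can verify these constants yourself by printing the values with C++11's std::hexfloat; a small sketch (the exact output formatting can vary between standard libraries, but on a typical IEEE-754 implementation it matches the constants above):

#include <iostream>

int main() {
    std::cout << std::hexfloat;
    std::cout << 0.7 << std::endl;  // 0x1.6666666666666p-1
    std::cout << 0.2 << std::endl;  // 0x1.999999999999ap-3
    std::cout << 0.1 << std::endl;  // 0x1.999999999999ap-4
    return 0;
}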

Let's add the numbers for .7 and .2: 0x1.6666666666666p-1 and 0x1.999999999999ap-3. First, scale the exponent of the second number to match the first. To do this, we increase the exponent by 2 (changing "p-3" to "p-1", which multiplies the scale factor by 4) and compensate by multiplying the significand by 1/4, giving 0x0.66666666666668p-1. Then add 0x1.6666666666666p-1 and 0x0.66666666666668p-1, giving 0x1.ccccccccccccc8p-1. Note that this number has more than 53 bits in the significand: the "8" is the 14th digit after the period. A 64-bit floating-point result cannot hold this many bits, so the sum has to be rounded to the nearest representable number. In this case, there are two numbers that are equally near, 0x1.cccccccccccccp-1 and 0x1.ccccccccccccdp-1. When there is a tie, the number with a zero in the lowest bit of the significand is used. "c" is even and "d" is odd, so "c" is used. The final result of the addition is 0x1.cccccccccccccp-1.

Next, add the number for .1 (0x1.999999999999ap-4) to that. Again, we scale to make the exponents match, so 0x1.999999999999ap-4 becomes 0x0.33333333333334p-1 (this time the exponent increases by 3, so the significand is multiplied by 1/8). Then add that to 0x1.cccccccccccccp-1, giving 0x1.fffffffffffff4p-1. The trailing "4" is only a quarter of an ulp, so rounding to 53 bits gives 0x1.fffffffffffffp-1, and that is the final result of .7+.2+.1.
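A quick sketch checking both steps of this order of addition (outputs assume IEEE-754 doubles; hexfloat formatting may vary):

#include <iostream>

int main() {
    std::cout << std::hexfloat;
    std::cout << 0.7 + 0.2 << std::endl;        // 0x1.cccccccccccccp-1, slightly below 0.9
    std::cout << 0.7 + 0.2 + 0.1 << std::endl;  // 0x1.fffffffffffffp-1, slightly below 1
    return 0;
}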

Now consider .7+.1+.2. For .7+.1, add 0x1.6666666666666p-1 and 0x1.999999999999ap-4. Recall the latter is scaled to 0x0.33333333333334p-1. Then the exact sum is 0x1.99999999999994p-1. Again the trailing "4" is less than half an ulp, so rounding to 53 bits gives 0x1.9999999999999p-1.

Then add the number for .2 (0x1.999999999999ap-3), which is scaled to 0x0.66666666666668p-1. The exact sum is 0x1.fffffffffffff8p-1, which is 2^-54 below 1. Rounding that to 53 bits is another tie, this time between 0x1.fffffffffffffp-1 and 0x2.0000000000000p-1, and again the number with the even low bit wins: 0x2.0000000000000p-1. Since floating-point significands are always scaled to start with 1 (except for special cases: zero, infinity, and very small numbers at the bottom of the representable range), this is written as 0x1.0000000000000p0, which is exactly 1.
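And the corresponding sketch for this order (same assumptions; many libraries print 1 in the shortened form 0x1p+0):

#include <iostream>

int main() {
    std::cout << std::hexfloat;
    std::cout << 0.7 + 0.1 << std::endl;        // 0x1.9999999999999p-1, slightly below 0.8
    std::cout << 0.7 + 0.1 + 0.2 << std::endl;  // 0x1p+0, exactly 1
    return 0;
}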

Thus, because of errors that occur when rounding, .7+.2+.1 returns 0x1.fffffffffffffp-1 (very slightly less than 1), and .7+.1+.2 returns 0x1.0000000000000p0 (exactly 1).
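Printing the difference from 1 makes the size of that rounding error visible; a final sketch (again assuming IEEE-754 doubles):

#include <iostream>

int main() {
    std::cout << (0.7 + 0.2 + 0.1) - 1.0 << std::endl;  // -1.11022e-16, which is -2^-53
    std::cout << (0.7 + 0.1 + 0.2) - 1.0 << std::endl;  // 0
    return 0;
}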

answered Oct 17 '22 by Eric Postpischil