
In which order should floats be added to get the most precise result?

This was a question I was asked at a recent interview, and I'd like to know the answer. I don't actually remember the theory of numerical analysis, so please help me :)

If we have some function, which accumulates floating-point numbers:

std::accumulate(v.begin(), v.end(), 0.0); 

where v is a std::vector<float>, for example.

  • Would it be better to sort these numbers before accumulating them?

  • Which order would give the most precise answer?

I suspect that sorting the numbers in ascending order would reduce the numerical error, but unfortunately I can't prove it myself.

P.S. I do realize this probably has nothing to do with real world programming, just being curious.

asked Jul 14 '11 by Yippie-Ki-Yay



2 Answers

Your instinct is basically right: sorting in ascending order (of magnitude) usually improves things somewhat. Consider the case where we're adding single-precision (32-bit) floats, and there are 1 billion values equal to 1/(1 billion), and one value equal to 1. If the 1 comes first, then the sum will come to 1, since 1 + (1/1 billion) is 1 due to loss of precision. Each addition has no effect at all on the total.

If the small values come first, they will at least sum to something, although even then I have 2^30 of them, whereas after 2^25 or so the running total is large enough that each one individually stops affecting it (the partial sum is then around 2^-5, more than 2^24 times each 2^-30 addend, so each addend falls below half an ulp of the total and is rounded away). So I'm still going to need more tricks.
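
A minimal C++ sketch of this effect, with the loop bound and variable names just being the example above made concrete (exact outputs depend on the platform's rounding, but any IEEE-754 system should behave roughly like this):

#include <iostream>

int main() {
    const int n = 1000000000;        // one billion tiny values
    const float tiny = 1.0f / n;

    float big_first = 1.0f;          // large value first: every tiny
    for (int i = 0; i < n; ++i)      // addition is absorbed entirely
        big_first += tiny;

    float small_first = 0.0f;        // tiny values first: they accumulate
    for (int i = 0; i < n; ++i)      // for a while, then stall
        small_first += tiny;
    small_first += 1.0f;

    std::cout << big_first << '\n';   // prints 1: the tail was lost completely
    std::cout << small_first << '\n'; // roughly 1.03, still far from the true 2
}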

That's an extreme case, but in general adding two values of similar magnitude is more accurate than adding two values of very different magnitudes, since you "discard" fewer bits of precision in the smaller value that way. By sorting the numbers, you group values of similar magnitude together, and by adding them in ascending order you give the small values a "chance" of cumulatively reaching the magnitude of the bigger numbers.
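
As a concrete sketch of that strategy (the function name is mine; sorting uses std::abs, since it's magnitude that matters once signs are mixed, as discussed next):

#include <algorithm>
#include <cmath>
#include <numeric>
#include <vector>

// Sum in ascending order of magnitude. Note that the 0.0 initial value
// (a double, not 0.0f) also helps: the running total is then carried in
// double precision even though the elements are floats.
double sum_ascending(std::vector<float> v) {   // taken by value: we reorder it
    std::sort(v.begin(), v.end(),
              [](float a, float b) { return std::abs(a) < std::abs(b); });
    return std::accumulate(v.begin(), v.end(), 0.0);
}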

Still, if negative numbers are involved it's easy to "outwit" this approach. Consider three values to sum, {1, -1, 1 billionth}. The arithmetically correct sum is 1 billionth, but if my first addition involves the tiny value then my final sum will be 0. Of the 6 possible orders, only 2 are "correct" - {1, -1, 1 billionth} and {-1, 1, 1 billionth}. All 6 orders give results that are accurate at the scale of the largest-magnitude value in the input (0.0000001% out), but for 4 of them the result is inaccurate at the scale of the true solution (100% out). The particular problem you're solving will tell you whether the former is good enough or not.
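
A quick brute-force check of all six orderings (not part of the original answer, just a way to see the 4-versus-2 split for yourself):

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<float> v{-1.0f, 1e-9f, 1.0f};   // sorted, for next_permutation
    do {
        float sum = 0.0f;
        for (float x : v) sum += x;
        std::cout << v[0] << " + " << v[1] << " + " << v[2]
                  << " = " << sum << '\n';      // four orders print 0, two print 1e-09
    } while (std::next_permutation(v.begin(), v.end()));
}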

In fact, you can play a lot more tricks than just adding them in sorted order. If you have lots of very small values, a middling number of middling values, and a small number of large values, then it might be most accurate to first add up all the small ones, then separately total the middling ones, add those two totals together, and then add the large ones.

It's not at all trivial to find the most accurate combination of floating-point additions, but to cope with really bad cases you can keep a whole array of running totals at different magnitudes, add each new value to the total that best matches its magnitude, and when a running total starts to get too big for its magnitude, add it into the next total up and start a new one (see the sketch below). Taken to its logical extreme, this process is equivalent to performing the sum in an arbitrary-precision type, so if you need that much accuracy you might as well just do that. But given the simplistic choice of adding in ascending or descending order of magnitude, ascending is the better bet.
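
Here is an illustrative sketch of that "array of running totals" idea. The binning by binary exponent and the carry rule are my own illustrative choices, not a canonical algorithm:

#include <cmath>
#include <map>
#include <vector>

// Bin each value by its binary exponent; when a bin's total outgrows its
// band, carry it into the bin above, so every addition combines values of
// roughly similar magnitude.
float binned_sum(const std::vector<float>& v) {
    auto exponent = [](float x) { int e = 0; std::frexp(x, &e); return e; };
    std::map<int, float> bins;            // binary exponent -> running total
    for (float x : v) {
        int e = exponent(x);
        bins[e] += x;
        while (exponent(bins[e]) > e) {   // total outgrew its band: carry up
            float carry = bins[e];
            bins[e] = 0.0f;
            e = exponent(carry);
            bins[e] += carry;
        }
    }
    float total = 0.0f;
    for (const auto& kv : bins)           // std::map iterates keys in ascending
        total += kv.second;               // order, so smallest bands come first
    return total;
}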

It does have some relation to real-world programming, since there are cases where a calculation can go very badly wrong if you accidentally chop off a "heavy" tail consisting of a large number of values, each of which is too small to individually affect the sum, or if you throw away too much precision from a lot of small values that individually only affect the last few bits of the sum. In cases where the tail is negligible anyway, you probably don't care: for example, if you're only adding together a small number of values in the first place, or if you're only using a few significant figures of the sum.

answered Sep 21 '22 by Steve Jessop


There is also an algorithm designed for this kind of accumulation operation, called Kahan summation, which you should probably be aware of.

According to Wikipedia,

The Kahan summation algorithm (also known as compensated summation) significantly reduces the numerical error in the total obtained by adding a sequence of finite precision floating point numbers, compared to the obvious approach. This is done by keeping a separate running compensation (a variable to accumulate small errors).

In pseudocode, the algorithm is:

function kahanSum(input)
    var sum = input[1]
    var c = 0.0               // A running compensation for lost low-order bits.
    for i = 2 to input.length
        y = input[i] - c      // So far, so good: c is zero.
        t = sum + y           // Alas, sum is big, y small, so low-order digits of y are lost.
        c = (t - sum) - y     // (t - sum) recovers the high-order part of y;
                              // subtracting y recovers -(low part of y).
        sum = t               // Algebraically, c should always be zero.
                              // Beware eagerly optimising compilers!
    next i                    // Next time around, the lost low part will be
                              // added to y in a fresh attempt.
    return sum
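
A direct C++ rendering of that pseudocode might look like this (a sketch, starting the sum at 0 rather than input[1]; note it must be built without -ffast-math, since aggressive floating-point reassociation lets the compiler conclude that c is always zero and delete it):

#include <vector>

// Kahan (compensated) summation, following the pseudocode above.
float kahan_sum(const std::vector<float>& input) {
    float sum = 0.0f;
    float c = 0.0f;            // running compensation for lost low-order bits
    for (float x : input) {
        float y = x - c;       // apply the correction saved from last time
        float t = sum + y;     // low-order bits of y are lost here...
        c = (t - sum) - y;     // ...and recovered here, negated, into c
        sum = t;
    }
    return sum;
}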
answered Sep 21 '22 by Daniel Pryden