I am doing an exercise from the programming book A Book on C. The exercise suggests that, to find the average of a group of numbers, the algorithm:
avg += (x - avg) / i;
is better than:
sum += x;
avg = sum / i;
'x' is a variable used to store the input numbers. The book also suggests that, besides preventing overflow, the first algorithm has other benefits over the second one. Can anyone help me? Thanks!
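For context, here is a minimal sketch of how the two variants might sit in a loop. The loop structure and the sample data array are my assumptions; in the exercise, x would hold each input number in turn:

    #include <stdio.h>

    int main(void)
    {
        /* Assumed sample input; 'x' takes each value in turn. */
        double data[] = {2.0, 4.0, 6.0, 8.0};
        int n = 4;

        double avg = 0.0;              /* first method: running average */
        double sum = 0.0, avg2 = 0.0;  /* second method: sum, then divide */

        for (int i = 1; i <= n; ++i) {
            double x = data[i - 1];

            avg += (x - avg) / i;      /* first method */

            sum += x;                  /* second method */
            avg2 = sum / i;
        }

        printf("running average:   %f\n", avg);
        printf("sum-based average: %f\n", avg2);
        return 0;
    }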
I'm assuming we're talking about floating-point arithmetic here (otherwise the "better" average will be terrible).
In the second method, the intermediate result (sum) will tend to grow without bound, which means you'll eventually lose low-end precision. In the first method, the intermediate result should stay at roughly the same magnitude as your input data (assuming your input doesn't have an enormous dynamic range), which means it will retain precision better.
However, I can imagine that as i gets bigger and bigger, the value of (x - avg) / i will get less and less accurate (relatively). So it also has its disadvantages.
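One way to see the low-end precision loss is to average many copies of the same value in single precision. This experiment is mine, not from the book: with IEEE-754 float, a running sum stops growing once it reaches 2^24, while the running average stays at the true value.

    #include <stdio.h>

    int main(void)
    {
        const long N = 20000000L;  /* large enough for the float sum to saturate */

        float sum = 0.0f;          /* second method: accumulate, then divide */
        float avg_run = 0.0f;      /* first method: running average */

        for (long i = 1; i <= N; ++i) {
            float x = 1.0f;        /* true average is exactly 1.0 */

            avg_run += (x - avg_run) / i;  /* stays at 1.0: x - avg_run is 0 */

            sum += x;              /* stops increasing near 2^24 = 16777216 */
        }

        printf("running average:   %f\n", avg_run);
        printf("sum-based average: %f\n", sum / N);
        return 0;
    }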
It is better in the sense that it computes a running average, i.e. you don't need to have all your numbers in advance. You can calculate it as you go, as the numbers become available.
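A sketch of what that looks like as a small update routine (the struct and function names here are my own, for illustration): only the count and the current average are kept, so past inputs never need to be stored.

    #include <stdio.h>

    /* Running-average state: count of values seen and the current average. */
    struct running_avg {
        long   count;
        double avg;
    };

    /* Fold one new value into the average as it arrives. */
    void running_avg_add(struct running_avg *r, double x)
    {
        r->count += 1;
        r->avg += (x - r->avg) / r->count;
    }

    int main(void)
    {
        struct running_avg r = {0, 0.0};
        /* Assumed stand-in for values arriving one at a time. */
        double incoming[] = {3.0, 5.0, 7.0};

        for (int i = 0; i < 3; ++i) {
            running_avg_add(&r, incoming[i]);
            printf("after %ld values: average = %f\n", r.count, r.avg);
        }
        return 0;
    }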