
Best algorithm for avoiding loss of precision?

A recent homework assignment I have received asks us to take expressions which could create a loss of precision when performed in the computer, and alter them so that this loss is avoided.

Unfortunately, the directions for doing this haven't been made very clear. From watching various examples being performed, I know that there are certain methods of doing this: using Taylor series, using conjugates if square roots are involved, or finding a common denominator when two fractions are being subtracted.

However, I'm having some trouble noticing exactly when loss of precision is going to occur. So far the only thing I know for certain is that when you subtract two numbers that are close to being the same, loss of precision occurs since high order digits are significant, and you lose those from round off.

My question is what are some other common situations I should be looking for, and what are considered 'good' methods of approaching them?

For example, here is one problem:

f(x) = tan(x) − sin(x)  when x ~ 0

What is the best and worst algorithm for evaluating this out of these three choices:

(a) (1/ cos(x) − 1) sin(x),
(b) (x^3)/2
(c) tan(x)*(sin(x)^2)/(cos(x) + 1).

I understand that when x is close to zero, tan(x) and sin(x) are nearly the same. I don't understand how or why any of these algorithms are better or worse for solving the problem.
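For a concrete feel of the difference, here is a quick numerical experiment (a sketch in Python using doubles; the reference x^3/2 from choice (b) is itself accurate to about x^2/4 in relative terms for small x):

```python
import math

x = 1e-7  # small argument where tan(x) and sin(x) nearly cancel

naive = math.tan(x) - math.sin(x)                       # direct subtraction: catastrophic cancellation
a = (1.0 / math.cos(x) - 1.0) * math.sin(x)             # (a): 1/cos(x) - 1 still cancels
b = x**3 / 2.0                                          # (b): leading Taylor term, inexact but stable
c = math.tan(x) * math.sin(x)**2 / (math.cos(x) + 1.0)  # (c): no cancellation anywhere

print(a, b, c, naive)
```

At x = 1e-7 the true value is about 5e-22; (c) agrees with (b) to nearly full double precision, while (a) is typically off at the percent level, because 1/cos(x) - 1 throws away most of the significant digits before the multiplication.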

Zachary Wright, asked Feb 02 '09

3 Answers

Another rule of thumb usually used is this: When adding a long series of numbers, start adding from numbers closest to zero and end with the biggest numbers.

Explaining why this is good is a bit tricky. When you add a small number to a large number, there is a chance it will be completely discarded, because it is smaller than the lowest digit of the large number's mantissa. Take for instance this situation:

a = 1,000,000;
do 100,000,000 times:
   a += 0.01;

if 0.01 is smaller than the lowest mantissa digit, then the loop does nothing and the end result is a == 1,000,000. But if you do it like this:

a = 0;
do 100,000,000 times:
   a += 0.01;
a += 1,000,000;

Then the low numbers slowly grow, and you're more likely to end up with something close to a == 2,000,000, which is the right answer.
This is of course an extreme example, but I hope you get the idea.
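The same effect can be reproduced with ordinary Python floats, which are doubles, so the magnitudes have to be scaled up (a sketch, with fewer iterations: 0.5 is below half a unit in the last place of 1e16, so each addition to the large number is simply rounded away):

```python
# Big number first: every 0.5 is rounded away, the sum never moves.
big_first = 1e16
for _ in range(1_000_000):
    big_first += 0.5            # 0.5 < ulp(1e16)/2, so this is a no-op

# Small numbers first: they accumulate before meeting the big number.
small_first = 0.0
for _ in range(1_000_000):
    small_first += 0.5          # sums exactly to 500,000.0
small_first += 1e16

print(big_first, small_first)   # 1e+16 vs 1.00000000000005e+16
```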

shoosh, answered Sep 27 '22


I had to take a numerics class back when I was an undergrad, and it was thoroughly painful. Anyhow, IEEE 754 is the floating point standard typically implemented by modern CPUs. It's useful to understand the basics of it, as this gives you a lot of intuition about what not to do. The simplified explanation of it is that computers store floating point numbers in something like base-2 scientific notation with a fixed number of digits (bits) for the exponent and for the mantissa. This means that the larger the absolute value of a number, the less precisely it can be represented. For 32-bit floats in IEEE 754, half of the possible bit patterns represent between -1 and 1, even though numbers up to about 10^38 are representable with a 32-bit float. For values larger than 2^24 (approximately 16.7 million) a 32-bit float cannot represent all integers exactly.
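You can observe the 2^24 limit from pure Python by round-tripping a value through a 32-bit float with the struct module (a sketch; the helper name to_float32 is just for illustration):

```python
import struct

def to_float32(x):
    """Round a Python float (a double) to the nearest 32-bit float and back."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

print(to_float32(16_777_216.0))  # 2^24     -> 16777216.0, representable
print(to_float32(16_777_217.0))  # 2^24 + 1 -> 16777216.0, the +1 is rounded away
print(to_float32(16_777_218.0))  # 2^24 + 2 -> 16777218.0, representable again
```

Above 2^24 the spacing between consecutive 32-bit floats is 2, so every other integer simply does not exist in that format.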

What this means for you is that you generally want to avoid the following:

  1. Having intermediate values be large when the final answer is expected to be small.
  2. Adding/subtracting small numbers to/from large numbers. For example, if you wrote something like:

    for(float index = 17000000; index < 17000001; index++) {}

This loop would never terminate because 17,000,000 + 1 rounds back to 17,000,000. If you had something like:

float foo = 10000000.0f - 10000000.0001f;

The value of foo would be 0, not -0.0001: a 32-bit float cannot represent 10000000.0001, so it rounds to 10000000 before the subtraction even happens.

dsimcha, answered Sep 27 '22


My question is what are some other common situations I should be looking for, and what are considered 'good' methods of approaching them?

There are several ways you can have severe or even catastrophic loss of precision.

The most important reason is that floating-point numbers have a limited number of digits; e.g. doubles have 53 bits. That means if you have "useless" digits which are not part of the solution but must be stored, you lose precision.

For example (using decimal digits for demonstration):

  2.598765000000000000000000000100
- 2.598765000000000000000000000099

The interesting part is the 100 - 99 = 1 answer. Since 2.598765 is equal in both cases, it does not change the result, yet the computer does not know those digits are useless: it is forced to store them, together with the long run of zeros after them, squandering almost all of the available precision. Unfortunately there is no way to circumvent this for differences in general, but there are special cases, e.g. exp(x) - 1, a function which occurs very often in physics.

The exp function near 0 is almost linear, but it forces a leading digit of 1. So with 12 significant digits, exp(0.001) - 1 = 1.00100050017 - 1 = 1.00050017e-3: the matching leading digits cancel, and the result retains only 9 significant digits.

If we instead define a function expm1() directly from the Taylor series:

exp(x) - 1 = (1 + x + x^2/2 + x^3/6 + ...) - 1

           = x + x^2/2 + x^3/6 + ... =: expm1(x)

expm1(0.001) = 1.00050016667e-3

Much better.
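Most math libraries ship exactly this function; in Python it is math.expm1, and for tiny x the difference from the naive form is dramatic (a sketch):

```python
import math

x = 1e-12
naive = math.exp(x) - 1.0   # exp(x) is stored as roughly 1.000000000001; the leading 1 eats the digits
good = math.expm1(x)        # evaluates x + x^2/2 + ... directly, keeping full precision

print(naive)                # wrong after only a few significant digits
print(good)                 # essentially 1e-12 + 5e-25, correct to full precision
```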

The second problem is functions with a very steep slope, like the tangent near pi/2. For instance, tan(11) (11 is close to 7*pi/2) has a slope of about 50,000, which means that any small deviation caused by earlier rounding errors will be amplified by a factor of 50,000! Or you have singularities, e.g. when the result approaches 0/0, which means it can take any value.
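You can watch that amplification directly: perturb the argument of tan near 11 by a tiny amount and compare (a sketch):

```python
import math

x = 11.0      # close to 7*pi/2, where tan is very steep
eps = 1e-8    # a tiny perturbation, standing in for an earlier rounding error

shift = math.tan(x + eps) - math.tan(x)
print(shift / eps)   # roughly 50,000: the perturbation is amplified by the slope
```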

In both cases you create a substitute function that simplifies the original one. It is of little use to merely list the different solution approaches, because without training you will simply not "see" the problem in the first place.

A very good book to learn from and train with: Forman S. Acton, Real Computing Made Real.

Thorsten S., answered Sep 27 '22