
Computing floating point accuracy (K&R 2-1)

I found Stevens Computing Services – K & R Exercise 2-1 to be a very thorough answer to K&R exercise 2-1. This slice of the full code computes the maximum value of the float type in the C programming language.

Unfortunately my theoretical understanding of float values is quite limited. I know they are composed of a significand (mantissa) and an exponent that scales it by a power of 2.

#include <stdio.h>
#include <float.h>

int main(void)
{
    float flt_a, flt_b, flt_c, flt_m;

    /* FLOAT */
    printf("\nFLOAT MAX\n");
    printf("<float.h> %E  ", FLT_MAX);

    flt_a = 2.0;
    flt_b = 1.0;
    while (flt_a != flt_b) {
        flt_m = flt_b;           /* MAX POWER OF 2 IN MANTISSA */
        flt_a = flt_b = flt_b * 2.0;
        flt_a = flt_a + 1.0;
    }
    flt_m = flt_m + (flt_m - 1); /* MAX VALUE OF MANTISSA */

    flt_a = flt_b = flt_c = flt_m;
    while (flt_b == flt_c) {
        flt_c = flt_a;
        flt_a = flt_a * 2.0;
        flt_b = flt_a / 2.0;
    }
    printf("COMPUTED %E\n", flt_c);
    return 0;
}

I understand that the latter part basically checks, with a three-variable algorithm, to which power of 2 the significand can be scaled. What about the first part?

I can see that a progression of multiples of 2 should eventually determine the value of the significand, but I tried to trace a few small numbers to check how it should work and it failed to find the right values...


What are the concepts this program is based on, and does it get more precise as longer, non-integer numbers have to be found?

asked Feb 09 '15 by maja


1 Answer

The first loop determines the number of bits contributing to the significand by finding the least power of 2 such that adding 1 to it (using floating-point arithmetic) fails to change its value. If that's the nth power of two, then the significand uses n bits, because with n bits you can express all the integers from 0 through 2^n - 1, but not 2^n + 1. The floating-point representation of 2^n must therefore have an exponent large enough that the (binary) units digit is no longer significant.

By that same token, having found the first power of 2 whose float representation has worse than unit precision, the maximum float value that does have unit precision is one less. That value is recorded in variable flt_m.

The second loop then tests for the maximum exponent by starting with the maximum unit-precision value, and repeatedly doubling it (thereby increasing the exponent by 1) until it finds that the result cannot be converted back by halving it. The maximum float is the value before that final doubling.

Do note, by the way, that all the above supposes a base-2 floating-point representation. You are unlikely to run into anything different, but C does not actually require any specific representation.

With respect to the second part of your question,

does it get more precise as longer, non-integer numbers have to be found?

the program takes care to avoid losing precision. It does assume a binary floating-point representation such as you described, but it will work correctly regardless of the number of bits in the significand or exponent of such a representation. No non-integers are involved, but the program already deals with numbers that have worse than unit precision, and with numbers larger than can be represented with type int.

answered Oct 11 '22 by John Bollinger