Working with decimal point numbers

I wrote a program in Java using floating point, but the output is not accurate, whereas the same program written in C appears to produce an accurate result.

Code Snippet:

public class One {

    public static void main(String... args) {
        double i, num = 0.0;
        for (i = 0; i < 10; i++) {
            num = num + 0.1;
        }
        System.out.println(num);
    }
}

output: 0.9999999999999999

The program in C goes like this

#include <stdio.h>

int main(void)
{
    double i, num = 0.0;
    for (i = 0; i < 10; i++) {
        num = num + 0.1;
    }
    printf("%f\n", num);
    return 0;
}

output: 1.000000

What am I doing wrong?

asked Feb 14 '23 by user3086472

1 Answer

C is hiding the problem, not removing it

The idea that the C program is more accurate is a misunderstanding of what is happening. Both programs compute the same imprecise value, but by default C's %f rounds the output to six decimal places, hiding the tiny error. If you actually used the value in a calculation (for example, testing num == 1.0), you would find that both are inaccurate.
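To see that both languages hold the same inexact value, you can print it both ways from Java: println shows the shortest decimal string that uniquely identifies the double, while printf("%f") rounds to six decimal places exactly as C's %f does. A minimal sketch (the class name is just a placeholder):

```java
public class RoundingDemo {
    public static void main(String[] args) {
        double num = 0.0;
        for (int i = 0; i < 10; i++) {
            num = num + 0.1;   // accumulates a tiny representation error each step
        }
        System.out.println(num);          // prints 0.9999999999999999
        System.out.printf("%f%n", num);   // prints 1.000000, rounded like C's %f
        System.out.println(num == 1.0);   // prints false: the error is still there
    }
}
```

The same rounding that made the C output look exact is reproduced by the %f format, showing the "accuracy" was only in the display.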

Usually it doesn't matter for well-written programs

In general, well-written programs can cope with this tiny error without difficulty. For example, your program can be rewritten to recompute the value from scratch each iteration:

double num = 0.0;
for (int i = 1; i <= 10; i++) {
    num = 0.1 * i;   // recomputed each time, so errors don't accumulate
}
System.out.println(num);   // prints 1.0

This way the error does not grow. Additionally, you should never compare doubles with ==, because the tiny inaccuracy can make such comparisons fail.
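The usual alternative to == is to test whether the two values are within a small tolerance of each other. A sketch, where the tolerance 1e-9 is an arbitrary choice that should be picked to suit your application's scale:

```java
public class CompareDoubles {
    public static void main(String[] args) {
        double num = 0.0;
        for (int i = 0; i < 10; i++) {
            num = num + 0.1;
        }
        // Direct comparison fails because of the accumulated error
        System.out.println(num == 1.0);                     // prints false

        // Compare within a tolerance (epsilon) instead
        final double EPSILON = 1e-9;
        System.out.println(Math.abs(num - 1.0) < EPSILON);  // prints true
    }
}
```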

In the very occasional cases where this tiny error is a problem (currency calculations being the most common), BigDecimal can be used.
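Your loop can be redone with java.math.BigDecimal; the key detail is constructing it from the String "0.1" rather than the double 0.1, so the decimal value is stored exactly. A sketch (class and variable names are illustrative):

```java
import java.math.BigDecimal;

public class DecimalSum {
    public static void main(String[] args) {
        // new BigDecimal("0.1") holds exactly one tenth;
        // new BigDecimal(0.1) would inherit the double's binary error
        BigDecimal tenth = new BigDecimal("0.1");
        BigDecimal num = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            num = num.add(tenth);
        }
        System.out.println(num);                               // prints 1.0
        System.out.println(num.compareTo(BigDecimal.ONE) == 0); // prints true
    }
}
```

Note that equality of BigDecimal values is tested with compareTo rather than equals, since equals also compares the scale (1.0 vs 1.00).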

This isn't a problem with floating point numbers as such, but with the conversion from base 10 to base 2.

There are fractions in base 10 that cannot be expressed exactly, for example 1/3. Similarly, there are fractions that cannot be expressed exactly in binary, for example 1/10. It is from this perspective that you should look at the problem.

The problem in this case is that when you wrote "0.1", a base 10 number, the computer had to convert it to a binary number: 0.1 = (binary) 0.00011001100110011001100110011001..., repeating forever. Because it couldn't represent that exactly in the space it had, it ended up stored as a truncated value such as (binary) 0.0001100110011001. A binary-friendly number (such as 1/2) would be completely accurate until the double ran out of precision digits (at which point even binary-friendly numbers could not be exactly represented).
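You can actually inspect the exact binary value a double holds by passing it to the BigDecimal(double) constructor, which converts the stored bits to their full decimal expansion. A sketch illustrating the contrast between 0.1 and the binary-friendly 0.5 (the class name is a placeholder):

```java
import java.math.BigDecimal;

public class ExactValue {
    public static void main(String[] args) {
        // The double literal 0.1 is really a nearby binary fraction;
        // its exact decimal expansion begins 0.1000000000000000055511...
        System.out.println(new BigDecimal(0.1));

        // 0.5 = 1/2 is exactly representable in binary
        System.out.println(new BigDecimal(0.5));  // prints 0.5
    }
}
```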


answered Feb 17 '23 by Richard Tingle