Precision of Java math functions

Tags: java

I was writing a floating-point-intensive algorithm and couldn't get the results I expected (I am actually trying to migrate my old C program to Java). I'm still a newbie in Java, two months old! ;) So I ran a quick test on the math functions and found this:

System.out.printf("Math.PI^1 = %.22f MAth.PI^10 = %.22f \n", 
    Math.pow(Math.PI, 1.0), Math.pow(Math.PI, 10.0));

And the output is as follows:

Math.PI^1 = 3.1415926535897930000000 Math.PI^10 = 93648.0474760829800000000000 

For reference, a calculator gives pi^10 as follows:

93,648.047476083020973716690184919 

That is what I expected to get. A minor error is acceptable, since the C math function (Cygwin environment) gives the following:

printf("M_PI^1 = %.22f M_PI^10 = %.22f \n", pow(M_PI,1.0), pow(M_PI, 10.0));

$ ./a.exe
M_PI^1 = 3.1415926535897931159980 M_PI^10 = 93648.0474760829820297658443

Have I missed something critical, or is there some Java configuration I didn't specify? Yes, I have searched and found the StrictMath class and the strictfp keyword, but they still give the same outcome in Java.
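For reference, here is the kind of check I used to confirm that Math.pow and StrictMath.pow agree bit for bit for this input (a minimal sketch; the class name is just for illustration):

public class PowCheck {
    public static void main(String[] args) {
        double a = Math.pow(Math.PI, 10.0);
        double b = StrictMath.pow(Math.PI, 10.0);
        // Prints true when both results have identical bits; the spec does not
        // guarantee this, since Math.pow may differ from StrictMath.pow by up to 1 ulp.
        System.out.println(Double.doubleToLongBits(a) == Double.doubleToLongBits(b));
    }
}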

asked Aug 25 '15 by Lamina R. Schwank

People also ask

What is precision in Java?

The precision is the number of digits in the unscaled value. The precision of a zero value is 1.
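For example, a small sketch using java.math.BigDecimal (the class name is illustrative):

import java.math.BigDecimal;

public class PrecisionDemo {
    public static void main(String[] args) {
        // precision() counts the digits of the unscaled value, ignoring the decimal point
        System.out.println(new BigDecimal("123.45").precision()); // 5 (unscaled value 12345)
        System.out.println(new BigDecimal("0.0042").precision()); // 2 (unscaled value 42)
        System.out.println(BigDecimal.ZERO.precision());          // 1 (precision of zero is 1)
    }
}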

How do I set precision in BigDecimal?

To change a BigDecimal's precision, use BigDecimal.round(new MathContext(precision, roundingMode)). To change a BigDecimal's scale, use BigDecimal.setScale(scale, roundingMode).
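A minimal sketch of both calls (the input value and class name are illustrative):

import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class RoundDemo {
    public static void main(String[] args) {
        BigDecimal x = new BigDecimal("93648.047476083020973716690185");

        // round() limits the total number of significant digits (the precision)
        System.out.println(x.round(new MathContext(10, RoundingMode.HALF_UP))); // 93648.04748

        // setScale() fixes the number of digits after the decimal point (the scale)
        System.out.println(x.setScale(4, RoundingMode.HALF_UP)); // 93648.0475
    }
}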

What is the range of BigDecimal in Java?

In Java, a BigDecimal consists of an arbitrary-precision integer unscaled value and a 32-bit integer scale. If the scale is positive or zero, it is the number of digits to the right of the decimal point. If it is negative, the unscaled value of the number is multiplied by ten to the power of the negation of the scale (10^(-scale)).
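A short sketch showing the two parts (the class name is illustrative):

import java.math.BigDecimal;
import java.math.BigInteger;

public class PartsDemo {
    public static void main(String[] args) {
        BigDecimal x = new BigDecimal("123.4500");
        System.out.println(x.unscaledValue()); // 1234500 (arbitrary-precision integer)
        System.out.println(x.scale());         // 4 (digits to the right of the decimal point)

        // A negative scale multiplies the unscaled value by 10^(-scale)
        BigDecimal y = new BigDecimal(BigInteger.valueOf(42), -3);
        System.out.println(y); // 4.2E+4, i.e. 42 * 10^3 = 42000
    }
}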


1 Answer

float has about 7.2 significant decimal digits (24 mantissa bits × log10(2) ≈ 7.2).
double has about 15.9 significant decimal digits (53 mantissa bits × log10(2) ≈ 15.95).

Your examples agree in the first 16 significant decimal digits. This means that both pieces of code are producing exactly the same binary result, assuming that your C compiler uses the same IEEE 754 standard for 64-bit floats as Java, which is likely.
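One way to verify this is to print the exact bit pattern instead of a decimal conversion (a sketch; the class name is illustrative):

public class BitsDemo {
    public static void main(String[] args) {
        double p10 = Math.pow(Math.PI, 10.0);
        // Hexadecimal floating-point form shows the exact 64-bit value,
        // with no binary-to-decimal conversion involved
        System.out.println(Double.toHexString(p10));
        // Compare against C's printf("%a\n", pow(M_PI, 10.0));
        // identical output means identical bits
    }
}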

The difference you see after those 16 digits does not come from the way the math operation is performed, nor from rounding error, but from the way the different print functions convert the binary double to decimal text.
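To see the exact decimal expansion of the binary value, here is a sketch using the BigDecimal(double) constructor, which performs that conversion exactly (the class name is illustrative):

import java.math.BigDecimal;

public class ExactDemo {
    public static void main(String[] args) {
        double p10 = Math.pow(Math.PI, 10.0);
        // new BigDecimal(double) yields the exact decimal value of the binary double
        System.out.println(new BigDecimal(p10));
        // %.22f prints the same value, but the formatter decides how the trailing
        // digits are rendered, which is why the Java output above ends in zeros
        // while the C output does not
        System.out.printf("%.22f%n", p10);
    }
}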

answered Sep 20 '22 by Anonymous Coward