I don't understand why float values are different from double values. The example below shows that float produces a different result than double for the same operation:
public class Test {
    public static void main(String[] args) {
        double a = 99999.8d;
        double b = 99999.65d;
        System.out.println(a + b);

        float a2 = 99999.8f;
        float b2 = 99999.65f;
        System.out.println(a2 + b2);
    }
}
Output:
199999.45
199999.44
Can you explain what makes this difference between float and double?
A float is a 32-bit IEEE 754 floating-point number.
A double is a 64-bit IEEE 754 floating-point number.
So it is just a matter of precision: neither of the fractional parts .8 nor .65 has a terminating binary representation, so both values are rounded when stored. A double carries more bits of precision than a float, so its rounding error is slightly smaller.
http://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html
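You can see the rounding directly with java.math.BigDecimal, whose BigDecimal(double) constructor exposes the exact binary value rather than the rounded string that println shows. A small sketch (the class name ExactValues is just for illustration):

```java
import java.math.BigDecimal;

public class ExactValues {
    public static void main(String[] args) {
        // Exact value of the double closest to 99999.8
        System.out.println(new BigDecimal(99999.8d));
        // Exact value of the float closest to 99999.8
        // (float -> double widening is exact, so nothing is lost here)
        System.out.println(new BigDecimal(99999.8f));
    }
}
```

Neither printed value is exactly 99999.8, and the float's stored value is further away than the double's, which is why the two sums differ in the last digit.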
Can you explain what makes this difference between float and double?
Sure. Imagine you had two decimal types, one with five significant digits, and one with ten.
What value would you use to represent pi for each of those types? In both cases you'd be trying to get as close to a number which you couldn't represent exactly - but you wouldn't end up with the same value, would you?
It's the same for float and double - both are binary floating-point types, but double has more precision than float.
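To make the pi analogy concrete, here is a short sketch comparing Math.PI stored as a double and as a float (the class name PiPrecision is just for illustration):

```java
public class PiPrecision {
    public static void main(String[] args) {
        double piDouble = Math.PI;         // roughly 15-16 significant decimal digits
        float piFloat = (float) Math.PI;   // roughly 7 significant decimal digits

        System.out.println(piDouble);  // 3.141592653589793
        System.out.println(piFloat);   // 3.1415927
    }
}
```

Both are approximations of pi - the narrower type simply has to give up sooner, just like the five-digit decimal type in the analogy above.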