I am building a web app in Java that does math and shows the steps to the user. When doing basic arithmetic with decimals, I often get messy, inaccurate outputs.
Here is my problem:
double a = 0.15;
double b = 0.01;
System.out.println(a - b);
// outputs 0.13999999999999999
float a = 0.15f;
float b = 0.01f;
System.out.println(a - b);
// outputs 0.14
float a = 0.16f;
float b = 0.01f;
System.out.println(a - b);
// outputs 0.14999999
double a = 0.16;
double b = 0.01;
System.out.println(a - b);
// outputs 0.15
Neither float nor double is reliable for complete accuracy. Is there a numeric class that is more precise, or should I just round the values off?
You can use BigDecimal for this. It's ugly, but it works:
BigDecimal a = new BigDecimal("0.15");
BigDecimal b = new BigDecimal("0.01");
System.out.println(a.subtract(b));
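Here is the snippet above as a minimal, runnable program (the class name is just for illustration); note that you need to import `java.math.BigDecimal`:

```java
import java.math.BigDecimal;

public class ExactSubtraction {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("0.15");
        BigDecimal b = new BigDecimal("0.01");
        // subtract() keeps the larger scale of the two operands (2 here),
        // so the result is exactly 0.14 with no binary rounding error
        System.out.println(a.subtract(b)); // prints 0.14
    }
}
```

Arithmetic on BigDecimal is done through method calls (`subtract`, `add`, `multiply`, ...) rather than operators, which is what makes it verbose.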
Be sure to construct them either with a String parameter, or with the valueOf
method, like this:
BigDecimal x = new BigDecimal("0.15"); // This is ok
BigDecimal x = BigDecimal.valueOf(0.15); // This is also ok
And not with a double parameter, like this:
BigDecimal x = new BigDecimal(0.15); // DON'T DO THIS
If you pass in a double, you carry the double's binary inaccuracy straight into the new BigDecimal instance: `new BigDecimal(0.15)` captures the exact binary value of the double, which is close to, but not exactly, 0.15. If you pass in a String, BigDecimal will know™ and do the right thing™, preserving exactly the decimal digits you wrote.
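You can see the difference directly by printing the three variants side by side (again, the class name is just for illustration):

```java
import java.math.BigDecimal;

public class ConstructorDemo {
    public static void main(String[] args) {
        // String constructor: preserves the decimal digits exactly
        System.out.println(new BigDecimal("0.15"));   // prints 0.15

        // valueOf: goes through Double.toString, which renders the
        // shortest decimal that round-trips, so it also prints 0.15
        System.out.println(BigDecimal.valueOf(0.15)); // prints 0.15

        // double constructor: captures the exact binary value of the
        // double, printing a long string of digits that is not 0.15
        System.out.println(new BigDecimal(0.15));
    }
}
```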