I've noticed substantial pain over this constructor (even here on Stack Overflow). People use it even though the documentation clearly states:
> The results of this constructor can be somewhat unpredictable.

(Source: http://java.sun.com/javase/6/docs/api/java/math/BigDecimal.html#BigDecimal(double))
I've even seen JSR-13 approved with a recommendation stating:
> Existing specifications that might be deprecated: We propose deprecating the `BigDecimal(double)` constructor, which currently gives results that are different to the `Double.toString()` method.
Despite all this, the constructor has not yet been deprecated.
I'd love to hear any views on this.
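To make the mismatch mentioned in the JSR-13 quote concrete, here is a minimal sketch (the class name is just for illustration):

```java
import java.math.BigDecimal;

public class ConstructorMismatchDemo {
    public static void main(String[] args) {
        // Double.toString(0.1) produces the shortest decimal string that
        // still maps back to the same double value.
        System.out.println(Double.toString(0.1));   // 0.1

        // new BigDecimal(0.1) instead preserves the exact binary value of
        // the double, which is where the "unpredictable" results come from.
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625
    }
}
```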
A `BigDecimal` is an exact way of representing numbers. A `double` has only limited precision. Working with doubles of different magnitudes (say `d1 = 1000.0` and `d2 = 0.001`) loses precision in the smaller value when summing, and if the difference in magnitude is large enough, the 0.001 can be dropped altogether. With `BigDecimal` this would not happen.
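As a rough sketch of that loss, with the magnitude gap made large enough that the smaller addend disappears entirely in `double` arithmetic (the exact values here are only illustrative):

```java
import java.math.BigDecimal;

public class MagnitudeDemo {
    public static void main(String[] args) {
        // With a big enough gap in magnitude, the smaller addend is dropped
        // entirely: 1.0e17 has no significand bits left for 0.001.
        double d1 = 1.0e17;
        double d2 = 0.001;
        System.out.println(d1 + d2 == d1);   // true: the 0.001 vanished

        // BigDecimal keeps every digit, regardless of magnitude.
        BigDecimal b1 = new BigDecimal("100000000000000000");
        BigDecimal b2 = new BigDecimal("0.001");
        System.out.println(b1.add(b2));      // 100000000000000000.001
    }
}
```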
The `BigDecimal` class provides operations for arithmetic, scale manipulation, rounding, comparison, format conversion and hashing. It can handle very large and very small decimal numbers exactly, at the cost of some performance.
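A short sketch of a few of those operations, using arbitrary values:

```java
import java.math.BigDecimal;

public class OperationsDemo {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("2.50");
        BigDecimal b = new BigDecimal("2.5");

        // Arithmetic is exact; the result's scale follows the documented
        // rule for each operation (for multiply, the scales are added).
        System.out.println(a.multiply(b));   // 6.250

        // equals() compares value and scale; compareTo() compares value only.
        System.out.println(a.equals(b));     // false (scales 2 and 1 differ)
        System.out.println(a.compareTo(b));  // 0 (numerically equal)
    }
}
```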
If you use `BigDecimal newValue = myBigDecimal.setScale(2, RoundingMode.DOWN);`, note that `DOWN` always truncates toward zero; rounding to the nearest value with ties going down is `HALF_DOWN`, not `DOWN`.
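To make the distinction concrete, a small sketch contrasting the two modes (with `myBigDecimal` replaced by an arbitrary literal):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundingDemo {
    public static void main(String[] args) {
        BigDecimal value = new BigDecimal("2.346");

        // DOWN always truncates toward zero, whatever the dropped digits are.
        System.out.println(value.setScale(2, RoundingMode.DOWN));       // 2.34

        // HALF_DOWN rounds to the nearest neighbour, with ties toward zero.
        System.out.println(value.setScale(2, RoundingMode.HALF_DOWN));  // 2.35
    }
}
```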
Considering the behavior of `BigDecimal(double)` is correct, in my opinion, I'm not too sure it really would be such a problem.

I wouldn't exactly agree with the wording of the documentation in the `BigDecimal(double)` constructor:
> The results of this constructor can be somewhat **unpredictable**. One might assume that writing `new BigDecimal(0.1)` in Java creates a `BigDecimal` which is exactly equal to `0.1` (an unscaled value of `1`, with a scale of `1`), but it is actually equal to `0.1000000000000000055511151231257827021181583404541015625`.

(Emphasis added.)
Rather than saying *unpredictable*, I think the wording should be *unexpected*, and even so, this would be unexpected behavior for those who are not aware of the limitations of representing decimal numbers with floating point values.
As long as one keeps in mind that floating point values cannot represent all decimal values with precision, the value returned by using `BigDecimal(0.1)` being `0.1000000000000000055511151231257827021181583404541015625` actually makes sense.
If the `BigDecimal` object instantiated by the `BigDecimal(double)` constructor is consistent, then I would argue that the result is predictable.
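A brief sketch of that consistency, together with `BigDecimal.valueOf(double)`, which is documented to go through `Double.toString(double)` and therefore returns the value most people expect:

```java
import java.math.BigDecimal;

public class PredictabilityDemo {
    public static void main(String[] args) {
        // The constructor always produces exactly the same value for the
        // same double: consistent, and in that sense predictable.
        System.out.println(new BigDecimal(0.1).equals(new BigDecimal(0.1)));  // true

        // valueOf(double) uses Double.toString(double), so it yields the
        // decimal value most callers expect.
        System.out.println(BigDecimal.valueOf(0.1));   // 0.1
        System.out.println(new BigDecimal("0.1"));     // 0.1
    }
}
```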
My guess as to why the `BigDecimal(double)` constructor is not being deprecated is that the behavior can be considered correct, and as long as one knows how floating point representations work, the behavior of the constructor is not too surprising.
Deprecation is deprecated. Parts of APIs are only marked deprecated in exceptional cases.
So, run FindBugs as part of your build process. FindBugs has a detector plug-in API and is also open source (LGPL, IIRC).