I'm working with money, so I need my results to be accurate, but I only need a precision of 2 decimal places (cents). Is BigDecimal needed to guarantee that the results of multiplication/division are accurate?
The main disadvantage of BigDecimal is that it is slower than double. So in a system where low latency matters more than the exactness of the decimal part, double may be acceptable. But in financial systems, or any other system where every digit after the decimal point matters, BigDecimal should be chosen over double/float.
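For example, here is a minimal sketch of the kind of drift being traded away (the class name and the ten-payment loop are just an illustration):

import java.math.BigDecimal;

public class DoubleVsBigDecimal {
    public static void main(String[] args) {
        // Summing ten payments of 0.10 with double: binary floating point
        // cannot represent 0.10 exactly, so the error accumulates.
        double d = 0.0;
        for (int i = 0; i < 10; i++) {
            d += 0.10;
        }
        System.out.println(d);                        // 0.9999999999999999

        // The same sum with BigDecimal stays exact and keeps its scale of 2.
        BigDecimal bd = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            bd = bd.add(new BigDecimal("0.10"));      // construct from String, not from a double
        }
        System.out.println(bd);                       // 1.00
    }
}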
If you need to use division in your arithmetic, BigDecimal requires a bit more care than double: an exact quotient may have a non-terminating decimal expansion (for example 1/3), so you must supply a scale and a rounding mode or divide() will throw ArithmeticException. That is not a reason to fall back to double; it just makes the rounding explicit.
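A minimal sketch of explicit rounding on division, assuming cents (scale 2) and half-up rounding:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class BigDecimalDivision {
    public static void main(String[] args) {
        BigDecimal total = new BigDecimal("100.00");
        BigDecimal people = new BigDecimal("3");

        // total.divide(people) would throw ArithmeticException here,
        // because 100 / 3 has a non-terminating decimal expansion.

        // Supplying a scale (2 = cents) and a RoundingMode makes the result well defined.
        BigDecimal share = total.divide(people, 2, RoundingMode.HALF_UP);
        System.out.println(share);                    // 33.33
    }
}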
BigDecimal precision is de facto unlimited, since it is backed by an int array of arbitrary length. Operations with double are much faster than with BigDecimal, but double should never be used for exact values such as currency.
The BigDecimal class provides arithmetic, scale handling, rounding, comparison, format conversion and hashing operations on arbitrary-precision decimal numbers. It can handle very large and very small values exactly, at the cost of some extra time and memory.
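A short sketch of a few of those operations; the prices and the 7.25% tax rate are made-up values for illustration:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class BigDecimalOperations {
    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("19.99");
        BigDecimal quantity = new BigDecimal("3");

        // Arithmetic: multiplication is exact; the result's scale is the sum of the scales (2 + 0).
        BigDecimal subtotal = price.multiply(quantity);                       // 59.97

        // Scale handling and rounding: apply 7.25% tax, then round back to cents.
        BigDecimal withTax = subtotal.multiply(new BigDecimal("1.0725"))
                                     .setScale(2, RoundingMode.HALF_EVEN);    // 64.32

        // Comparison: compareTo ignores scale, equals does not.
        System.out.println(new BigDecimal("2.0").compareTo(new BigDecimal("2.00")) == 0); // true
        System.out.println(new BigDecimal("2.0").equals(new BigDecimal("2.00")));         // false

        // Format conversion: plain string output without scientific notation.
        System.out.println(withTax.toPlainString());                          // 64.32
    }
}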
BigDecimal is a very appropriate type for decimal fraction arithmetic with a known number of digits after the decimal point. You could use an integer type and keep track of the multiplier yourself, but that means doing work in your code that BigDecimal would do for you.
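For instance, a rough comparison of the two approaches (the 15% discount and the class name are hypothetical):

import java.math.BigDecimal;
import java.math.RoundingMode;

public class ScaleTracking {
    public static void main(String[] args) {
        // Manual approach: keep everything in cents as a long and remember the factor of 100 yourself.
        long priceCents = 1999;                                  // $19.99
        long discountedCents = (priceCents * 85 + 50) / 100;     // 15% off, half-up rounding done by hand

        // BigDecimal approach: the scale (2 digits after the point) travels with the value.
        BigDecimal price = new BigDecimal("19.99");
        BigDecimal discounted = price.multiply(new BigDecimal("0.85"))
                                     .setScale(2, RoundingMode.HALF_UP);

        System.out.println(discountedCents);                     // 1699
        System.out.println(discounted);                          // 16.99
    }
}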
As well as managing the digits after the decimal point, BigDecimal will also expand the number of stored digits as needed; many business and government financial calculations involve sums too large to store as cents in an int.
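As a rough illustration (the $1.5 trillion budget and the 4.25% rate are invented for the example):

import java.math.BigDecimal;

public class LargeAmounts {
    public static void main(String[] args) {
        // Integer.MAX_VALUE cents is only about $21.4 million.
        System.out.println(Integer.MAX_VALUE);                   // 2147483647

        // A figure like $1.5 trillion cannot be stored as int cents,
        // but BigDecimal grows its internal representation as needed.
        BigDecimal budget = new BigDecimal("1500000000000.00");
        BigDecimal interest = budget.multiply(new BigDecimal("0.0425"));
        System.out.println(interest);                            // 63750000000.000000
    }
}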
I would consider avoiding it only if you need to store a very large array of amounts of money, and are short of memory.