I was debating whether to use BigDecimal and BigInteger together, or only BigDecimal, to make my life easier and avoid converting back and forth. Is there a downside to using only BigDecimal in terms of resources?
How about using only primitive data types together with BigInteger, or only BigInteger, again to avoid converting back and forth?
A BigDecimal is an exact way of representing numbers. A double has limited precision. Working with doubles of very different magnitudes (say d1=1000.0 and d2=0.001) can result in the 0.001 being dropped altogether when summing, because the difference in magnitude is so large. With BigDecimal this would not happen.
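A minimal sketch of that effect (using a bigger gap, 1e17 vs 1.0, where the loss actually shows up in a double; class and variable names are just for illustration):

```java
import java.math.BigDecimal;

public class MagnitudeDemo {
    public static void main(String[] args) {
        double d1 = 1e17;
        double d2 = 1.0;
        // At this magnitude a double's 53-bit mantissa cannot represent the +1,
        // so the addition silently loses d2 entirely.
        System.out.println(d1 + d2 == d1);  // true

        BigDecimal b1 = new BigDecimal("100000000000000000");
        BigDecimal b2 = BigDecimal.ONE;
        // BigDecimal keeps every digit, regardless of magnitude.
        System.out.println(b1.add(b2));     // 100000000000000001
    }
}
```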
According to my own benchmarking, for my specific use case it's 10-20x slower than double (much better than 1000x), mostly for addition and multiplication.
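For a rough feel of that ratio, here is a crude timing loop. This is not a rigorous benchmark (a harness like JMH would be the proper tool), and the exact ratio depends on your JVM, hardware, and operand sizes:

```java
import java.math.BigDecimal;

public class AddBench {
    public static void main(String[] args) {
        int n = 10_000_000;

        long t0 = System.nanoTime();
        double d = 0.0;
        for (int i = 0; i < n; i++) d += 0.1;
        long doubleNs = System.nanoTime() - t0;

        t0 = System.nanoTime();
        BigDecimal b = BigDecimal.ZERO;
        BigDecimal inc = new BigDecimal("0.1");
        for (int i = 0; i < n; i++) b = b.add(inc);  // BigDecimal is immutable
        long bigNs = System.nanoTime() - t0;

        System.out.printf("double: %d ms, BigDecimal: %d ms (%.1fx)%n",
                doubleNs / 1_000_000, bigNs / 1_000_000,
                (double) bigNs / doubleNs);
        // Also shows the rounding drift that accumulates in the double sum:
        System.out.println(d + " vs " + b);
    }
}
```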
Explanation: BigDecimal provides more precision than double; double is faster than BigDecimal.
Division needs extra care with BigDecimal: if the quotient has a non-terminating decimal expansion (such as 1/3), divide() throws an ArithmeticException unless you supply a scale and a rounding mode, whereas double just rounds silently.
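A small sketch of both behaviours:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DivideDemo {
    public static void main(String[] args) {
        BigDecimal one = BigDecimal.ONE;
        BigDecimal three = new BigDecimal("3");

        // one.divide(three);  // throws ArithmeticException: non-terminating decimal expansion

        // Supplying a scale and rounding mode makes the division well-defined:
        BigDecimal third = one.divide(three, 20, RoundingMode.HALF_UP);
        System.out.println(third);        // 0.33333333333333333333

        System.out.println(1.0 / 3.0);    // 0.3333333333333333 (rounded silently)
    }
}
```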
From: http://www.javamex.com/tutorials/math/BigDecimal_BigInteger_performance.shtml
Note that a BigDecimal is essentially a wrapper around a BigInteger that "remembers where the decimal point is".
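You can see that pairing directly through the standard unscaledValue()/scale() accessors; a quick sketch:

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class UnscaledDemo {
    public static void main(String[] args) {
        BigDecimal d = new BigDecimal("123.45");
        BigInteger unscaled = d.unscaledValue(); // the wrapped BigInteger: 12345
        int scale = d.scale();                   // 2, i.e. 123.45 == 12345 x 10^-2
        System.out.println(unscaled + " x 10^-" + scale);
    }
}
```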
Because of the way we are used to dealing with numbers, as humans, we may have to "think a bit more" when calculating with non-integers by hand compared to integers (e.g. we may have learnt the 7 times table by heart to speed up certain calculations, but probably not the .7 times table). But to a computer implementation, it is essentially no more effort to manipulate non-integers than it is integers, and so on the whole, methods on BigDecimal tend to perform similarly to analogous methods on BigInteger.
I know the title of your question says "Performance of BigDecimal vs. BigInteger and BigDecimal", but one of your questions asks "How about using only primitive data types and BigInteger or only BigInteger...". Well, here is what I recommend:
If you need accuracy and precision and you are working with very large numbers, such as huge financial data, then it is best to use BigDecimal and BigInteger instead of the primitive types.
Now, back to your main question: in terms of performance there is not much difference between the two. If you can represent your data as BigDecimal, just use that, so you don't have to convert to BigInteger; but look at your program and check whether it really makes sense to use BigDecimal everywhere. The main difference is that BigInteger supports arbitrary-precision integers, while BigDecimal supports arbitrary-precision fixed-point numbers.
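When you do need to cross between the two, the conversions are straightforward and cheap; a sketch (the money example is just an illustration):

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class ConvertDemo {
    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("19.99");

        // BigDecimal -> BigInteger: shift the point, then demand an exact integer.
        BigInteger cents = price.movePointRight(2).toBigIntegerExact(); // 1999

        // BigInteger -> BigDecimal: reattach a scale (2 decimal places here).
        BigDecimal back = new BigDecimal(cents, 2); // 19.99

        System.out.println(cents + " cents = " + back);
    }
}
```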
Here are my sources and good luck :)
New York University
University of Colorado