I need to encode a BigDecimal compactly into a ByteBuffer to replace my current (rubbish) encoding scheme (writing the BigDecimal as a UTF-8 encoded String prefixed with a byte denoting the String length).
Given that a BigDecimal is effectively an integer value (in the mathematical sense) and an associated scale, I am planning to write the scale as a single byte followed by a VLQ-encoded integer. This should adequately cover the range of expected values (i.e. a maximum scale of 127).
My question: when encountering large values such as 10,000,000,000 it is clearly optimal to encode this as the value 1 with a scale of -10, rather than encoding the integer 10,000,000,000 with a scale of 0 (which will occupy more bytes). How can I determine the optimal scale for a given BigDecimal? In other words, how can I determine the minimum possible scale I can assign to a BigDecimal without having to perform any rounding?
Please do not reference the term "premature optimisation" in your answers :-)
A BigDecimal consists of an arbitrary precision integer unscaled value and a 32-bit integer scale. If zero or positive, the scale is the number of digits to the right of the decimal point. If negative, the unscaled value of the number is multiplied by ten to the power of the negation of the scale.
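A quick illustration of that unscaled-value/scale pairing (a small self-contained demo; the comments show what these accessors return):

```java
import java.math.BigDecimal;

public class ScaleDemo {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("123.45");
        System.out.println(a.unscaledValue() + " / " + a.scale()); // 12345 / 2

        BigDecimal b = new BigDecimal("1E+10");                    // 10,000,000,000
        System.out.println(b.unscaledValue() + " / " + b.scale()); // 1 / -10
    }
}
```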
By default, BigDecimal numbers have "unlimited" precision; in fact, the maximum unscaled value is 2^Integer.MAX_VALUE, and a BigDecimal can grow to whatever size you need. A double, by contrast, operates in binary, which means it can only precisely represent numbers that can be expressed with a finite number of binary digits; that limits a double to 15 to 17 significant decimal digits of accuracy. For example, 0.375 in binary is exactly 0.011, so it can be represented exactly.
The largest value BigDecimal can represent requires 8 GB of memory.
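To make the binary-representation point concrete, here is a small demo (the values in the comments are what these constructors actually produce):

```java
import java.math.BigDecimal;

public class BinaryFractions {
    public static void main(String[] args) {
        // 0.375 = 0.011 in binary, so the double holds it exactly:
        System.out.println(new BigDecimal(0.375)); // 0.375

        // 0.1 has no finite binary expansion, so the nearest double is converted instead:
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625
    }
}
```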
BigDecimal#stripTrailingZeros seems to do exactly this: it returns a BigDecimal that is numerically equal to the original but with any trailing zeros removed from the unscaled value, which yields the minimum possible scale without any rounding.
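For example (the values in the comments are what stripTrailingZeros actually produces here):

```java
import java.math.BigDecimal;

public class StripDemo {
    public static void main(String[] args) {
        BigDecimal big = new BigDecimal("10000000000");
        System.out.println(big.unscaledValue() + " / " + big.scale()); // 10000000000 / 0

        BigDecimal normalized = big.stripTrailingZeros();
        System.out.println(normalized.unscaledValue() + " / " + normalized.scale()); // 1 / -10

        // Encoding 'normalized' (scale byte + VLQ unscaled value) now takes far fewer bytes.
    }
}
```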