Referring to the documentation of the BigDecimal class:

n, m = a.precs

precs returns the number of significant digits (n) and the maximum number of significant digits (m) of a.
I am puzzled by the following output related to BigDecimal
.
require 'bigdecimal'
BigDecimal.new('1').precs # => [9, 18]
BigDecimal.new(1).precs # => [9, 27]
I cannot figure out why the maximum number of significant digits is smaller when a String is passed than when a Fixnum is passed.
Also will it result in any precision issues?
If you can read C code, you can start at https://github.com/ruby/ruby/blob/trunk/ext/bigdecimal/bigdecimal.c#L2509 - that's the initializer for any BigDecimal object. If you follow that code into the next method, BigDecimal_new, you'll notice that an integer argument goes through a few more steps before the internal big decimal object is allocated and created than a string argument does.
In any case, you shouldn't worry about loss of precision - the significant-digit attributes are more like hints than absolute values. Even the documentation mentions it: "The actual number of significant digits used in computation is usually larger than the specified number."
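As a quick sanity check, a sketch like the following suggests the differing precs values do not change the stored value or arithmetic results. (It uses the BigDecimal() conversion method rather than BigDecimal.new, since .new was deprecated and later removed in recent bigdecimal versions; the equality checks are the point, not the exact precs numbers, which vary by platform and version.)

```ruby
require 'bigdecimal'

a = BigDecimal('1')  # constructed from a String
b = BigDecimal(1)    # constructed from an Integer

# precs may report different maximums for a and b, but the
# values themselves are identical and behave identically.
puts a == b                    # true
puts (a + b) == BigDecimal(2)  # true
puts (a / 3) == (b / 3)        # true
```

So the larger maximum for the integer path is just extra internal headroom, not a sign that the string-constructed value is less precise.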