Is there any reason that the default decimal literal is not BigDecimal in Clojure?

I learned that the Clojure reader interprets a decimal literal with the suffix 'M', like 1.23M, as a BigDecimal. I also know that decimal numbers without the 'M' become Java doubles.
But I think it would be better if a normal decimal number were a BigDecimal, and the host-dependent decimal carried a suffix instead, like 1.23H. Then, when a number is corrupted or truncated because of the precision limit of IEEE doubles, we could easily notice that it is precision-limited. Also, I think the easier-to-write form should be the host-independent one.
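The truncation being described is easy to demonstrate on the host side. A minimal Java sketch (the class name is just for illustration) contrasting a double sum with the same sum done in BigDecimal:

```java
import java.math.BigDecimal;

public class DoubleVsBigDecimal {
    public static void main(String[] args) {
        // IEEE 754 doubles cannot represent 0.1 or 0.2 exactly, so the sum drifts:
        double d = 0.1 + 0.2;
        System.out.println(d);        // 0.30000000000000004
        System.out.println(d == 0.3); // false

        // BigDecimal keeps the decimal digits exactly, at a performance cost:
        BigDecimal b = new BigDecimal("0.1").add(new BigDecimal("0.2"));
        System.out.println(b);                                       // 0.3
        System.out.println(b.compareTo(new BigDecimal("0.3")) == 0); // true
    }
}
```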

Is there any reason that Clojure interprets a literal decimal as a Java double, other than performance? I don't think performance alone is the answer, because Clojure is not C/C++, and another way to declare a host-dependent decimal could be implemented, just like '1.23H'.

asked Dec 22 '15 by burrownn

2 Answers

Once upon a time, for integers, Clojure would auto-promote to larger sizes when needed. This was changed so that overflow exceptions are thrown instead. My sense, from afar, was that:

  1. The powers that be meant for Clojure to be a practical language doing practical things in a practical amount of time. They didn't want performance to blow up because number operations were unexpectedly using arbitrary-precision libraries instead of CPU integer operations. Contrast this with Scheme, which seems to prioritize mathematical niceness over practicality.
  2. People did not like being surprised at run time when interop calls failed because a Java library expected a 32-bit integer instead of an arbitrarily sized integer.

So it was decided that the default would be normal integers (Java longs, I believe) and that arbitrarily large integers would be used only when the programmer called for them, knowingly accepting the performance hit and the interop hit.
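The overflow-throwing default described above has a direct host analogue. A small Java sketch (class name illustrative) contrasting silently wrapping long addition, checked addition via Math.addExact (which is essentially what Clojure's default + does for longs), and arbitrary-precision addition via BigInteger (what the promoting variants use):

```java
import java.math.BigInteger;

public class OverflowDemo {
    public static void main(String[] args) {
        long max = Long.MAX_VALUE;

        // Unchecked long addition silently wraps around:
        System.out.println(max + 1); // -9223372036854775808

        // Checked addition throws on overflow instead of wrapping:
        try {
            Math.addExact(max, 1L);
        } catch (ArithmeticException e) {
            System.out.println("overflow: " + e.getMessage());
        }

        // BigInteger promotes to arbitrary precision, at a cost:
        System.out.println(BigInteger.valueOf(max).add(BigInteger.ONE)); // 9223372036854775808
    }
}
```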

My guess is that similar decisions were made for numbers with decimal points.

answered Nov 12 '22 by Shannon Severance

Performance could be one reason. Perhaps the clojure.core developers could chime in with the actual rationale.

I personally think it is not such a big deal not to have BigDecimal by default, since:

  • there is a literal for that, as you point out: M
  • there are operations like +', *', -' ... (note the quote) that support arbitrary precision.
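In host terms, a 1.23M literal corresponds roughly to constructing a java.math.BigDecimal from the literal's text, while a plain 1.23 goes through an IEEE double first. A small Java sketch of the difference (class name illustrative):

```java
import java.math.BigDecimal;

public class LiteralDemo {
    public static void main(String[] args) {
        // Building from the literal's text keeps the decimal value exact,
        // roughly what a 1.23M literal gives you:
        BigDecimal fromText = new BigDecimal("1.23");
        System.out.println(fromText); // 1.23

        // Building from a double exposes the nearest IEEE 754 value,
        // which is close to, but not equal to, 1.23:
        BigDecimal fromDouble = new BigDecimal(1.23);
        System.out.println(fromDouble.compareTo(fromText) == 0); // false
    }
}
```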
answered Nov 12 '22 by nha