I realized the other day that some Common Lisp implementations have 128-bit "long-floats". As a result, the most positive long-float there is:
8.8080652584198167656 * 10^646456992
while the most positive double-float is 1.7976931348623157 * 10^308, which is already pretty big.
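For comparison, the double-float limit is easy to poke at from any language whose floats are IEEE 754 doubles. Here is a small sketch in Python (assuming, as is true on virtually all platforms, that Python's `float` is a 64-bit double):

```python
import sys

# The largest finite 64-bit double, matching the 1.7976931348623157e308
# figure quoted above.
print(sys.float_info.max)

# One step beyond the double range overflows to infinity.
print(sys.float_info.max * 2)
```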
I wanted to know whether anyone has ever needed a number bigger than 1.7976931348623157 * 10^308, and if so, under what circumstances.
Do you feel it is useful to have such a type by default in a programming language?
Is the precision of a 64-bit double-float not enough in some circumstances? I would love to hear use cases.
I guess the advantage of long-floats is not only that they can span huge ranges, which may or may not be useful; they probably also have a much larger mantissa (I refuse to use the word "significand" for this) than a double, which gives your numbers higher precision.
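The mantissa point is easy to demonstrate. A double carries roughly 15-17 significant decimal digits; a wider mantissa keeps more. As a sketch, Python's `decimal` module (used here as a stand-in for a wide-mantissa float type) makes the difference visible:

```python
from decimal import Decimal, getcontext

# Ask for 50 significant digits instead of a double's ~16.
getcontext().prec = 50

third_double = 1 / 3                  # ordinary 64-bit double
third_wide = Decimal(1) / Decimal(3)  # 50 significant digits

print(third_double)  # 0.3333333333333333
print(third_wide)    # 0.33333333333333333333333333333333333333333333333333
```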
But as someone said, scientists love those types, probably for the above reason. Note that the libraries are often called arbitrary-precision libraries.
Scientists use this kind of stuff - and occasionally arbitrarily sized integers/floats/decimals.
For most everyday programming, 32-bit or 64-bit is usually enough.
See also:
http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic
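As a quick illustration of what arbitrary precision buys you beyond the double range, here is a sketch using Python's built-in arbitrary-precision integers (an assumption: the point generalizes to arbitrary-precision floats, which work the same way for range):

```python
# Python integers have no fixed width, so values far beyond the double
# limit of ~1.8e308 stay exact instead of overflowing.
big = 10 ** 400
print(len(str(big)))  # 401 digits, all exact

# The same magnitude as a double is simply infinity.
print(float('1e308') * 10)  # inf
```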