Haskell has a built-in Integer type, which handles arbitrary-precision integers. There is also Rational, an arbitrary-precision fraction type. But arithmetic on Rational requires finding a common denominator and then cancelling the result down to lowest terms.
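A quick illustration with Data.Ratio: the (%) operator builds a fraction, addition goes through a common denominator, and results are kept in lowest terms automatically:

```haskell
import Data.Ratio ((%))

main :: IO ()
main = do
  -- addition finds the common denominator 12: 2/12 + 3/12 = 5/12
  print (1 % 6 + 1 % 4 :: Rational)  -- 5 % 12
  -- results are cancelled down to lowest terms on construction
  print (2 % 4 :: Rational)          -- 1 % 2
```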
What if I wanted to do floating-point arithmetic with (say) 100 bits of precision in the mantissa? How would I do that?
I see there's a Data.Fixed module, but that seems to provide a handful of custom-written types with fixed precision. What I want is something where I can dynamically increase or decrease the precision at run-time, according to how much accuracy each task requires.
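For comparison, each Data.Fixed type bakes one resolution into the type at compile time, which is exactly the limitation described above:

```haskell
import Data.Fixed (Micro, Pico)

main :: IO ()
main = do
  print (3.14159265358979 :: Micro)  -- fixed 10^-6 resolution
  print (3.14159265358979 :: Pico)   -- fixed 10^-12 resolution
```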
PS. I'm not looking for decimal arithmetic, although I suppose it would be interesting to know whether that's available somewhere...
Any number that looks like an integer in a source or data file is stored as an arbitrary-precision integer. The size of the integer is limited only by the available memory.
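Haskell's built-in Integer behaves the same way; for instance, this computes 2^200 exactly, well beyond the range of any fixed-width machine word:

```haskell
main :: IO ()
main = print (2 ^ 200 :: Integer)
```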
In computer science, arbitrary-precision arithmetic, also called bignum arithmetic, multiple-precision arithmetic, or sometimes infinite-precision arithmetic, indicates that calculations are performed on numbers whose digits of precision are limited only by the available memory of the host system.
In an arbitrary-precision library, there's no fixed limit on the number of base-type digits (limbs) used to represent a number, just whatever memory can hold. Addition, for example, works limb by limb with a carry. Storing 123456 as the base-100 limbs 12 34 56 and adding 78: 56 + 78 = 134, so write 34 and carry 1 into the next limb, giving 12 35 34, i.e. 123534.
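Here is a minimal sketch of that idea in Haskell, assuming base-100 limbs stored least-significant first (real libraries such as GMP, which backs GHC's Integer, use full machine words instead):

```haskell
type Limb = Int

base :: Limb
base = 100

-- Add two numbers represented as lists of base-100 limbs,
-- least-significant limb first, propagating the carry.
addLimbs :: [Limb] -> [Limb] -> [Limb]
addLimbs = go 0
  where
    go carry [] []
      | carry == 0 = []
      | otherwise  = [carry]
    go carry (a:as) (b:bs) =
      let s = a + b + carry
      in s `mod` base : go (s `div` base) as bs
    go carry as [] = go carry as [0]  -- pad the shorter number
    go carry [] bs = go carry [0] bs

-- 123456 is [56,34,12]; 78 is [78]; the sum is [34,35,12] = 123534
main :: IO ()
main = print (addLimbs [56,34,12] [78])
```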
But when you work on a computer, you will never actually see a real number, because a real number has an infinite number of digits of precision, and computers only have finite precision.
Try Data.Number.CReal from the numbers package. It gives you the precision you ask for when converting to a string.
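For example, something along these lines; showCReal takes the number of decimal digits you want, so the precision is chosen at the point of conversion rather than baked into the type:

```haskell
import Data.Number.CReal (CReal, showCReal)

-- Print sqrt 2 to 100 decimal places using the computable
-- reals from the numbers package.
main :: IO ()
main = putStrLn (showCReal 100 (sqrt 2 :: CReal))
```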