Has arbitrary-precision arithmetic affected numerical analysis software?
I feel that most numerical analysis software still uses plain floats and doubles.
If I'm right, I'd love to know the reason. In my opinion there are some calculations that can benefit from arbitrary-precision arithmetic, particularly when it is combined with a rational number representation, as is done in the GNU Multiple Precision Arithmetic Library (GMP).
If I'm wrong, examples would be nice.
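For a concrete instance of the kind of benefit I mean, sketched with Python's built-in `fractions` module as a stand-in for GMP's rational (`mpq`) type:

```python
from fractions import Fraction

# 0.1 has no exact binary representation, so double arithmetic drifts:
assert 0.1 + 0.2 != 0.3

# A rational representation (here Python's Fraction, analogous to a
# GMP mpq value) keeps the same computation exact:
assert Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)
```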
Arbitrary precision is slow. Very slow. And the moment you use a function that produces an irrational value (such as most trig functions), you lose the exactness advantage: an irrational result cannot be stored as a rational at all, only approximated to some chosen precision.
So if you don't need, or can't use that precision, why spend all that CPU time on it?
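Both costs are easy to see with Python's `fractions` module (used here as an illustrative stand-in for a GMP-style rational type, which this answer does not name): exact rationals pay for their exactness in operand size, and an irrational intermediate collapses back to a machine double anyway.

```python
from fractions import Fraction
import math

# Exact rational arithmetic keeps every digit, so the denominator of a
# running sum explodes and each further operation gets slower.
h = sum(Fraction(1, k) for k in range(1, 101))
assert h.denominator > 10**6  # the exact denominator is already huge

# The float version carries a tiny rounding error but stays O(1) in size.
assert abs(float(h) - sum(1 / k for k in range(1, 101))) < 1e-12

# An irrational value cannot live in the rationals at all: math.sqrt
# returns a 53-bit double, so exactness is lost the moment it appears.
r = Fraction(math.sqrt(2))  # the exact value of the double, not of sqrt(2)
assert r * r != 2
```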