I'm thinking about different ways to implement arbitrary-precision arithmetic (sometimes called Bignum, Integer or BigInt).
It seems like the common idiom is to use an array for the storage of the actual value and reallocate it as needed if space requirements grow or shrink.
More precisely, it seems that the bit size of the array elements is often the second-largest size the platform supports natively (presumably to make overflow handling easier to implement), e.g. if the language/platform supports 128-bit integers, you use an array of 64-bit numbers plus a 128-bit variable to handle overflow.
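To make that concrete, here is a rough sketch of the kind of limb array I mean (in C, assuming GCC/Clang's `unsigned __int128` as the double-width type; the `bignum` struct and `bignum_add_word` are just made-up names for illustration, error handling omitted):

```c
#include <stdint.h>
#include <stdlib.h>

/* Sketch of a limb-based bignum: an array of 64-bit "digits" in base 2^64,
   least-significant limb first, reallocated as the value grows. */
typedef struct {
    uint64_t *limbs;   /* digits in base 2^64, little-endian */
    size_t    size;    /* number of limbs in use */
} bignum;

/* Add a single 64-bit word to a bignum, using a 128-bit intermediate
   (GCC/Clang extension) so the carry out of each limb is easy to extract. */
static void bignum_add_word(bignum *a, uint64_t w) {
    unsigned __int128 carry = w;
    for (size_t i = 0; i < a->size && carry != 0; i++) {
        carry += a->limbs[i];
        a->limbs[i] = (uint64_t)carry;   /* low 64 bits stay in the limb */
        carry >>= 64;                    /* high bits propagate as the carry */
    }
    if (carry != 0) {                    /* number grew: reallocate (checks omitted) */
        a->limbs = realloc(a->limbs, (a->size + 1) * sizeof(uint64_t));
        a->limbs[a->size++] = (uint64_t)carry;
    }
}
```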
Are there fundamentally different ways to implement arbitrary-precision arithmetic or is the above the “tried and true” way to implement it without huge performance losses?
My question is about the underlying data structure, not the algorithms for operations. I know Karatsuba, Toom-Cook et alii.
In computer science, arbitrary-precision arithmetic, also called bignum arithmetic, multiple-precision arithmetic, or sometimes infinite-precision arithmetic, indicates that calculations are performed on numbers whose digits of precision are limited only by the available memory of the host system.
Generally, in low-level languages like C the precision of integers is limited to 64 bits, but Python implements arbitrary-precision integers. Since Python 3, all integers are represented as bignums, limited only by the available memory of the host system.
Arbitrary-precision arithmetic, also known as "bignum" or simply "long arithmetic", is a set of data structures and algorithms that allows processing of numbers much larger than those that fit in standard data types.
It is possible to use the Chinese Remainder Theorem to represent large integers in a fundamentally different way from the usual base-2^n system.
I believe a CRT-based representation will still use an array of elements which, like the conventional representation, are based on the most convenient native arithmetic available. However, these elements hold the remainders of the number when divided by a sequence of primes, not base-2^n digits.
As with the conventional representation, the number of elements used determines the maximum size of the representable number. Unfortunately, it is not easy to compute whether one CRT-based number is greater than another (the residues carry no positional information, so comparison essentially requires converting back), which makes it hard to tell whether your representation has overflowed the maximum size. Note that addition and multiplication are very fast in CRT representation, since they operate element-by-element with no carries to propagate, which could be an advantage if you can deal with the overflow issue.
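As a rough sketch of what such a residue-based representation could look like (in C, with a tiny fixed set of primes purely for illustration; the `crt_num` struct and function names are made up, and a real library would use many large moduli):

```c
#include <stdint.h>
#include <stdio.h>

/* Residue-number-system (CRT) sketch: a number is stored as its remainders
   modulo a fixed set of pairwise-coprime primes. The representable range is
   the product of the moduli, here 13*17*19*23 = 96577. */
#define NMOD 4
static const uint32_t MOD[NMOD] = {13, 17, 19, 23};

typedef struct { uint32_t r[NMOD]; } crt_num;

static crt_num crt_from_u32(uint32_t x) {
    crt_num c;
    for (int i = 0; i < NMOD; i++) c.r[i] = x % MOD[i];
    return c;
}

/* Addition and multiplication are purely component-wise: no carries
   propagate between elements, which is what makes them fast. */
static crt_num crt_add(crt_num a, crt_num b) {
    crt_num c;
    for (int i = 0; i < NMOD; i++) c.r[i] = (a.r[i] + b.r[i]) % MOD[i];
    return c;
}

static crt_num crt_mul(crt_num a, crt_num b) {
    crt_num c;
    for (int i = 0; i < NMOD; i++)
        c.r[i] = (uint32_t)(((uint64_t)a.r[i] * b.r[i]) % MOD[i]);
    return c;
}

int main(void) {
    /* 123 * 456 = 56088, still below 96577, so the residues determine it uniquely. */
    crt_num p = crt_mul(crt_from_u32(123), crt_from_u32(456));
    for (int i = 0; i < NMOD; i++)
        printf("mod %u -> %u\n", (unsigned)MOD[i], (unsigned)p.r[i]);
    return 0;
}
```

Converting the residues back to a positional value (e.g. for printing or comparison) requires the usual CRT reconstruction, which is exactly the step that is expensive relative to the cheap component-wise arithmetic.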
However, to answer your question: I believe it is accurate to say that the base-2^n system is indeed the "tried and true" representation, which is used by most popular bignum libraries. I think I recall that there are extant CRT-based bignum libraries, although I have not checked lately to see if they are still around.