How do high-performance native big-integer libraries on x86-64 represent a big integer in memory? (Or does it vary? Is there a most common way?)
Naively, I was thinking about storing them as 0-terminated strings of digits in base 2^64.
For example, suppose X is in memory as:
[8 bytes] Dn
.
.
[8 bytes] D2
[8 bytes] D1
[8 bytes] D0
[8 bytes] 0
Let B = 2^64
Then
X = Dn * B^n + ... + D2 * B^2 + D1 * B + D0
The empty string (i.e. 8 bytes of zero) means zero.
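For concreteness, here is a minimal C sketch of that layout (the limb values are made up purely for illustration, and B = 2^64 as above):

    #include <stdint.h>

    /* Hypothetical example of the layout sketched above, with B = 2^64.
     * Limbs are stored in the diagram's order (Dn ... D1 D0), followed by
     * a zero word acting as the terminator. */
    static const uint64_t X[] = {
        7,                      /* D2 */
        0x0123456789abcdefULL,  /* D1 */
        42,                     /* D0 */
        0                       /* terminator */
    };
    /* X = 7*B^2 + 0x0123456789abcdef*B + 42 */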
Is this a reasonable way? What are the pros and cons of this way? Is there a better way?
How would you handle signedness? Does two's complement work with a variable-length value like this?
(I found this: http://gmplib.org/manual/Integer-Internals.html. What's a limb?)
I would think it would be stored as an array, from the lowest-order limb to the highest. I implemented addition of arbitrary-sized numbers in assembler. The CPU provides the carry flag, which lets you perform these sorts of operations easily: you write a loop that performs the operation in byte-sized chunks, and the carry from each step is folded into the next one using the "add with carry" instruction (ADC opcode).
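For illustration, here is a minimal C sketch of that limb-by-limb carry loop (the function name and signature are my own, not from any particular library; in assembly, ADC handles the carry in a single instruction per limb):

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical sketch: add two little-endian limb arrays of equal length n,
     * write n limbs to dst, and return the final carry-out (0 or 1).
     * This is what a loop of ADC instructions computes, one limb at a time. */
    uint64_t bignum_add(uint64_t *dst, const uint64_t *a, const uint64_t *b, size_t n)
    {
        uint64_t carry = 0;
        for (size_t i = 0; i < n; i++) {
            uint64_t sum = a[i] + b[i];   /* may wrap around */
            uint64_t c1  = (sum < a[i]);  /* carry out of the first add */
            uint64_t out = sum + carry;
            uint64_t c2  = (out < sum);   /* carry out of adding the old carry */
            dst[i] = out;
            carry  = c1 | c2;             /* at most one of c1, c2 can be set */
        }
        return carry;
    }

On x86-64, the same pattern can be written more directly with hand-coded ADC or the _addcarry_u64 intrinsic.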