A 64-bit register can hold any of 2⁶⁴ (over 18 quintillion, or about 1.8×10¹⁹) different values. The range of integer values that can be stored in 64 bits depends on the integer representation used.
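As a quick illustration (not part of the original text), a short Rust snippet using the standard MIN/MAX constants prints those ranges, along with the 128-bit ranges discussed below:
fn main() {
    // 2^64 distinct values; how they split into negative and non-negative
    // depends on whether the representation is signed or unsigned.
    println!("u64:  0 ..= {}", u64::MAX);
    println!("i64:  {} ..= {}", i64::MIN, i64::MAX);
    // 2^128 distinct values for the 128-bit types.
    println!("u128: 0 ..= {}", u128::MAX);
    println!("i128: {} ..= {}", i128::MIN, i128::MAX);
}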
A 128-bit floating-point type can handle up to 31 significant digits (compared to 17 handled by the 64-bit long double). However, while this type can store numbers with more precision than the 64-bit type, it does not store numbers of greater magnitude.
In the same way that compilers emulate, for example, 64-bit integer arithmetic in software on architectures whose registers are narrower than 64 bits, some compilers also support 128-bit integer arithmetic. For example, the GCC C compiler, from version 4.6 onward, has a 128-bit integer type __int128 for some architectures.
All Rust's integer types are compiled to LLVM integers. The LLVM abstract machine allows integers of any bit width from 1 to 2²³ - 1.* LLVM instructions typically work on integers of any size.
Obviously, there aren't many 8388607-bit architectures out there, so when the code is compiled to native machine code, LLVM has to decide how to implement it. The semantics of an abstract instruction like add are defined by LLVM itself. Typically, abstract instructions that have a single-instruction equivalent in native code will be compiled to that native instruction, while those that don't will be emulated, possibly with multiple native instructions. mcarton's answer demonstrates how LLVM compiles both native and emulated instructions.
(This doesn't only apply to integers that are larger than the native machine can support, but also to those that are smaller. For example, modern architectures might not support native 8-bit arithmetic, so an add instruction on two i8s may be emulated with a wider instruction, the extra bits discarded.)
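As a small illustration (again, not from the original answer), the 8-bit result is the same regardless of how wide the register that computed it was:
fn main() {
    let x: u8 = 200;
    // The hardware may perform this add in a 32- or 64-bit register,
    // but the extra bits are discarded: the result is reduced mod 2^8.
    let y = x.wrapping_add(100);
    println!("{}", y); // prints 44, i.e. 300 mod 256
}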
Does the compiler somehow use 2 registers for one i128 value? Or are they using some kind of big integer struct to represent them?
At the level of LLVM IR, the answer is neither: i128 fits in a single register, just like every other single-valued type. On the other hand, once translated to machine code, there isn't really a difference between the two, because structs may be decomposed into registers just like integers. When doing arithmetic, though, it's a pretty safe bet that LLVM will just load the whole thing into two registers.
* However, not all LLVM backends are created equal. This answer relates to x86-64. I understand that backend support for sizes larger than 128 bits, and for widths that are not powers of two, is spotty (which may partly explain why Rust only exposes 8-, 16-, 32-, 64-, and 128-bit integers). According to est31 on Reddit, rustc implements 128-bit integers in software when targeting a backend that doesn't support them natively.
The compiler will store these in multiple registers and use multiple instructions to do arithmetic on those values if needed. Most ISAs have an add-with-carry instruction, like x86's adc, which makes it fairly efficient to do extended-precision integer add/sub.
For example, given
fn main() {
    let a = 42u128;
    let b = a + 1337;
}
the compiler generates the following when compiling for x86-64 without optimization:
(comments added by @PeterCordes)
playground::main:
sub rsp, 56
mov qword ptr [rsp + 32], 0
mov qword ptr [rsp + 24], 42 # store 128-bit 0:42 on the stack
# little-endian = low half at lower address
mov rax, qword ptr [rsp + 24]
mov rcx, qword ptr [rsp + 32] # reload it to registers
add rax, 1337 # add 1337 to the low half
adc rcx, 0 # propagate carry to the high half. 1337u128 >> 64 = 0
setb dl # save carry-out (setb is an alias for setc)
mov rsi, rax
test dl, 1 # check carry-out (to detect overflow)
mov qword ptr [rsp + 16], rax # store the low half result
mov qword ptr [rsp + 8], rsi # store another copy of the low half
mov qword ptr [rsp], rcx # store the high half
# These are temporary copies of the halves; probably the high half at lower address isn't intentional
jne .LBB8_2 # jump if 128-bit add overflowed (to another not-shown block of code after the ret, I think)
mov rax, qword ptr [rsp + 16]
mov qword ptr [rsp + 40], rax # copy low half to RSP+40
mov rcx, qword ptr [rsp]
mov qword ptr [rsp + 48], rcx # copy high half to RSP+48
# This is the actual b, in normal little-endian order, forming a u128 at RSP+40
add rsp, 56
ret # with retval in EAX/RAX = low half result
where you can see that the value 42 is stored in rax and rcx.
(editor's note: x86-64 C calling conventions return 128-bit integers in RDX:RAX. But this main doesn't return a value at all. All the redundant copying is purely from disabling optimization, and from the fact that Rust actually checks for overflow in debug mode.)
For comparison, here is the asm for 64-bit integers in Rust on x86-64, where no add-with-carry is needed: just a single register or stack slot per value.
playground::main:
sub rsp, 24
mov qword ptr [rsp + 8], 42 # store
mov rax, qword ptr [rsp + 8] # reload
add rax, 1337 # add
setb cl
test cl, 1 # check for carry-out (overflow)
mov qword ptr [rsp], rax # store the result
jne .LBB8_2 # branch on non-zero carry-out
mov rax, qword ptr [rsp] # reload the result
mov qword ptr [rsp + 16], rax # and copy it (to b)
add rsp, 24
ret
.LBB8_2:
call panic function because of integer overflow
The setb / test is still totally redundant: jc (jump if CF=1) would work just fine.
With optimization enabled, the Rust compiler doesn't check for overflow, so + works like .wrapping_add().
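A small sketch (my addition, not from the answer) of those overflow behaviors on u128:
fn main() {
    let a = u128::MAX;
    // `a + 1` would panic in a debug build and wrap to 0 in a release
    // build (unless overflow-checks is enabled in the profile).
    let wrapped = a.wrapping_add(1);              // always 0
    let checked = a.checked_add(1);               // always None on overflow
    let (val, overflowed) = a.overflowing_add(1); // (0, true)
    println!("{wrapped} {checked:?} {val} {overflowed}");
}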
Yes, just the same way as 64-bit integers on 32-bit machines were handled, or 32-bit integers on 16-bit machines, or even 16- and 32-bit integers on 8-bit machines (still applicable to microcontrollers!). Yes, you store the number in two registers, or memory locations, or whatever (it doesn't really matter). Addition and subtraction are trivial, taking two instructions and using the carry flag. Multiplication requires three multiplies and some additions (it's common for 64-bit chips to already have a 64x64->128 multiply operation that outputs to two registers). Division... requires a subroutine and is quite slow (except in some cases where division by a constant can be transformed into a shift or a multiply), but it still works. Bitwise and/or/xor merely have to be done on the top and bottom halves separately. Shifts can be accomplished with rotation and masking. And that pretty much covers things.
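Here is a sketch, not from the original answer, of what that two-register addition looks like when written out by hand in Rust, with a 128-bit value represented as two u64 halves; the compiler's actual lowering is the add/adc sequence shown elsewhere on this page:
// A 128-bit value as two 64-bit halves: (high, low).
fn add128(a: (u64, u64), b: (u64, u64)) -> (u64, u64) {
    let (lo, carry) = a.1.overflowing_add(b.1);                // like add
    let hi = a.0.wrapping_add(b.0).wrapping_add(carry as u64); // like adc
    (hi, lo)
}

fn main() {
    let a = (0u64, 42u64);   // 42
    let b = (0u64, 1337u64); // 1337
    assert_eq!(add128(a, b), (0, 1379));
}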
To provide perhaps a clearer example, on x86_64, compiled with the -O flag, the function
pub fn leet(a: i128) -> i128 {
    a + 1337
}
compiles to
example::leet:
mov rdx, rsi
mov rax, rdi
add rax, 1337
adc rdx, 0
ret
(My original post had u128 rather than the i128 you asked about. The function compiles to the same code either way, a good demonstration that signed and unsigned addition are the same on a modern CPU.)
The other listing shows unoptimized code. Unoptimized code is useful to step through in a debugger, because the compiler makes sure you can put a breakpoint anywhere and inspect the state of any variable at any line of the program, but it is slower and harder to read. The optimized version is much closer to the code that will actually run in production.
The parameter a of this function is passed in a pair of 64-bit registers, rsi:rdi. The result is returned in another pair of registers, rdx:rax. The first two lines of code initialize the sum to a.
The third line adds 1337 to the low word of the input. If this overflows, it carries the 1 in the CPU’s carry flag. The fourth line adds zero to the high word of the input—plus the 1 if it got carried.
You can think of this as simple addition of a one-digit number to a two-digit number
a b
+ 0 7
______
but in base 18,446,744,073,709,551,616. You’re still adding the lowest “digit” first, possibly carrying a 1 to the next column, then adding the next digit plus the carry. Subtraction is very similar.
Multiplication must use the identity (2⁶⁴a + b)(2⁶⁴c + d) = 2¹²⁸ac + 2⁶⁴(ad + bc) + bd, where each of these multiplications returns the upper half of the product in one register and the lower half of the product in another. Some of those terms will be dropped, because bits above the 128th don't fit into a u128 and are discarded. Even so, this takes a number of machine instructions. Division also takes several steps. For a signed value, multiplication and division would additionally need to convert the signs of the operands and the result. Those operations are not very efficient at all.
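As a sketch (mine, not the answer's), the same identity written out in Rust, keeping only the terms that survive truncation to 128 bits:
fn mul128(x: u128, y: u128) -> u128 {
    let (a, b) = ((x >> 64) as u64, x as u64); // high and low halves of x
    let (c, d) = ((y >> 64) as u64, y as u64); // high and low halves of y
    // b*d is the only partial product whose full 128 bits survive.
    let bd = (b as u128) * (d as u128);
    // a*d and b*c are shifted up by 64, so only their low 64 bits fit;
    // a*c is shifted up by 128 and is dropped entirely.
    let mid = a.wrapping_mul(d).wrapping_add(b.wrapping_mul(c));
    bd.wrapping_add((mid as u128) << 64)
}

fn main() {
    let x: u128 = 0xDEAD_BEEF_DEAD_BEEF_0123_4567_89AB_CDEF;
    let y: u128 = 12_345_678_901_234_567_890_123_456_789;
    assert_eq!(mul128(x, y), x.wrapping_mul(y));
}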
On other architectures, it gets easier or harder. RISC-V defines a 128-bit instruction-set extension, although to my knowledge no one has implemented it in silicon. Without this extension, the RISC-V architecture manual recommends a conditional branch: addi t0, t1, +imm; blt t0, t1, overflow
SPARC has condition codes, like the flags on x86, but you have to use a special instruction, addcc, to set them. MIPS, on the other hand, requires you to check whether the sum of two unsigned integers is strictly less than one of the operands. If so, the addition overflowed. At least you're able to set another register to the value of the carry bit without a conditional branch.
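That MIPS-style check translates directly into portable code. As a sketch (my example), the carry out of an unsigned 64-bit add is simply whether the wrapped sum came out smaller than one of the operands:
fn add_with_carry_out(a: u64, b: u64) -> (u64, u64) {
    let sum = a.wrapping_add(b);
    // If the true sum exceeded u64::MAX, the wrapped result is smaller
    // than either operand; that comparison is exactly the carry bit.
    let carry = (sum < a) as u64;
    (sum, carry)
}

fn main() {
    assert_eq!(add_with_carry_out(u64::MAX, 1), (0, 1));
    assert_eq!(add_with_carry_out(40, 2), (42, 0));
}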