Do we still need to emulate 128-bit integers in software, or is there hardware support for them in your average desktop processor these days?
The x86-64 instruction set can do a 64-bit * 64-bit to 128-bit multiplication in one instruction (mul for unsigned, imul for signed, each with one explicit operand), so I would argue that the x86-64 instruction set does include some degree of support for 128-bit integers.
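For reference, here is a minimal sketch of how that single instruction is typically reached from C, assuming GCC or Clang on x86-64 where unsigned __int128 is available (widen_mul is just an illustrative name):

#include <stdint.h>

/* Casting one operand to unsigned __int128 before multiplying requests the
   full 64x64 -> 128 product; on x86-64 this typically compiles down to a
   single mulq instruction. */
unsigned __int128 widen_mul(uint64_t x, uint64_t y) {
    return (unsigned __int128)x * y;
}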
If your instruction set does not have an instruction that does 64-bit * 64-bit to 128-bit, then you need several instructions to emulate it, as in the sketch below.
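Here is a minimal sketch of one common way to do that emulation, splitting each operand into 32-bit halves and combining four partial products (schoolbook multiplication). It assumes 32-bit * 32-bit to 64-bit multiplies are available; umul64wide is just an illustrative name, not a standard function:

#include <stdint.h>

/* Emulate a 64x64 -> 128 multiply using only 32x32 -> 64 multiplies. */
void umul64wide(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo) {
    uint64_t a_lo = (uint32_t)a, a_hi = a >> 32;
    uint64_t b_lo = (uint32_t)b, b_hi = b >> 32;

    uint64_t p0 = a_lo * b_lo;    /* low  * low  */
    uint64_t p1 = a_lo * b_hi;    /* low  * high */
    uint64_t p2 = a_hi * b_lo;    /* high * low  */
    uint64_t p3 = a_hi * b_hi;    /* high * high */

    /* Sum the middle column, tracking the carry out of it. */
    uint64_t mid = p1 + (p0 >> 32);   /* cannot overflow */
    uint64_t carry = 0;
    mid += p2;
    if (mid < p2) carry = 1;          /* unsigned wraparound means a carry */

    *lo = (mid << 32) | (uint32_t)p0;
    *hi = p3 + (mid >> 32) + (carry << 32);
}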
This is also why a 128-bit * 128-bit to lower 128-bit multiplication can be done in only a few instructions on x86-64. For example, with GCC
__int128 mul(__int128 a, __int128 b) {
return a*b;
}
produces this assembly:
imulq %rdx, %rsi    # rsi = high(a) * low(b), lower 64 bits
movq %rdi, %rax     # rax = low(a)
imulq %rdi, %rcx    # rcx = low(a) * high(b), lower 64 bits
mulq %rdx           # rdx:rax = low(a) * low(b), full 128-bit product
addq %rsi, %rcx     # rcx = high(a)*low(b) + low(a)*high(b)
addq %rcx, %rdx     # add the cross terms into the high half
which uses one 64-bit * 64-bit to 128-bit instruction (mulq), two 64-bit * 64-bit to lower 64-bit instructions (the two imulq), and two 64-bit additions.
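The decomposition the compiler is using is easier to see written out in C. Here is a rough equivalent, not a definitive rendering of GCC's output: the unsigned case is shown for simplicity (the lower 128 bits are the same for signed and unsigned anyway), and mul128_lo is just an illustrative name. Writing a = high(a)*2^64 + low(a) and likewise for b, the high(a)*high(b) term contributes only above bit 127 and drops out entirely:

#include <stdint.h>

typedef unsigned __int128 u128;

/* Lower 128 bits of a 128x128 product, built from 64-bit pieces. */
u128 mul128_lo(u128 a, u128 b) {
    uint64_t a_lo = (uint64_t)a, a_hi = (uint64_t)(a >> 64);
    uint64_t b_lo = (uint64_t)b, b_hi = (uint64_t)(b >> 64);

    u128 result = (u128)a_lo * b_lo;             /* the single mulq */
    uint64_t cross = a_lo * b_hi + a_hi * b_lo;  /* the two imulq plus an addq */
    result += (u128)cross << 64;                 /* the final addq into rdx */
    return result;
}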