In assembly programming, it's fairly common to want to compute something from the low bits of a register that isn't guaranteed to have the other bits zeroed. In higher level languages like C, you'd simply cast your inputs to the small size and let the compiler decide whether it needs to zero the upper bits of each input separately, or whether it can chop off the upper bits of the result after the fact.
This is especially common for x86-64 (aka AMD64), for various reasons [1], some of which are present in other ISAs.
I'll use 64bit x86 for examples, but the intent is to ask about/discuss 2's complement and unsigned binary arithmetic in general, since all modern CPUs use it. (Note that C and C++ don't guarantee two's complement [4], and that signed overflow is undefined behaviour.)
As an example, consider a simple function that can compile to an LEA instruction [2]. (In the x86-64 SysV (Linux) ABI [3], the first two function args are in rdi and rsi, with the return value in rax. int is a 32bit type.)
; int intfunc(int a, int b) { return a + b*4 + 3; }
intfunc:
lea eax, [edi + esi*4 + 3] ; the obvious choice, but gcc can do better
ret
gcc knows that addition, even of negative signed integers, carries from right to left only, so the upper bits of the inputs can't affect what goes into eax. Thus, it saves an instruction byte and uses lea eax, [rdi + rsi*4 + 3].
So why does this work?
[1] Why this comes up frequently for x86-64: x86-64 has variable-length instructions, where an extra prefix byte changes the operand size (from 32 to 64 or 16), so saving a byte is often possible in instructions that are otherwise executed at the same speed. It also has false dependencies (AMD/P4/Silvermont) when writing the low 8b or 16b of a register (or a stall when later reading the full register, on Intel pre-IvB): for historical reasons, only writes to 32b sub-registers zero the rest of the 64b register. Almost all arithmetic and logic can be used on the low 8, 16, or 32bits, as well as the full 64bits, of general-purpose registers. Integer vector instructions are also rather non-orthogonal, with some operations not available for some element sizes.
Furthermore, unlike x86-32, the ABI passes function args in registers, and upper bits aren't required to be zero for narrow types.
[2] LEA: Like other instructions, the default operand size of LEA is 32bit, but the default address size is 64bit. An operand-size prefix byte (0x66 or REX.W) can make the output operand size 16 or 64bit. An address-size prefix byte (0x67) can reduce the address size to 32bit (in 64bit mode) or 16bit (in 32bit mode). So in 64bit mode, lea eax, [edx+esi] takes one byte more than lea eax, [rdx+rsi].
It is possible to do lea rax, [edx+esi], but the address is still only computed with 32bits (a carry doesn't set bit 32 of rax). You get identical results with lea eax, [rdx+rsi], which is two bytes shorter. Thus, the address-size prefix is never useful with LEA, as the comments in disassembly output from Agner Fog's excellent objconv disassembler warn.
[3] x86-64 SysV ABI: The caller doesn't have to zero (or sign-extend) the upper part of 64bit registers used to pass or return smaller types by value. A caller that wanted to use the return value as an array index would have to sign-extend it (with movsxd rax, eax, or the special-case-for-eax instruction cdqe; not to be confused with cdq, which sign-extends eax into edx:eax, e.g. to set up for idiv).
This means a function returning unsigned int can compute its return value in a 64bit temporary in rax, and not require a mov eax, eax to zero the upper bits of rax. This design decision works well in most cases: often the caller doesn't need any extra instructions to ignore the undefined bits in the upper half of rax.
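For example, here's a minimal sketch of the caller's side (get_index is a hypothetical function, not from the original): a caller that indexes an array with a narrow return value has to extend it itself before scaling.

/* The upper 32 bits of rax are undefined after a call that returns an int,
   so the caller must sign-extend before using it as a 64-bit index
   (compilers typically emit cdqe or movsxd rax, eax here). */
extern int get_index(void);

long lookup(const long *table) {
    int i = get_index();   /* value lives in eax; rax's upper half is garbage */
    return table[i];       /* i gets sign-extended to 64 bits for the addressing */
}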
[4] C and C++ specifically do not require two's complement binary signed integers (except for C++ std::atomic types). One's complement and sign/magnitude are also allowed, so for fully portable C, these tricks are only useful with unsigned types. Obviously, for signed operations, a set sign-bit in sign/magnitude representation means the other bits are subtracted rather than added, for example. I haven't worked through the logic for one's complement.
However, bit-hacks that only work with two's complement are widespread, because in practice nobody cares about anything else. Many things that work with two's complement should also work with one's complement, since the sign bit still doesn't change the interpretation of the other bits: it just has a place value of -(2^(N-1) - 1) instead of -2^(N-1). Sign/magnitude representation does not have this property: the place value of every bit is positive or negative depending on the sign bit.
Also note that C compilers are allowed to assume that signed overflow never happens, because it's undefined behaviour. So e.g. compilers can and do assume (x+1) < x is always false for signed x. This makes detecting signed overflow rather inconvenient in C. Note the difference between unsigned wraparound (carry) and signed overflow.
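As a minimal sketch of that inconvenience (the function names are just for illustration): the naive test relies on the overflow actually happening, so a compiler may fold it away, while a test that never overflows is well defined.

#include <limits.h>

/* Naive check: (x + 1) relies on signed overflow, which is UB, so a
   compiler is allowed to optimize this to "return 0;". */
int next_would_overflow_bad(int x) {
    return (x + 1) < x;
}

/* Defined alternative: compare against INT_MAX instead of overflowing. */
int next_would_overflow_ok(int x) {
    return x == INT_MAX;
}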
In two's complement, the values are split between positive and negative numbers. For example, a 4-bit unsigned number represents 16 values, 0 to 15, while a 4-bit two's complement number also represents 16 values, -8 to 7. In general, the range of an N-bit two's complement number spans [-2^(N-1), 2^(N-1) - 1].
To form the two's complement of a binary number, take the one's complement of the number and add 1 at the least significant bit (LSB). For example, the two's complement of the binary number 10010 is (01101) + 1 = 01110.
For 32-bit integers, the two most common representations give ranges of 0 through 4,294,967,295 (2^32 - 1) as an (unsigned) binary number, and -2,147,483,648 (-2^31) through 2,147,483,647 (2^31 - 1) as two's complement.
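A quick C sketch of the invert-and-add-one rule, using fixed-width unsigned types so the wraparound is well defined:

#include <stdint.h>
#include <assert.h>

int main(void) {
    uint32_t x = 0x12;                   /* 0b10010, as in the example above */
    uint32_t negated = ~x + 1u;          /* one's complement, then add 1 */
    assert(negated == 0u - x);           /* same as (well-defined unsigned) negation */
    assert((negated & 0x1Fu) == 0x0E);   /* low 5 bits are 01110 */
    return 0;
}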
Operations where high garbage in the inputs doesn't affect the low bits of the result:

- Left shifts (including the *scale in [reg1 + reg2*scale + disp]) and LEA instructions: the address-size prefix is never needed. Just use the desired operand-size to truncate if needed.
- The low half of a multiply: e.g. a 16b x 16b -> 16b multiply can be done with a 32b x 32b -> 32b. You can avoid LCP stalls (and partial-register problems) from imul r16, r/m16, imm16 by using a 32bit imul r32, r/m32, imm32 and then reading only the low 16 of the result. (Be careful with wider memory refs if using the m32 version, though.) As pointed out by Intel's insn ref manual, the 2- and 3-operand forms of imul are safe for use on unsigned integers: the sign bits of the inputs don't affect the N bits of the result in an N x N -> N bit multiply.
- Shift counts (e.g. a count in cl, or a count variable x): works at least on x86, where the shift count is masked, rather than saturated, down to the width of the operation, so high garbage in ecx, or even the high bits of cl, don't affect the shift count. This also applies to BMI2 flagless shifts (shlx etc.), but not to vector shifts (pslld xmm, xmm/m128 etc., which saturate the count). Smart compilers optimize away masking of the shift count, allowing for a safe idiom for rotates in C with no undefined behaviour (see the C sketch below).

Obviously flags like carry / overflow / sign / zero will all be affected by garbage in the high bits of a wider operation. x86's shifts put the last bit shifted out into the carry flag, so this even affects shifts.
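Here's what that rotate idiom might look like; a minimal sketch (rotl32 is my name, not from the original). Compilers that recognize the pattern typically emit a single rotate instruction.

#include <stdint.h>

/* Rotate left with no undefined behaviour: n is masked to 0-31, so both
   shift counts stay strictly below the type width even when n == 0. */
static inline uint32_t rotl32(uint32_t x, unsigned n) {
    n &= 31;
    return (x << n) | (x >> (-n & 31));
}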
Operations where garbage in the high bits of the inputs does matter:

- Full multiplication: e.g. for a 16b x 16b -> 32b multiply, ensure the upper 16 of the inputs are zero- or sign-extended before doing a 32b x 32b -> 32b imul. Or use a 16bit one-operand mul or imul to inconveniently put the result in dx:ax. (The choice of signed vs. unsigned instruction will affect the upper 16b in the same way as zero- or sign-extending before a 32b imul.) See the C sketch after this list.
- Memory addressing ([rsi + rax]): sign- or zero-extend as needed. There is no [rsi + eax] addressing mode.
- Division and remainder.
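And a minimal sketch of the full-multiplication point (the function name is hypothetical): a widening multiply in C forces the compiler to extend the narrow inputs before the wider multiply.

#include <stdint.h>

/* 16b x 16b -> 32b: the inputs must be sign-extended (e.g. with movsx)
   before a 32-bit imul, because garbage in the upper bits would change
   the upper half of the result. */
int32_t widening_mul(int16_t a, int16_t b) {
    return (int32_t)a * (int32_t)b;
}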
Two's complement, like unsigned base 2, is a place-value system. The MSB for unsigned base2 has a place value of 2^(N-1) in an N-bit number (e.g. 2^31 for 32 bits). In 2's complement, the MSB has a place value of -2^(N-1) (and thus works as a sign bit). The wikipedia article explains many other ways of understanding 2's complement and negating an unsigned base2 number.
The key point is that having the sign bit set doesn't change the interpretation of the other bits. Addition and subtraction work exactly the same as for unsigned base2, and it's only the interpretation of the result that differs between signed and unsigned. (E.g. signed overflow happens when there's a carry into but not out of the sign bit.)
In addition, carries propagate from LSB to MSB (right to left) only. Subtraction is the same: the low bits borrow from the high bits regardless of whether there is anything there to borrow from. If that causes an overflow or carry, only the high bits will be affected. E.g.:
0x801F
-0x9123
-------
0xeefc
The low 8 bits, 0xFC, don't depend on what they borrowed from. They "wrap around" and pass the borrow on to the upper 8 bits.
So addition and subtraction have the property that the low bits of the result don't depend on any upper bits of the operands.
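A small C check of that property, reusing the constants from the example above (just a sketch):

#include <stdint.h>
#include <assert.h>

int main(void) {
    uint16_t a = 0x801F, b = 0x9123;
    /* Truncating before or after the subtraction gives the same low byte. */
    assert((uint8_t)(a - b) == (uint8_t)((a & 0xFF) - (b & 0xFF)));   /* both 0xFC */
    return 0;
}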
Since LEA only uses addition (and left-shift), using the default address-size is always fine. Delaying truncation until the operand-size comes into play for the result is always fine.
(Exception: 16bit code can use an address-size prefix to do 32bit math. In 32bit or 64bit code, the address-size prefix reduces the width instead of increasing it.)
Multiplication can be thought of as repeated addition, or as shifting and addition. The low half isn't affected by any upper bits. In this 4-bit example, I've written out all the bit-products that are summed into the low 2 result bits. Only the low 2 bits of either source are involved. It's clear that this works in general: partial products are shifted left before being added, so high bits in the sources never affect lower bits in the result.
See Wikipedia for a larger version of this with much more detailed explanation. There are many good google hits for binary signed multiplication, including some teaching material.
*Warning*: This diagram is probably slightly bogus.

     ABCD      A has a place value of -2^3 = -8
  *  abcd      a has a place value of -2^3 = -8
  -------
 RRRRrrrr

  AAAAABCD * d    sign-extended partial products
+ AAAABCD  * c
+ AAABCD   * b
- AABCD    * a    (a * A = +2^6, since the negatives cancel)
  ----------
bit-products summed into result bit 0 (rightmost): D*d
bit-products summed into result bit 1:             C*d + D*c
Doing a signed multiply instead of an unsigned multiply still gives the same result in the low half (the low 4 bits in this example). Sign-extension of the partial products only happens into the upper half of the result.
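A small C check of that claim (a sketch with arbitrary constants):

#include <stdint.h>
#include <assert.h>

int main(void) {
    uint32_t au = 0xFFFFFFF0u, bu = 7u;      /* as signed: -16 and 7 */
    int32_t  as = (int32_t)au, bs = (int32_t)bu;
    /* The low 32 bits of the product are the same whether the inputs are
       interpreted as signed or unsigned. */
    assert(au * bu == (uint32_t)(as * bs));  /* both 0xFFFFFF90 */
    return 0;
}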
This explanation is not very thorough (and may even have mistakes), but there is good evidence that the conclusion is correct and safe to rely on in production code:
gcc uses imul to compute the unsigned long product of two unsigned long inputs. See an example of this, and of gcc taking advantage of LEA for other functions, on the Godbolt compiler explorer.
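The kind of function being referred to is presumably something like this sketch (the name and body are mine, not taken from the Godbolt link):

unsigned long umul(unsigned long a, unsigned long b) {
    /* Only the low 64 bits of the product are kept, so gcc can use the
       signed imul instruction even though the operands are unsigned. */
    return a * b;
}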
Intel's insn ref manual says:
The two- and three-operand forms may also be used with unsigned operands because the lower half of the product is the same regardless if the operands are signed or unsigned. The CF and OF flags, however, cannot be used to determine if the upper half of the result is non-zero.
(This applies to imul, not mul.)

Obviously the bitwise binary logical operations (and/or/xor/not) treat each bit independently: the result at a bit position depends only on the inputs' values at that bit position. Bit-shifts are also rather obvious.
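A one-line sanity check of that, as a sketch:

#include <stdint.h>
#include <assert.h>

int main(void) {
    uint32_t a = 0xDEAD1234u, b = 0x0000BEEFu;
    /* Each result bit of a bitwise op depends only on that bit of the inputs,
       so truncating before or after gives the same low byte. */
    assert((uint8_t)(a ^ b) == ((uint8_t)a ^ (uint8_t)b));
    return 0;
}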