I was trying to understand how address computation instructions work, especially with the leaq command. Then I got confused when I saw examples using leaq to do arithmetic computation. For example, the following C code:
long m12(long x) {
    return x*12;
}
In assembly,
leaq (%rdi, %rdi, 2), %rax
salq $2, %rax
If my understanding is right, leaq should move whatever address (%rdi, %rdi, 2) evaluates to, which should be 2*%rdi + %rdi, into %rax. What confuses me is that since the value x is stored in %rdi, which is just a memory address, why does multiplying %rdi by 3 and then left-shifting this memory address by 2 equal x times 12? Isn't it the case that when we multiply %rdi by 3, we jump to another memory address which does not hold the value x?
Which flags are affected after performing the LEA instruction? None: LEA does not affect any flags. The instruction only loads the computed effective address into the specified register; it does not touch memory at all.
The LEA (Load Effective Address) instruction is a way of obtaining the address which arises from any of the Intel processor's memory addressing modes. It places that computed address into the target register, rather than the contents of the designated memory location.
The lea instruction places the address specified by its first operand into the register specified by its second operand. Note, the contents of the memory location are not loaded, only the effective address is computed and placed into the register.
Load Effective Address calculates its src operand in the same way as the mov instruction does, but rather than loading the contents of that address into the dest operand, it loads the address itself.
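To make the distinction concrete, here is a minimal sketch (assuming AT&T syntax, with an arbitrary base in %rdi and index in %rsi chosen just for illustration):

movq (%rdi,%rsi,8), %rax    # rax = *(rdi + rsi*8)  -- dereferences the address
leaq (%rdi,%rsi,8), %rax    # rax =   rdi + rsi*8   -- only computes it, no memory access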
lea (see Intel's instruction-set manual entry) is a shift-and-add instruction that uses memory-operand syntax and machine encoding. This explains the name, but it's not the only thing it's good for. It never actually accesses memory, so it's like using & in C.
See for example How to multiply a register by 37 using only 2 consecutive leal instructions in x86?
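One sequence that works, as a sketch rather than a quote of that answer: since 37 = 9*4 + 1, both factors fit LEA's scale encodings:

leaq (%rdi,%rdi,8), %rax    # rax = rdi + rdi*8 = 9*x
leaq (%rdi,%rax,4), %rax    # rax = rdi + rax*4 = x + 36*x = 37*x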
In C, it's like uintptr_t foo = (uintptr_t) &arr[idx]. Note the & to give you arr + idx (scaling for the object size of arr, since this is C, not asm). In C, this would be abuse of the language syntax and types, but in x86 assembly pointers and integers are the same thing. Everything is just bytes, and it's up to the program to put instructions in the right order to get useful results.
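A hedged sketch of what a compiler might emit for that C expression, assuming a hypothetical function with arr in %rdi, idx in %rsi, and 8-byte elements:

# C:  uintptr_t foo(long *arr, long idx) { return (uintptr_t) &arr[idx]; }
foo:
    leaq (%rdi,%rsi,8), %rax    # rax = arr + idx*8: address only, nothing is loaded
    ret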
Effective address is a technical term in x86: it means the "offset" part of a seg:off logical address, especially when a base_reg + index*scale + displacement calculation was needed, e.g. the rax + (rcx<<2) in a %gs:(%rax,%rcx,4) addressing mode. (But EA still applies to %rdi for stosb, or the absolute displacement for movabs load/store, or other cases without a ModRM addressing mode.) Its use in this context doesn't mean it must be a valid / useful memory address; it's telling you that the calculation doesn't involve the segment base, so it's not calculating a linear address. (Adding the seg base would make it unusable for actual address math in a non-flat memory model.)
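For example (a sketch reusing the %gs addressing mode above): a load uses the full linear address, while lea reproduces only the offset part:

movb %gs:(%rax,%rcx,4), %dl    # loads from gs_base + rax + rcx*4 (the linear address)
leaq (%rax,%rcx,4), %rdx       # rdx = rax + rcx*4: the effective address only,
                               # no segment base and no memory access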
The original designer / architect of 8086's instruction set (Stephen Morse) might or might not have had pointer math in mind as the main use-case, but modern compilers think of it as just another option for doing arithmetic on pointers / integers, and so should humans.
(Note that 16-bit addressing modes don't include shifts, just [BP|BX] + [SI|DI] + disp8/disp16, so LEA wasn't as useful for non-pointer math before 386. See this Q&A for more about 32/64-bit addressing modes, although that answer uses Intel syntax like [rax + rdi*4] instead of the AT&T syntax used in this question. x86 machine code is the same regardless of what syntax you use to create it.)
Maybe the 8086 architects did simply want to expose the address-calculation hardware for arbitrary uses because they could do it without using a lot of extra transistors. The decoder already has to be able to decode addressing modes, and other parts of the CPU have to be able to do address calculations. Putting the result in a register instead of using it with a segment-register value for memory access doesn't take many extra transistors. Ross Ridge confirms that LEA on original 8086 reuses the CPU's effective-address decoding and calculation hardware.
Note that most modern CPUs run LEA on the same ALUs as normal add and shift instructions. They have dedicated AGUs (address-generation units), but only use them for actual memory operands. In-order Atom is one exception; LEA runs earlier in the pipeline than the ALUs: inputs have to be ready sooner, but outputs are also ready sooner. Out-of-order execution CPUs (all modern x86) don't want LEA to interfere with actual loads/stores so they run it on an ALU.
lea has good latency and throughput, but not as good throughput as add or mov r32, imm32 on most CPUs, so only use lea when you can save an instruction with it instead of add. (See Agner Fog's x86 microarch guide and asm optimization manual and https://uops.info/.)
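A minimal example of the kind of saving meant here (assuming two inputs in %edi and %esi and a result wanted in a third register):

leal (%rdi,%rsi), %eax    # eax = edi + esi in one instruction
# versus, without lea:
movl %edi, %eax
addl %esi, %eax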
Ice Lake improved LEA throughput for Intel, now able to run LEA on all four ALU ports.
Rules for which kinds of LEA are "complex", running on fewer of the ports that can handle it, vary by microarchitecture. e.g. 3-component (two + operations) is the slower case on SnB-family, having a scaled index is the lower-throughput case on Ice Lake. Alder Lake E-cores (Gracemont) are 4/clock, but 1/clock when there's an index at all, and 2-cycle latency when there's an index and displacement (whether or not there's a base reg). Zen is slower when there's a scaled index or 3 components. (2c latency and 2/clock down from 1c and 4/clock).
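Some hypothetical examples of the categories described above (exact costs depend on the microarchitecture):

leaq (%rdi,%rsi), %rax      # 2 components (base + index): generally the fast case
leaq 8(%rdi,%rsi), %rax     # 3 components (base + index + disp): the slow case on SnB-family
leaq (%rdi,%rsi,4), %rax    # scaled index: lower throughput on Ice Lake, slower on Zen
leaq 8(,%rsi,4), %rax       # index + displacement: 2-cycle latency on Gracemont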
The internal implementation is irrelevant, but it's a safe bet that decoding the operands to LEA shares transistors with decoding addressing modes for any other instruction. (So there is hardware reuse / sharing even on modern CPUs that don't execute lea on an AGU.) Any other way of exposing a multi-input shift-and-add instruction would have taken a special encoding for the operands.
So 386 got a shift-and-add ALU instruction for "free" when it extended the addressing modes to include scaled-index, and being able to use any register in an addressing mode made LEA much easier to use for non-pointers, too.
x86-64 got cheap access to the program counter (instead of needing to read what call pushed) "for free" via LEA because it added the RIP-relative addressing mode, making access to static data significantly cheaper in x86-64 position-independent code than in 32-bit PIC. (RIP-relative does need special support in the ALUs that handle LEA, as well as the separate AGUs that handle actual load/store addresses. But no new instruction was needed.)
It's just as good for arbitrary arithmetic as for pointers, so it's a mistake to think of it as being intended for pointers these days. It's not an "abuse" or "trick" to use it for non-pointers, because everything's an integer in assembly language. It has lower throughput than add, but it's cheap enough to use almost all the time when it saves even one instruction. But it can save up to three instructions:
;; Intel syntax.
lea eax, [rdi + rsi*4 - 8] ; 3 cycle latency on Intel SnB-family
; 2-component LEA is only 1c latency
;;; without LEA:
mov eax, esi ; maybe 0 cycle latency, otherwise 1
shl eax, 2 ; 1 cycle latency
add eax, edi ; 1 cycle latency
sub eax, 8 ; 1 cycle latency
On some AMD CPUs, even a complex LEA is only 2-cycle latency, but the 4-instruction sequence would be 4-cycle latency from esi being ready to the final eax being ready. Either way, this saves 3 uops for the front-end to decode and issue, uops that would otherwise take up space in the reorder buffer all the way until retirement.
lea has several major benefits, especially in 32/64-bit code where addressing modes can use any register and can shift:
- non-destructive: the destination doesn't have to be one of the sources, so it can copy-and-add in a single instruction, e.g. lea 1(%rdi), %eax or lea (%rdx, %rbp), %ecx.
- it doesn't modify FLAGS, which can be handy before a cmovcc, or maybe in an add-with-carry loop on CPUs with partial-flag stalls.
- x86-64: position-independent code can use a RIP-relative lea to get a pointer to static data. The 7-byte lea foo(%rip), %rdi is slightly larger and slower than mov $foo, %edi (5 bytes), so prefer mov r32, imm32 in position-dependent code on OSes where symbols are in the low 32 bits of virtual address space, like Linux. You may need to disable the default PIE setting in gcc to use this.
- In 32-bit code, mov edi, OFFSET symbol is similarly shorter and faster than lea edi, [symbol]. (Leave out the OFFSET in NASM syntax.) RIP-relative isn't available and addresses fit in a 32-bit immediate, so there's no reason to consider lea instead of mov r32, imm32 if you need to get static symbol addresses into registers.
Other than RIP-relative LEA in x86-64 mode, all of these apply equally to calculating pointers vs. calculating non-pointer integer add / shifts.
See also the x86 tag wiki for assembly guides / manuals, and performance info.
Operand-size vs. address-size for x86-64 lea
See also Which 2's complement integer operations can be used without zeroing high bits in the inputs, if only the low part of the result is wanted? 64-bit address size and 32-bit operand size is the most compact encoding (no extra prefixes), so prefer lea (%rdx, %rbp), %ecx when possible instead of 64-bit lea (%rdx, %rbp), %rcx or 32-bit lea (%edx, %ebp), %ecx.
x86-64 lea (%edx, %ebp), %ecx is always a waste of an address-size prefix vs. lea (%rdx, %rbp), %ecx, but 64-bit address / operand size is obviously required for doing 64-bit math. (Agner Fog's objconv disassembler even warns about useless address-size prefixes on LEA with a 32-bit operand-size.)
Except maybe on Ryzen, where Agner Fog reports that 32-bit operand-size lea in 64-bit mode has an extra cycle of latency. I don't know if overriding the address-size to 32-bit can speed up LEA in 64-bit mode if you need it to truncate to 32-bit.
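For reference, a sketch of the three encodings being compared (byte counts assuming 64-bit mode and GNU as):

leal (%rdx,%rbp), %ecx    # 3 bytes: no prefix (32-bit operand size, 64-bit address size)
leaq (%rdx,%rbp), %rcx    # 4 bytes: REX.W prefix for 64-bit operand size
leal (%edx,%ebp), %ecx    # 4 bytes: 0x67 address-size prefix; the low-32 result is the
                          # same as the first form, so the prefix buys nothing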
This question is a near-duplicate of the very-highly-voted What's the purpose of the LEA instruction?, but most of the answers explain it in terms of address calculation on actual pointer data. That's only one use.