I was looking through the disassembly of my program (because it crashed) and noticed lots of
xchg ax, ax
I googled it and found out it's essentially a nop, but why does Visual Studio emit an xchg instead of a nop?
The application is a C# .NET 3.5 64-bit application, compiled by Visual Studio.
Actually, xchg ax, ax is just how MS disassembles "66 90". 66 is the operand-size override prefix, so the instruction nominally operates on ax instead of eax. However, the CPU still executes it as a nop. The 66 prefix is used here to make the instruction two bytes long, usually for alignment purposes.
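Beyond the "66 90" form, modern x86 also defines dedicated multi-byte NOP encodings (the 0F 1F family documented in the Intel manual) for padding longer gaps. A sketch of the commonly recommended sequences - the byte values below are the ones Intel documents, not anything this particular compiler is guaranteed to emit:

```python
# Recommended multi-byte NOP encodings (Intel SDM, "Recommended Multi-Byte
# Sequence of NOP Instruction"). Keys are the padding length in bytes.
NOPS = {
    1: bytes([0x90]),                                      # nop
    2: bytes([0x66, 0x90]),                                # "xchg ax, ax"
    3: bytes([0x0F, 0x1F, 0x00]),                          # nop dword [eax]
    4: bytes([0x0F, 0x1F, 0x40, 0x00]),                    # nop dword [eax+0]
    5: bytes([0x0F, 0x1F, 0x44, 0x00, 0x00]),              # nop dword [eax+eax+0]
    6: bytes([0x66, 0x0F, 0x1F, 0x44, 0x00, 0x00]),        # 66 nop word [eax+eax+0]
    7: bytes([0x0F, 0x1F, 0x80, 0x00, 0x00, 0x00, 0x00]),  # nop dword [eax+disp32]
}

def nop_pad(n: int) -> bytes:
    """Build an n-byte alignment pad from the table above (hypothetical helper)."""
    out = b""
    while n > 0:
        k = min(n, max(NOPS))
        out += NOPS[k]
        n -= k
    return out
```

The point of the longer forms is that the padding decodes as *fewer* instructions than a run of single 0x90 bytes would, which is cheaper at execution time.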
In the 8085 instruction set there is a mnemonic XCHG, which stands for eXCHanGe. It exchanges the contents of the HL register pair with the DE register pair, using implied addressing. As it is a 1-byte instruction, it occupies only one byte in memory.
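The effect of the 8085 XCHG can be sketched as a straight swap of the two register pairs (a toy register model for illustration, not real 8085 tooling):

```python
# Minimal model of the 8085 register pairs involved in XCHG.
regs = {"D": 0x12, "E": 0x34, "H": 0xAB, "L": 0xCD}

def xchg(regs):
    """8085 XCHG: exchange the DE and HL register pairs."""
    regs["D"], regs["H"] = regs["H"], regs["D"]
    regs["E"], regs["L"] = regs["L"], regs["E"]

xchg(regs)
# HL now holds 0x1234 and DE holds 0xABCD
```

Note that this is a genuine exchange with a visible effect - unlike the x86 xchg ax, ax the question is about, which swaps a register with itself and therefore does nothing.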
On x86 the NOP instruction is XCHG AX, AX. The two mnemonics assemble to the same binary opcode. (Actually, I suppose an assembler could use any xchg of a register with itself, but AX or EAX is what's typically used for the nop, as far as I know.) xchg ax, ax has the properties of changing no register values and changing no flags (hey - it's a no-op!).
Edit (in response to a comment by Anon.):
Oh right - now I remember there are several encodings for the xchg instruction. Some take a mod/reg/r/m byte (like many Intel x86 instructions) that specifies a source and destination; those encodings take more than one byte. There's also a special encoding that uses a single byte and exchanges a general-purpose register with (E)AX. If the specified register is also (E)AX, then you have a single-byte NOP instruction. You can also specify that (E)AX be exchanged with itself using the larger variant of the xchg instruction.
I'm guessing that MSVC uses the multi-byte version of xchg with (E)AX as the source and destination when it wants to chew up more than one byte for no operation - it takes the same number of cycles as the single-byte xchg, but uses more space. In the disassembly you won't see the multi-byte xchg decoded as a NOP, even if the result is the same.
Specifically, xchg eax, eax or nop can be encoded as opcode 0x90 or as 0x87 0xC0, depending on whether you want it to use up 1 or 2 bytes. The Visual Studio disassembler (and probably others) will decode opcode 0x90 as the NOP instruction and will decode 0x87 0xC0 as xchg eax, eax.
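The mnemonic split described above can be sketched as a toy decoder (a hypothetical helper, just to show that all three byte patterns are legal encodings of the same no-op while disassemblers print them differently):

```python
def decode_nop(code: bytes) -> str:
    """Toy decoder for the no-op encodings discussed above."""
    if code == b"\x90":
        return "nop"              # single-byte encoding; canonically "xchg (e)ax, (e)ax"
    if code == b"\x66\x90":
        return "xchg ax, ax"      # operand-size prefix + 0x90, as MS disassembles it
    if code == b"\x87\xc0":
        return "xchg eax, eax"    # xchg r/m32, r32 with a modrm byte selecting EAX, EAX
    raise ValueError("not a recognized no-op encoding")
```

All three sequences leave every register and every flag unchanged; only the printed mnemonic differs.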
It's been a while since I've done detailed assembly language work, so chances are I'm wrong on at least one count here...
xchg ax, ax and nop are actually the same instruction; they map to the same opcode (0x90, iirc). That's fine - xchg ax, ax is a no-op, so why waste extra opcode encodings on instructions that don't do anything?
What's questionable is why you see both mnemonics printed. I guess it's just a quirk of your disassembler; there is no binary difference.