For example, given the hex bytes 83 E4 F0: by looking at the Intel developer's manual, I can figure out that 83 means and, and F0 means the immediate -16. Looking at E4, I can decode that the source/destination register is either SP or ESP. Therefore, I can conclude that the hex means either and $-16, %esp or and $-16, %sp. However, in the manual, both of those are listed as 83 /4 ib.
How can I differentiate between those two?
As harold says, the default operand size is not encoded in the instruction but depends on the current processor mode.
In real mode and 16-bit protected mode, the default operand size is 16-bit, so 83 E4 F0 decodes to and $-16, %sp. In 32-bit mode, the operand size defaults to 32-bit, so it's and $-16, %esp. In 64-bit mode, most instructions again default to 32-bit operand size (the exceptions are branches and instructions that implicitly use the stack, such as pushes, pops, calls, and returns), so it again decodes to and $-16, %esp.
It is possible to override the default operand size using prefixes. For example, the 66h prefix toggles between 32-bit and 16-bit operand size, so 66 83 E4 F0 decodes to and $-16, %esp in 16-bit mode and to and $-16, %sp in 32-bit or 64-bit mode. To get 64-bit operand size, you need to use a REX prefix with the W bit set, so 48 83 E4 F0 decodes to and $-16, %rsp (but only in 64-bit mode!).
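The prefix rules can be sketched the same way. This toy helper (the name is mine, not from any real library) computes only the operand size for this encoding, handling the 66h override and, in 64-bit mode, a REX prefix; note that REX.W takes precedence over 66h:

```python
def operand_size(code, mode):
    """Operand size in bits for an '83 /4 ib' encoding in the given mode."""
    size = 16 if mode == 16 else 32              # default per mode
    i = 0
    if code[i] == 0x66:                          # operand-size override prefix
        size = 32 if size == 16 else 16
        i += 1
    # A REX prefix (40h-4Fh) exists only in 64-bit mode and must
    # immediately precede the opcode; W (bit 3) forces 64-bit operands.
    if mode == 64 and 0x40 <= code[i] <= 0x4F and (code[i] & 0x08):
        size = 64                                # REX.W overrides 66h
    return size

print(operand_size(b'\x83\xe4\xf0', 32))        # 32
print(operand_size(b'\x66\x83\xe4\xf0', 32))    # 16
print(operand_size(b'\x66\x83\xe4\xf0', 16))    # 32
print(operand_size(b'\x48\x83\xe4\xf0', 64))    # 64
```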