Take a look at this piece of code:

    int main()
    {
        int i = 1U << 31; // assume this yields INT_MIN
        volatile int x;
        x = -1;
        x = i / x;        // dividing INT_MIN by -1 is UB
        return 0;
    }
It invokes undefined behavior on typical platforms, but the "behavior" is quite different from what I expected: it acts as if it were an infinite loop. I can imagine real code being bitten by this.
Of course undefined is undefined, but I checked the output assembly and it uses a plain idiv
-- why does it not trap? For comparison, dividing by zero causes an immediate abort.
Using Windows 7 64-bit and MinGW-w64.
Can anyone explain this to me?
EDIT
I tried a few options, and the results were always the same.
Here is the assembly:
        .file   "a.c"
        .def    __main; .scl 2; .type 32; .endef
        .section .text.startup,"x"
        .p2align 4,,15
        .globl  main
        .def    main;   .scl 2; .type 32; .endef
        .seh_proc main
    main:
        subq    $56, %rsp
        .seh_stackalloc 56
        .seh_endprologue
        call    __main
        movl    $-1, 44(%rsp)
        movl    $-2147483648, %eax
        movl    44(%rsp), %ecx
        cltd
        idivl   %ecx
        movl    %eax, 44(%rsp)
        xorl    %eax, %eax
        addq    $56, %rsp
        ret
        .seh_endproc
        .ident  "GCC: (x86_64-posix-sjlj, built by strawberryperl.com project) 4.8.2"
idiv performs signed division. It divides a 16-, 32-, or 64-bit dividend held in registers by a register or memory operand (the divisor); the size of the divisor operand determines which registers hold the dividend, quotient, and remainder.
The DIV instruction divides unsigned numbers, and IDIV divides signed numbers. Both return a quotient and a remainder.
For the 32-bit form used here (idivl), the dividend is the 64-bit value edx:eax -- which is why the compiler emits cltd to sign-extend eax into edx -- and after execution the quotient is stored in eax and the remainder in edx.
IDIV takes the same operands as DIV, and for both instructions all of the arithmetic status flags are undefined after the operation. When doing 8-bit division, the dividend is ax, so you must sign-extend al into ah (e.g. with cbw) before using IDIV.
The infinite loop you observe is arguably a bug in MinGW-w64.
MinGW-w64 partially supports SEH, and if you run your code in a debugger, you will see that an exception handler (a function named _gnu_exception_handler) is called as a result of the invalid idiv (for instance, run your program in gdb and set a breakpoint on _gnu_exception_handler).
Said simply, what this exception handler does in the case of an integer overflow is dismiss the exception and resume execution at the point where it occurred (the idiv). The idiv is then executed again, the same overflow raises the same exception, and your CPU bounces back and forth between the idiv and the exception handler forever. (This is where the behavior of MinGW-w64 can be seen as a bug.)
You can see it directly in the source here if you want to go deep.
The value EXCEPTION_CONTINUE_EXECUTION returned by _gnu_exception_handler when it deals with an EXCEPTION_INT_OVERFLOW (integer overflow) is what fuels this behavior: when the system sees that the handler returned EXCEPTION_CONTINUE_EXECUTION, it jumps back to the instruction that raised the exception and executes it again.
If you are interested in more details, here is a good resource for understanding how SEH works on Windows.