64-bit pointer subtraction, signed integer underflow, and a possible compiler bug?

I recently tore my hair out debugging this piece of code (slightly modified for simplicity of presentation):

#include <algorithm> // for std::upper_bound

char *packedData;
unsigned char *indexBegin, *indexEnd;
int block, row, bytesPerRow;

// +------ bad!
// v
  int cRow = std::upper_bound( indexBegin, indexEnd, row&255 ) - indexBegin - 1;

char value = *(packedData + (block + cRow) * bytesPerRow);

Of course, assigning the difference of two pointers (the result of std::upper_bound minus the beginning of the searched array) to an int, rather than a ptrdiff_t, is wrong in a 64-bit environment, but the particular bad behavior that resulted was very unexpected. I'd expect this to fail when the array at [indexBegin, indexEnd) was more than 2GB in size, so that the difference overflowed an int; but what actually happened was a crash when indexBegin and indexEnd had values on opposite sides of 2^31 (i.e. indexBegin = 0x7fffffe0, indexEnd = 0x80000010). Further investigation revealed the following x86-64 assembly code (generated by MSVC++ 2005, with optimizations):

; (inlined code of std::upper_bound, which leaves indexBegin in rbx,
; the result of upper_bound in r9, block at *(r12+0x28), and data at
; *(r12+0x40), immediately precedes this point)
movsxd    rcx, r9d                   ; movsxd?!
movsxd    rax, ebx                   ; movsxd?!
sub       rcx, rax
lea       rdx, [rcx+rdi-1]
movsxd    rax, dword ptr [r12+28h]
imul      rdx, rax
mov       rax, qword ptr [r12+40h]
mov       cl,  byte ptr [rdx+rax]

This code treats the pointers being subtracted as signed 32-bit values, sign-extending them into 64-bit registers before subtracting them and multiplying the result by another sign-extended 32-bit value, and then indexes another array with the 64-bit result of that computation. Try as I might, I can't figure out under what theory this could ever be correct. Everything would have been fine had the pointers been subtracted as 64-bit values, or had there been another instruction right after the imul that sign-extended edx into rdx (or had the final mov referenced rax+edx, but I don't think that addressing form is available in x86-64). That would still be nominally dangerous, but I happen to know that [indexBegin, indexEnd) will never even approach 2GB in length.
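To make the effect concrete, here is a minimal, self-contained sketch (the variable names and the wrapper main are mine; the pointer values are the ones from the crash above, with indexEnd standing in for the upper_bound result) of what the movsxd/sub sequence effectively computes:

#include <cstdint>
#include <cstdio>

int main() {
    // Raw pointer values straddling 2^31, as in the crash:
    std::uint64_t indexBegin = 0x7fffffe0; // sign bit of low 32 bits clear
    std::uint64_t upperBound = 0x80000010; // sign bit of low 32 bits set

    // A true 64-bit subtraction yields the small, correct distance:
    std::int64_t correct = static_cast<std::int64_t>(upperBound - indexBegin); // 48

    // What "movsxd rcx, r9d / movsxd rax, ebx / sub rcx, rax" computes:
    // each pointer's low 32 bits are sign-extended before subtracting.
    std::int64_t emitted =
        static_cast<std::int64_t>(static_cast<std::int32_t>(upperBound)) -
        static_cast<std::int64_t>(static_cast<std::int32_t>(indexBegin));

    std::printf("correct: %lld, emitted: %lld\n",
                (long long)correct, (long long)emitted);
    // prints: correct: 48, emitted: -4294967248 (i.e. 48 - 2^32)
    return 0;
}

Scaled by bytesPerRow and added to packedData, that bogus offset lands far outside the array, which matches the observed crash.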

The question is somewhat academic, since my actual bug is easily fixed by just using a 64-bit type to hold the pointer difference, but is this a compiler bug, or is there some obscure part of the language specification that allows the compiler to assume that the operands of a subtraction will individually fit into the result type?
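For the record, here is a sketch of that fix (the wrapper function lookup is made up for illustration; the declarations are the ones from the snippet above):

#include <algorithm>
#include <cstddef>

extern char *packedData;
extern unsigned char *indexBegin, *indexEnd;
extern int block, row, bytesPerRow;

char lookup() {
    // ptrdiff_t is 64 bits wide on a 64-bit target, so the pointer
    // difference is never truncated or sign-extended along the way.
    std::ptrdiff_t cRow =
        std::upper_bound(indexBegin, indexEnd, row & 255) - indexBegin - 1;
    return *(packedData + (block + cRow) * bytesPerRow);
}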

EDIT: the only situation I can think of that would make what the compiler has done okay is if it's allowed to assume that signed integer overflow will never happen (so that if I subtract two numbers and assign the result to a signed int, the compiler would be free to actually use a larger signed integral type, which turns out to be wrong in this case). Is that allowed by the language spec?

asked Mar 09 '11 by Jonathan Klabunde Tomer

2 Answers

A bit late, but seeing as the question wasn't answered after the last EDIT.

Yes, overflow is Undefined Behavior. And yes, UB can have unintuitive effects. In particular, UB may appear to affect code that has already executed.

The practical consequence is indeed that the compiler is allowed to work on the assumption of no overflow. The classical example is if (x+1<x), a mistaken test for overflow that compilers can and do replace with if (false).
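A minimal sketch of that classic (the function name is made up):

#include <climits>

// Intended as an overflow-checked increment, but the test itself is wrong:
// when x == INT_MAX, evaluating x + 1 is undefined behavior, so an
// optimizer may assume the overflow cannot happen, fold the condition
// to false, and delete the branch entirely.
int increment_checked(int x) {
    if (x + 1 < x)   // optimizers can and do turn this into if (false)
        return INT_MAX;
    return x + 1;
}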

And yes, you can get quite confusing "overflow" behaviors when your 32-bit variable is actually stored in a 64-bit register, so there is space available to overflow into. That register can hold the value 1<<32, which shows why you can't sensibly reason about the results of a C++ program with Undefined Behavior: you effectively have an int with the value INT_MAX+1 (!)

answered Sep 24 '22 by MSalters

The C++ conversion from pointers to non-boolean types goes like this:

  1. Convert the pointer value to an unsigned integer of the same size as the pointer
  2. Convert that unsigned integer to the destination type (int in your case)

Now, the compiler sees an integer subtraction. It is free to perform this in any manner it sees fit, as long as it preserves the sign. So Visual C++ has decided to perform it using 64-bit registers.

You can verify this operational order by casting your right-hand side to an unsigned int before assigning to your lvalue. This will result in the bad behavior you were expecting.
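For instance (a sketch against the question's declarations; the cast placement is one way to read this suggestion):

// The full-width pointer subtraction happens first; the cast then truncates
// the result to 32 bits, so pointers straddling 2^31 become harmless and
// only a range of 2GB or more would still misbehave.
int cRow = static_cast<unsigned int>(
    std::upper_bound(indexBegin, indexEnd, row & 255) - indexBegin - 1);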

answered Sep 24 '22 by Seth