The following results don't make sense to me. It looks like a negative offset is cast to unsigned before the addition or subtraction is performed.
double[] x = new double[1000];
int i = 1; // for the overflow it makes no difference if it is long, int or short
int j = -1;
unsafe
{
    fixed (double* px = x)
    {
        double* opx = px + 500; // = 0x33E64B8
        //unchecked
        //{
        double* opx1 = opx + i; // = 0x33E64C0
        double* opx2 = opx - i; // = 0x33E64B0
        double* opx3 = opx + j; // = 0x33E64B0 if unchecked; throws overflow exception if checked
        double* opx4 = opx - j; // = 0x33E64C0 if unchecked; throws overflow exception if checked
        //}
    }
}
Although it might seem strange to use negative offsets, there are use cases for them. In my case, I was reflecting boundary conditions in a two-dimensional array.
Of course, the overflow doesn't hurt too much, because I can either use unchecked or move the sign into the operation by negating the offset and applying the operation to its absolute value, as sketched below.
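For example, both workarounds look roughly like this (a minimal sketch reusing the pointer opx and offset j from the snippet above; the names are only illustrative):

// Option 1: suppress overflow checking just for the pointer arithmetic.
unchecked
{
    double* opx3 = opx + j; // moves 8 bytes back for j = -1, no exception
}

// Option 2: move the sign into the operation and use the absolute value.
// (Note: -j itself overflows for j = int.MinValue.)
double* opx4 = j >= 0 ? opx + j : opx - (-j);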
But this behaviour seems undocumented. According to MSDN, I wouldn't expect negative offsets to be problematic:
You can add a value n of type int, uint, long, or ulong to a pointer, p, of any type except void*. The result p+n is the pointer resulting from adding n * sizeof(p) to the address of p. Similarly, p-n is the pointer resulting from subtracting n * sizeof(p) from the address of p.
This issue has been raised multiple times, in various forms, in the Roslyn and RyuJIT issue trackers. The first report I could find is this one: When adding integer to pointer in checked context add.ovf.un instruction is generated
Indeed, if you look at the generated IL, you will see that the add.ovf.un ("add unsigned integer values with an overflow check") instruction is emitted in a checked context, but not in an unchecked context. In our case its first operand is an unsigned native int (roughly a UIntPtr) representing the double* pointer. The second operand differs between the time of that issue (2015) and today.
At the time of that issue, the second operand was an Int32, just as you would expect. However, add.ovf.un with a UIntPtr and an Int32 behaves differently on x86 and x64. On x86 it throws an overflow exception for negative offsets because, well, the second operand is negative. On x64, however, the JIT zero-extends that Int32 to 64 bits (because the native pointer is now 64-bit). It zero-extends rather than sign-extends because it assumes the value is unsigned, and zero-extending a negative Int32 produces a large positive 64-bit integer.
As a result, adding a negative Int32 to a pointer on x64 did not, at the time of that issue, throw an overflow exception; instead it silently added the wrong value to the pointer, which is of course much worse.
The issue was closed as "won't fix":
Thanks for the detailed report here!
Given the narrow scope of the bug though and that the behavior is consistent with the native compiler we are "won't fixing" the bug at this time.
However, people were not happy with the described x64 behaviour, with which one can silently produce pointers to unknown locations without realizing it. After long debates, the problem was partially addressed in 2017 as part of this issue.
The fix was to force a cast of the Int32 to IntPtr when it is added to a pointer in a checked context, which prevents the automatic zero-extension of that Int32 on x64 described above.
So if you now look at the IL generated for your case, you will see that before being passed to add.ovf.un, the Int32 is first converted to IntPtr with the conv.i IL instruction. This causes adding a negative integer to a pointer in a checked context to always throw an overflow exception, on both x86 and x64.
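Again as a plain C# sketch (not the actual codegen), sign extension keeps the value negative, which is why the unsigned overflow check now fires on both platforms:

int j = -1;
long signExtended = j;                             // sign-extension: 0xFFFFFFFFFFFFFFFF = -1
ulong asUnsigned = unchecked((ulong)signExtended); // 18446744073709551615
// add.ovf.un treats the operand as this enormous unsigned value, so adding it to
// any non-null pointer overflows and throws instead of corrupting the pointer.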
In any case, the original issue of emitting add.ovf.un for pointer addition in a checked context is not solved, and most likely will not be, since it was closed as "won't fix". So you have to be aware of it and decide for yourself how to work around it in your specific scenario; one possible approach is sketched below.
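For example, if the rest of the project should stay checked, one option (just a sketch; Offset is a made-up helper name, not anything from the framework) is to confine the pointer arithmetic to a small unchecked helper:

static unsafe double* Offset(double* p, int n)
{
    // The unchecked block overrides a project-wide /checked setting, so a plain
    // add is emitted and negative offsets simply move the pointer backwards.
    unchecked
    {
        return p + n;
    }
}

// usage inside the fixed block:
// double* opx3 = Offset(opx, j);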