Signed integer overflow is undefined in C and C++. But what about signed integer overflow within the individual fields of an __m128i? In other words, is this behavior defined in the Intel standards?
#include <inttypes.h>
#include <stdio.h>
#include <stdint.h>
#include <emmintrin.h>
union SSE2
{
    __m128i m_vector;
    uint32_t m_dwords[sizeof(__m128i) / sizeof(uint32_t)];
};

int main()
{
    union SSE2 reg = {_mm_set_epi32(INT32_MAX, INT32_MAX, INT32_MAX, INT32_MAX)};
    reg.m_vector = _mm_add_epi32(reg.m_vector, _mm_set_epi32(1, 1, 1, 1));
    printf("%08" PRIX32 "\n", (uint32_t) reg.m_dwords[0]);
    return 0;
}
[myria@polaris tests]$ gcc -m64 -msse2 -std=c11 -O3 sse2defined.c -o sse2defined
[myria@polaris tests]$ ./sse2defined
80000000
Note that the 4-byte-sized fields of an SSE2 __m128i are considered signed.
There are about three things wrong with this question (not in a downvote sort of way, in a "you are lacking an understanding" kind of way ... which is why I guess you have come here).
1) You are asking about a specific implementation issue (using SSE2) and not about the standard. You've answered your own question: "signed integer overflow is undefined in C".
2) When you are dealing with C intrinsics you aren't even programming in C! These insert assembly instructions inline. The compiler does this in a somewhat portable way, but it is no longer true that your data is a signed integer. It is a vector type being passed to an SSE intrinsic. YOU are then casting that to an integer and telling C that you want to see the result of that operation. Whatever bytes happen to be there when you cast is what you will see, and that has nothing to do with signed arithmetic in the C standard (see the sketch after this list).
3) There were only two wrong assumptions. I made an assumption about the number of errors and was wrong.
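For what it's worth, here is a minimal sketch of the same experiment that reads the low lane back with _mm_cvtsi128_si32 instead of punning through a union. The intrinsics used are the documented SSE2 ones; the program itself is just an illustration, not part of the question:

#include <emmintrin.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Same addition as in the question, but the low lane is read back with
       _mm_cvtsi128_si32 instead of type-punning through a union. Either way
       you are only looking at the bytes PADDD left in the register; no
       C-level signed arithmetic happens here. */
    __m128i v = _mm_set1_epi32(INT32_MAX);
    v = _mm_add_epi32(v, _mm_set1_epi32(1));
    printf("%08" PRIX32 "\n", (uint32_t) _mm_cvtsi128_si32(v));  /* prints 80000000 */
    return 0;
}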
Things are a bit different if the compiler inserts SSE instructions itself (say when vectorizing a loop). Now the compiler is guaranteeing that the result is the same as a signed 32-bit operation ... UNLESS there is undefined behaviour (e.g. an overflow), in which case it can do whatever it likes.
Note also that undefined doesn't mean unexpected: whatever behaviour you observe for auto-vectorization might be consistent and repeatable (maybe it does always wrap on your machine). But that might not hold for all surrounding code, or all compilers, or even all processors, if the compiler makes different code-gen choices depending on the availability of SSSE3, SSE4, or AVX*, choices that may or may not take advantage of signed overflow being UB.
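To make that concrete, here is a sketch of the kind of loop meant above (the function name and arrays are hypothetical): if any a[i] + b[i] overflows, the C program has undefined behaviour even though the PADDD the compiler may emit for it would simply wrap.

#include <stddef.h>

/* A loop a compiler may auto-vectorize with PADDD. The source-level
   operation is still signed int addition, so an overflowing a[i] + b[i]
   is undefined behaviour regardless of what the emitted SIMD code does. */
void add_arrays(int *restrict dst, const int *restrict a,
                const int *restrict b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = a[i] + b[i];
}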
EDIT:
Okay, well now that we are asking about "the Intel standards" (which don't exist; I think you mean the x86 standards), I can add something to my answer. Things are a little bit convoluted.
Firstly, the intrinsic _mm_add_epi32 is defined by Microsoft to match Intel's intrinsics API definition (https://software.intel.com/sites/landingpage/IntrinsicsGuide/ and the intrinsic notes in Intel's x86 assembly manuals). They cleverly define it as doing to a __m128i the same thing the x86 PADDD instruction does to an XMM register, with no more discussion (e.g. is it a compile error on ARM or should it be emulated?).
Secondly, PADDD isn't only a signed addition! It is a 32-bit binary add. x86 uses two's complement for signed integers, and adding them is the same binary operation as unsigned base-2 addition. So yes, paddd is guaranteed to wrap. There is a good reference for all the x86 instructions here.
So what does that mean? Again, the assumption in your question is flawed because at the instruction level there isn't even any overflow: the output you see is defined behaviour. Note that it is defined by Microsoft and by x86 (not by the C standard).
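As a small sketch of that point (the lane values are my own choice; the intrinsics are the documented SSE2 ones): because PADDD is a plain binary add on each 32-bit lane, a "signed" lane holding INT32_MAX and an "unsigned" lane holding 0xFFFFFFFF wrap in exactly the same way when 1 is added.

#include <emmintrin.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Lane 0 holds INT32_MAX, lane 1 holds all-ones (UINT32_MAX as bits).
       PADDD performs the same two's-complement binary add on both, so they
       wrap to 0x80000000 and 0x00000000 respectively. */
    __m128i v = _mm_set_epi32(0, 0, -1, INT32_MAX);
    v = _mm_add_epi32(v, _mm_set1_epi32(1));

    uint32_t lanes[4];
    _mm_storeu_si128((__m128i *) lanes, v);
    printf("%08" PRIX32 " %08" PRIX32 "\n", lanes[0], lanes[1]);  /* 80000000 00000000 */
    return 0;
}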
Other x86 compilers also implement Intel's intrinsics API the same way, so _mm_add_epi32 is portably guaranteed to just wrap.