
Why are multiplications by constant signed integer fractions not optimised?

Since signed integer overflow is undefined behaviour, I expected the three functions below to compile to the same or similar assembly. However, this is not the case: test2 is slightly different from test1, and test3 uses two imul instructions that are not required for the other examples.

int test1(int x)
{
    return x * 5 / 2;
}

int test2(int x)
{
    return x * 10 / 4;
}

int test3(int x)
{
    return x * 50 / 20;
}

comparison on Compiler Explorer

Is there a reason why compilers don't perform this kind of optimization?
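To show what the rewrite would change, here is a minimal sketch of my own (assuming a 32-bit int and wraparound semantics, e.g. GCC's -fwrapv; without that flag the overflow in test2 is undefined behaviour, so the program only demonstrates the point under that flag): for x = 1 << 28, x * 5 still fits in an int but x * 10 wraps, so the two functions return different values.

#include <stdio.h>

int test1(int x) { return x * 5 / 2; }
int test2(int x) { return x * 10 / 4; }

int main(void)
{
    /* With 32-bit int: x * 5 still fits, but x * 10 exceeds INT_MAX and
     * wraps under -fwrapv, so test2 returns a different (negative) value. */
    int x = 1 << 28;                      /* 268435456 */
    printf("test1(x) = %d\n", test1(x));  /* 671088640 */
    printf("test2(x) = %d\n", test2(x));  /* -402653184 with -fwrapv */
    return 0;
}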

xibu asked Jun 20 '20



1 Answer

Whether such an optimization would be correct depends upon what, if anything, an implementation guarantees about the effects of integer overflow:

  1. If an implementation guarantees that integer additions and multiplications will always behave as though performed with values large enough to hold the result and then two's-complement truncated to the type's size after each operation, replacement of x*(m*a)/(m*b) with x*a/b would violate such guarantees.
  2. If an implementation guarantees that integer addition and multiplication will always behave as though they yield some number, but does not guarantee that temporary values will be truncated to any particular size, such an optimization would be valid (a sketch of this case follows the list).
  3. If an implementation guarantees nothing about the effects of overflow and requires that programmers avoid it at all costs, because overflow may cause erroneous behavior even in parts of the program that don't use the result, such an optimization would also be valid.
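To make option #2 concrete, here is a small sketch of my own (not from the answer): if the intermediate product is evaluated in a type wide enough that it cannot overflow, such as long long for a 32-bit int, then x * 50 / 20, x * 10 / 4 and x * 5 / 2 agree exactly for every int x, which is precisely why the reduction would be legal under that guarantee.

#include <assert.h>
#include <limits.h>

int main(void)
{
    /* With a 64-bit intermediate the product cannot overflow for any
     * 32-bit x, and the three fractions reduce to the same value. */
    int samples[] = { 0, 1, -7, 1 << 28, INT_MAX, INT_MIN };
    for (unsigned i = 0; i < sizeof samples / sizeof samples[0]; ++i) {
        long long x = samples[i];
        assert(x * 5 / 2 == x * 10 / 4);
        assert(x * 5 / 2 == x * 50 / 20);
    }
    return 0;
}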

The gcc compiler offers options to either uphold the strong guarantee of option #1 or refrain from offering any guarantees whatsoever, and the possibility of integer overflow disrupting parts of the code that don't use the result isn't merely theoretical. Because option #3 would be recklessly dangerous for programs that receive input from potentially untrustworthy sources, and because gcc doesn't offer any setting between #1 and #3, many programs, including those where option #2 would have met requirements, are built with the -fwrapv flag, which forces option #1.
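One well-known manifestation of that hazard can be sketched as follows (my own illustration, not code from the answer): a program that tries to detect overflow after the fact, which an optimizer is entitled to break because it may assume the signed multiplication never wraps.

#include <limits.h>
#include <stdio.h>

/* Attempted after-the-fact overflow check.  Because signed overflow is
 * undefined by default, an optimizer may assume x * 50 never wraps,
 * fold (x * 50) / 50 back to x, and delete the check entirely; building
 * with -fwrapv (option #1) keeps the check meaningful. */
static int scale_checked(int x)
{
    int p = x * 50;            /* undefined behaviour if this overflows */
    if (p / 50 != x)
        return -1;             /* intended error path                   */
    return p / 20;
}

int main(void)
{
    /* INT_MAX / 10 is large enough that multiplying by 50 overflows
     * a 32-bit int. */
    printf("%d\n", scale_checked(INT_MAX / 10));
    return 0;
}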

supercat answered Oct 05 '22