
Logical operations on multiple modulus operations optimised?

It is easy to see that:

(i % 3 == 0) && (i % 5 == 0)

Can be simplified to:

(i % 15 == 0)

Yet looking at the output of GCC, it seems this optimisation is not performed even at high optimisation levels.

Do any compilers do these sorts of optimisations, or is there a good reason why those two tests are not semantically equivalent?
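
For reference, a quick brute-force check confirms that the two tests agree (this is just an illustrative sanity test I put together, not code from the original program):

#include <assert.h>
#include <stdio.h>

int main(void)
{
    /* The combined test and the single % 15 test agree for every value checked. */
    for (int i = -1000000; i <= 1000000; i++)
        assert(((i % 3 == 0) && (i % 5 == 0)) == (i % 15 == 0));
    puts("equivalent for all values tested");
    return 0;
}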

Edit: In response to those who say this is a fringe case, the following is a similar case:

(i < 3) && (i < 5)

Any number less than 3 must also be less than 5, so the second test is redundant.

I would also like to add the following in response to the answer suggesting that the compiler cannot know whether the environment is affected. Look at this code:

#include <stdio.h>

void foo(void)
{
    int i;
    for (i = 0; i <= 10; i++)
    {
        if (i > 20)   /* can never be true: i never exceeds 10 */
        {
            puts("Hi");
        }
    }
}

The whole function is reduced to "repz ret" by GCC with -O2. That requires far more complex analysis than anything I am talking about.

Asked by Matt, Apr 18 '12

3 Answers

Ignore all the silly answers claiming this is terribly difficult/impossible for a compiler to do. I see no reason why it would be difficult, but presumably either nobody thought of doing it or nobody thought it was sufficiently important to optimize. If you want a better answer than this, you'll need to report it on the GCC bug tracker as a request for enhancement and see what the developers say.

Answered by R.. GitHub STOP HELPING ICE, Sep 28 '22


Your example is rather simple and is indeed easy to condense into a single operation. The general case of a statement like this, though, is not as simple. Take the following, for example:

((++i) % 3 == 0) && ((++i) % 5 == 0)

This variation cannot be simplified down to a single modulus operation as easily (I know this statement has all sorts of other problems with it, but the point is that the problem isn't as simple when you're using anything other than a simple variable reference). Compilers generally won't look to see that your case involves only simple variables and optimize it differently than the general case; they try to keep optimizations consistent and predictable whenever possible.
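
To make that concrete, here is a small sketch (the next_value function is invented purely for illustration). The first pair of tests could legally be folded into a single % 15 check because both operands are provably the same value; the second pair could not, because each call may yield a different value:

#include <stdio.h>

/* Invented helper with a side effect: calling it twice is not the same as
 * calling it once, so its two results cannot be funneled into one test. */
static int next_value(void)
{
    static int state = 0;
    return state++;
}

int main(void)
{
    int i = 30;

    /* Both operands are the same plain variable, so this is foldable to i % 15 == 0. */
    printf("%d\n", (i % 3 == 0) && (i % 5 == 0));

    /* Each operand may be a different value, so no single modulus test is equivalent. */
    printf("%d\n", (next_value() % 3 == 0) && (next_value() % 5 == 0));

    return 0;
}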

Update: The extra cases that you added to your question fall into a completely different class of optimization than your original code snippet. They are both optimized away because they contain code that is unreachable, which can be proven at compile time. Your original question involved rewriting two operations into a single, equivalent operation that is distinct from the original. The two snippets you added don't rewrite existing code; they only remove code that can never be executed. Compilers are typically very good at identifying and removing dead code.

The optimization that you are seeking is a form of mathematical strength reduction. Most compilers offer some level of MSR optimizations, although how detailed they get will vary based on the compiler and the capabilities of the target platform. For example, a compiler targeting a CPU architecture that does not have a modulus instruction (meaning a potentially lengthy library function has to be used instead) may optimize statements like these more aggressively. If the target CPU has hardware modulus support, the compiler writer might deem the two or three saved instructions too small a benefit to be worth the time and effort it would take to implement and test the optimization improvements. I have seen this in the past with certain operations that can be done in a single instruction on an x86 CPU (for example) but would require dozens of instructions on a RISC CPU.
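
To give a rough picture of what such a strength reduction can look like, the sketch below uses the well-known multiplicative-inverse divisibility test that compilers commonly generate for an unsigned n % 15 == 0. The exact constants and instruction sequence are compiler- and target-dependent, so treat this only as an illustration:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* n % 15 == 0 rewritten as a multiply and compare: 0xEEEEEEEF is the
 * multiplicative inverse of 15 modulo 2^32, and 0x11111111 is
 * (2^32 - 1) / 15. The technique works for any odd divisor; the
 * constants shown here are the ones for 15. */
static int divisible_by_15(uint32_t n)
{
    return n * 0xEEEEEEEFu <= 0x11111111u;
}

int main(void)
{
    for (uint32_t n = 0; n < 10000000u; n++)
        assert(divisible_by_15(n) == (n % 15u == 0));
    puts("multiply-and-compare matches n % 15 == 0");
    return 0;
}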

Answered by bta, Sep 26 '22


Honestly, that's a very specific case. If any part of the expression changes, the optimisation no longer applies. I think it's a question of costs vs gains, and the costs of implementing this optimisation (increased compile time, increased maintenance difficulty) outweigh the benefits (a few fewer fast ops in rare circumstances).

In general, mathematical refactoring cannot be applied due to precision limitations and the possibility of overflow, though I don't think either is an issue here.
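
As a small illustration of the precision point (assuming IEEE 754 doubles and no -ffast-math), merely re-associating a floating-point sum changes its value, which is one reason compilers are conservative about mathematical refactoring:

#include <stdio.h>

int main(void)
{
    /* The same three constants, summed in two different orders. */
    double a = (0.1 + 0.2) + 0.3;
    double b = 0.1 + (0.2 + 0.3);

    /* With IEEE 754 doubles these differ in the last bit, so "equal" prints 0. */
    printf("a = %.17g\nb = %.17g\nequal: %d\n", a, b, a == b);
    return 0;
}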

Answered by ikegami, Sep 27 '22