 

For loop performance difference, and compiler optimization


I chose David's answer because he was the only one to address the difference between the for-loops when no optimization flags are set. The other answers demonstrate what happens when the optimization flags are turned on.


Jerry Coffin's answer explained what happens when setting the optimization flags for this example. What remains unanswered is why superCalculationA runs slower than superCalculationB, when B performs one extra memory reference and one addition per iteration. Nemo's post shows the assembler output. I confirmed this by compiling with the -S flag on my PC, a 2.9 GHz Sandy Bridge (i5-2310) running Ubuntu 12.04 64-bit, as suggested by Matteo Italia.
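(For reference, dumping the assembly for inspection looks something like this; the output file names are just examples:)

g++ -S --std=c++11 main.cpp -o main.s          # unoptimized assembly
g++ -S --std=c++11 -O2 main.cpp -o main_O2.s   # with -O2, for comparison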


I was experimenting with for-loop performance when I stumbled upon the following case.

I have the following code that does the same computation in two different ways.

#include <cstdint>
#include <chrono>
#include <cstdio>

using std::uint64_t;

uint64_t superCalculationA(int init, int end)
{
    uint64_t total = 0;
    for (int i = init; i < end; i++)
        total += i;
    return total;
}

uint64_t superCalculationB(int init, int todo)
{
    uint64_t total = 0;
    for (int i = init; i < init + todo; i++)
        total += i;
    return total;
}

int main()
{
    const uint64_t answer = 500000110500000000;

    std::chrono::time_point<std::chrono::high_resolution_clock> start, end;
    double elapsed;

    std::printf("=====================================================\n");

    start = std::chrono::high_resolution_clock::now();
    uint64_t ret1 = superCalculationA(111, 1000000111);
    end = std::chrono::high_resolution_clock::now();
    elapsed = (end - start).count() * ((double) std::chrono::high_resolution_clock::period::num / std::chrono::high_resolution_clock::period::den);
    std::printf("Elapsed time: %.3f s | %.3f ms | %.3f us\n", elapsed, 1e+3*elapsed, 1e+6*elapsed);

    start = std::chrono::high_resolution_clock::now();
    uint64_t ret2 = superCalculationB(111, 1000000000);
    end = std::chrono::high_resolution_clock::now();
    elapsed = (end - start).count() * ((double) std::chrono::high_resolution_clock::period::num / std::chrono::high_resolution_clock::period::den);
    std::printf("Elapsed time: %.3f s | %.3f ms | %.3f us\n", elapsed, 1e+3*elapsed, 1e+6*elapsed);

    if (ret1 == answer)
    {
        std::printf("The first method, i.e. superCalculationA, succeeded.\n");
    }
    if (ret2 == answer)
    {
        std::printf("The second method, i.e. superCalculationB, succeeded.\n");
    }

    std::printf("=====================================================\n");

    return 0;
}
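(Aside: the period-based arithmetic used above to convert the tick count to seconds can be written more compactly with a floating-point duration; both compute the same value, so this is not a change in what is measured:)

// Equivalent to the num/den arithmetic above: seconds as a double.
double elapsed = std::chrono::duration<double>(end - start).count();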

Compiling this code with

g++ main.cpp -o output --std=c++11

leads to the following result:

=====================================================
Elapsed time: 2.859 s | 2859.441 ms | 2859440.968 us
Elapsed time: 2.204 s | 2204.059 ms | 2204059.262 us
The first method, i.e. superCalculationA, succeeded.
The second method, i.e. superCalculationB, succeeded.
=====================================================

My first question is: why is the second loop running 23% faster than the first?

On the other hand, if I compile the code with

g++ main.cpp -o output --std=c++11 -O1

the results improve a lot,

=====================================================
Elapsed time: 0.318 s | 317.773 ms | 317773.142 us
Elapsed time: 0.314 s | 314.429 ms | 314429.393 us
The first method, i.e. superCalculationA, succeeded.
The second method, i.e. superCalculationB, succeeded.
=====================================================

and the difference in time almost disappears.

But I could not believe my eyes when I set the -O2 flag,

g++ main.cpp -o output --std=c++11 -O2

and got this:

=====================================================
Elapsed time: 0.000 s | 0.000 ms | 0.328 us
Elapsed time: 0.000 s | 0.000 ms | 0.208 us
The first method, i.e. superCalculationA, succeeded.
The second method, i.e. superCalculationB, succeeded.
=====================================================

So, my second question is: what is the compiler doing when I set the -O1 and -O2 flags that leads to this gigantic performance improvement?

I checked Optimize Options - Using the GNU Compiler Collection (GCC), but that did not clarify things.


By the way, I am compiling this code with g++ (GCC) 4.9.1.


EDIT to confirm Basile Starynkevitch's assumption

I edited the code, now main looks like this:

#include <cstdlib> // for atoi

int main(int argc, char **argv)
{
    int start = atoi(argv[1]);
    int end   = atoi(argv[2]);
    int delta = end - start + 1;

    std::chrono::time_point<std::chrono::high_resolution_clock> t_start, t_end;
    double elapsed;

    std::printf("=====================================================\n");

    t_start = std::chrono::high_resolution_clock::now();
    uint64_t ret1 = superCalculationB(start, delta);
    t_end = std::chrono::high_resolution_clock::now();
    elapsed = (t_end - t_start).count() * ((double) std::chrono::high_resolution_clock::period::num / std::chrono::high_resolution_clock::period::den);
    std::printf("Elapsed time: %.3f s | %.3f ms | %.3f us\n", elapsed, 1e+3*elapsed, 1e+6*elapsed);

    t_start = std::chrono::high_resolution_clock::now();
    uint64_t ret2 = superCalculationA(start, end);
    t_end = std::chrono::high_resolution_clock::now();
    elapsed = (t_end - t_start).count() * ((double) std::chrono::high_resolution_clock::period::num / std::chrono::high_resolution_clock::period::den);
    std::printf("Elapsed time: %.3f s | %.3f ms | %.3f us\n", elapsed, 1e+3*elapsed, 1e+6*elapsed);

    std::printf("Results were %s\n", (ret1 == ret2) ? "the same!" : "different!");
    std::printf("=====================================================\n");

    return 0;
}

These modifications really increased the computation time for both -O1 and -O2; both give me around 620 ms now, which confirms that -O2 really was doing some of the computation at compile time.

I still do not understand what these flags are doing to improve performance, and -Ofast does even better, at about 320 ms.
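For intuition about what -O2 could do before this edit (a sketch of the effect, not the exact transformation GCC performs): with both loop bounds known at compile time, the entire summation reduces to Gauss's closed form, which can be evaluated during compilation. closedFormSum is a name made up for illustration:

#include <cstdint>

// Sum of all integers i with init <= i < end, in closed form.
// (end - init) * (init + end - 1) is always even, so dividing by 2 is exact.
constexpr std::uint64_t closedFormSum(std::int64_t init, std::int64_t end)
{
    return static_cast<std::uint64_t>((end - init) * (init + end - 1) / 2);
}

// Evaluated entirely at compile time:
static_assert(closedFormSum(111, 1000000111) == 500000110500000000ULL,
              "matches the hard-coded answer in main()");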

Also notice that I changed the order in which functions A and B are called to test Jerry Coffin's assumption. Compiling this code with no optimization flags still gives me around 2.2 s for B and 2.8 s for A, so I figure it is not a cache issue. Just to reinforce: I am not talking about optimization in the first case (the one with no flags); I just want to know what makes the second loop run faster than the first.

asked Aug 29 '14 by jcmonteiro


1 Answer

My immediate guess would be that the second is faster, not because of the changes you made to the loop, but because it's second, so the cache is already primed when it runs.

To test the theory, I re-arranged your code to reverse the order in which the two calculations were called:

#include <cstdint>
#include <chrono>
#include <cstdio>

using std::uint64_t;

uint64_t superCalculationA(int init, int end)
{
    uint64_t total = 0;
    for (int i = init; i < end; i++)
        total += i;
    return total;
}

uint64_t superCalculationB(int init, int todo)
{
    uint64_t total = 0;
    for (int i = init; i < init + todo; i++)
        total += i;
    return total;
}

int main()
{
    const uint64_t answer = 500000110500000000;

    std::chrono::time_point<std::chrono::high_resolution_clock> start, end;
    double elapsed;

    std::printf("=====================================================\n");

    start = std::chrono::high_resolution_clock::now();
    uint64_t ret2 = superCalculationB(111, 1000000000);
    end = std::chrono::high_resolution_clock::now();
    elapsed = (end - start).count() * ((double) std::chrono::high_resolution_clock::period::num / std::chrono::high_resolution_clock::period::den);
    std::printf("Elapsed time: %.3f s | %.3f ms | %.3f us\n", elapsed, 1e+3*elapsed, 1e+6*elapsed);

    start = std::chrono::high_resolution_clock::now();
    uint64_t ret1 = superCalculationA(111, 1000000111);
    end = std::chrono::high_resolution_clock::now();
    elapsed = (end - start).count() * ((double) std::chrono::high_resolution_clock::period::num / std::chrono::high_resolution_clock::period::den);
    std::printf("Elapsed time: %.3f s | %.3f ms | %.3f us\n", elapsed, 1e+3*elapsed, 1e+6*elapsed);

    if (ret1 == answer)
    {
        std::printf("The first method, i.e. superCalculationA, succeeded.\n");
    }
    if (ret2 == answer)
    {
        std::printf("The second method, i.e. superCalculationB, succeeded.\n");
    }

    std::printf("=====================================================\n");

    return 0;
}

The result I got was:

=====================================================
Elapsed time: 0.286 s | 286.000 ms | 286000.000 us
Elapsed time: 0.271 s | 271.000 ms | 271000.000 us
The first method, i.e. superCalculationA, succeeded.
The second method, i.e. superCalculationB, succeeded.
=====================================================

So, when version A runs first, it's slower. When version B runs first, it's slower.

To confirm, I added an extra call to superCalculationB before doing the timing on either version A or B. After that, I tried running the program three times. For those three runs, I'd judge the results a tie (version A was faster once and version B was faster twice, but neither won dependably nor by a wide enough margin to be meaningful).
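In code, the warm-up amounted to something like this (a sketch of the idea, not the exact line from my test):

// Untimed warm-up call so caches and branch predictors are primed
// before either timed measurement; the volatile keeps the call from
// being optimized away.
volatile uint64_t sink = superCalculationB(111, 1000000000);
(void)sink;
// ...then time superCalculationA and superCalculationB as before.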

That doesn't prove that it's actually a cache situation as such, but does give a pretty strong indication that it's a matter of the order in which the functions are called, not the difference in the code itself.

As far as what the compiler does to make the code faster: the main thing it does is unroll a few iterations of the loop. We can get pretty much the same effect if we unroll a few iterations by hand:

uint64_t superCalculationC(int init, int end)
{
    int f_end = end - ((end - init) & 7); // bound for the unrolled loop; the
                                          // remaining (end - init) % 8 iterations
                                          // are left for the cleanup loop below

    int i;
    uint64_t total = 0;
    for (i = init; i < f_end; i += 8) {
        total += i;
        total += i + 1;
        total += i + 2;
        total += i + 3;
        total += i + 4;
        total += i + 5;
        total += i + 6;
        total += i + 7;
    }

    for (; i < end; i++)
        total += i;

    return total;
}

This has a property that some might find rather odd: it's actually faster when compiled with -O2 than with -O3. When compiled with -O2, it's also about five times faster than either of the other two is when compiled with -O3.

The primary reason for the ~5x speed gain compared to the compiler's loop unrolling is that we've unrolled the loop somewhat differently (and more intelligently, IMO) than the compiler does. We compute f_end to tell us how many times the unrolled loop should execute. We execute those iterations, then we execute a separate loop to "clean up" any odd iterations at the end.

The compiler instead generates code that's roughly equivalent to something like this:

for (i = init; i < end; i += 8) {
    total += i;
    if (i + 1 >= end) break;
    total += i + 1;
    if (i + 2 >= end) break;
    total += i + 2;
    // ...
}

Although this is quite a bit faster than when the loop hasn't been unrolled at all, it's quite a bit faster still to eliminate those extra checks from the main loop, and execute a separate loop for any odd iterations.

Given such a trivial loop body being executed such a large number of times, you can also improve speed (when compiled with -O2) still further by unrolling more iterations of the loop. With 16 iterations unrolled, it was about twice as fast as the code above with 8 iterations unrolled:

uint64_t superCalculationC(int init, int end)
{
    int first_end = end - ((end - init) & 0xf);

    int i;
    uint64_t total = 0;
    for (i = init; i < first_end; i += 16) {
        total += i + 0;
        total += i + 1;
        total += i + 2;

        // code for `i+3` through `i+13` goes here

        total += i + 14;
        total += i + 15;
    }

    for (; i < end; i++)
        total += i;

    return total;
}

I haven't tried to explore the limit of gains from unrolling this particular loop, but unrolling 32 iterations nearly doubles the speed again. Depending on the processor you're using, you might get some small gains by unrolling 64 iterations, but I'd guess we're starting to approach the limits--at some point, performance gains will probably level off, then (if you unroll still more iterations) probably drop off, quite possibly dramatically.
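If you want to experiment with different unroll factors without copy-pasting lines, one option is to parameterize the factor at compile time. This is a sketch of my own, not code from the measurements above, and it relies on the compiler flattening the fixed-trip-count inner loop, which is compiler-dependent:

template <int N>  // unroll factor
uint64_t superCalculationN(int init, int end)
{
    static_assert(N > 0 && (N & (N - 1)) == 0, "N must be a power of two");
    int f_end = end - ((end - init) & (N - 1));  // multiple-of-N trip count
    uint64_t total = 0;
    int i = init;
    for (; i < f_end; i += N)
        for (int k = 0; k < N; ++k)  // trip count is a compile-time constant,
            total += i + k;          // so the optimizer can fully unroll it
    for (; i < end; ++i)             // clean up the leftover iterations
        total += i;
    return total;
}

// e.g. superCalculationN<16>(111, 1000000111)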

Summary: with -O3 the compiler unrolls a number of iterations of the loop. This is extremely effective in this case, primarily because we have many executions of nearly the most trivial possible loop body. Unrolling the loop by hand is even more effective than letting the compiler do it--we can unroll more intelligently, and we can simply unroll more iterations than the compiler does. The extra intelligence can give us an improvement of around 5:1, and the extra iterations another 4:1 or so¹ (at the expense of somewhat longer, slightly less readable code).

Final caveat: as always with optimization, your mileage may vary. Differences in compilers and/or processors mean you're likely to get at least somewhat different results than I did. I'd expect my hand-unrolled loop to be substantially faster than the other two in most cases, but exactly how much faster is likely to vary.


1. But note that this is comparing the hand-unrolled loop with -O2 to the original loop with -O3. When compiled with -O3, the hand-unrolled loop runs much more slowly.

answered Oct 15 '22 by Jerry Coffin