
Why is this assembly code faster?

I'm experimenting with a lexer, and I found that switching from a while-loop to an if-statement plus a do-while-loop in one part of the program made the code ~20% faster, which seemed crazy. I isolated the difference in the compiler-generated code to these assembly snippets. Does anyone know why the fast code is faster?

In the assembly, 'edi' is the current text position, 'ebx' is the end of the text, and 'isAlpha' is a lookup table that holds 1 if the character is alphabetic and 0 otherwise.

The slow code:

slow_loop:
00401897  cmp   edi,ebx 
00401899  je    slow_done (4018AAh) 
0040189B  movzx eax,byte ptr [edi] 
0040189E  cmp   byte ptr isAlpha (4533E0h)[eax],0 
004018A5  je    slow_done (4018AAh) 
004018A7  inc   edi  
004018A8  jmp   slow_loop (401897h) 
slow_done:

The fast code:

fast_loop:
0040193D  inc   edi  
0040193E  cmp   edi,ebx 
00401940  je    fast_done (40194Eh) 
00401942  movzx eax,byte ptr [edi] 
00401945  cmp   byte ptr isAlpha (4533E0h)[eax],0 
0040194C  jne   fast_loop (40193Dh) 
fast_done:

If I run just these assembly snippets against a megabyte of text consisting only of the letter 'a', the fast code is 30% faster. My guess is that the slow code is slow because of branch misprediction, but I thought that in a loop that would be a one-time cost.

Here's the program that I used to test both snippets:

#include <Windows.h>
#include <cctype>   // for isalpha
#include <string>
#include <iostream>

int main( int argc, char* argv[] )
{
    static char isAlpha[256];
    for ( int i = 0; i < sizeof( isAlpha ); ++i )
        isAlpha[i] = isalpha( i ) ? 1 : 0;

    std::string test( 1024*1024, 'a' );

    const char* start = test.c_str();
    const char* limit = test.c_str() + test.size();

    DWORD slowStart = GetTickCount();
    for ( int i = 0; i < 10000; ++i )
    {
        __asm
        {
            mov edi, start
            mov ebx, limit

            inc edi        ; start at the second byte, matching the fast loop's pre-increment

        slow_loop:
            cmp   edi,ebx
            je    slow_done
            movzx eax,byte ptr [edi]
            cmp   byte ptr isAlpha [eax],0
            je    slow_done
            inc   edi
            jmp   slow_loop

        slow_done:
        }
    }
    DWORD slowEnd = GetTickCount();
    std::cout << "slow in " << ( slowEnd - slowStart ) << " ticks" << std::endl;

    DWORD fastStart = GetTickCount();
    for ( int i = 0; i < 10000; ++i )
    {
        __asm
        {
            mov edi, start
            mov ebx, limit

        fast_loop:
            inc   edi
            cmp   edi,ebx
            je    fast_done
            movzx eax,byte ptr [edi]
            cmp   byte ptr isAlpha [eax],0
            jne   fast_loop

        fast_done:
        }
    }
    DWORD fastEnd = GetTickCount();
    std::cout << "fast in " << ( fastEnd - fastStart ) << " ticks" << std::endl;

    return 0;
}

The output of the test program is

slow in 8455 ticks
fast in 5694 ticks
asked Jun 28 '12 by briangreenery
1 Answer

Sorry, I was not able to reproduce your code exactly with GCC on Linux, but I have some results, and I think the main idea is preserved in my version.

There is a tool from Intel for analysing the performance of code fragments: http://software.intel.com/en-us/articles/intel-architecture-code-analyzer/ (Intel IACA). It is free to download and use.

In my experiment, the report for the slow loop is:

Intel(R) Architecture Code Analyzer Version - 2.0.1
Analyzed File - ./l2_i
Binary Format - 32Bit
Architecture  - SNB
Analysis Type - Throughput

Throughput Analysis Report
--------------------------
Block Throughput: 3.05 Cycles       Throughput Bottleneck: Port5

Port Binding In Cycles Per Iteration:
-------------------------------------------------------------------------
|  Port  |  0   -  DV  |  1   |  2   -  D   |  3   -  D   |  4   |  5   |
-------------------------------------------------------------------------
| Cycles | 0.5    0.0  | 0.5  | 1.0    1.0  | 1.0    1.0  | 0.0  | 3.0  |
-------------------------------------------------------------------------

N - port number or number of cycles resource conflict caused delay, DV - Divide
D - Data fetch pipe (on ports 2 and 3), CP - on a critical path
F - Macro Fusion with the previous instruction occurred

| Num Of |              Ports pressure in cycles               |    |
|  Uops  |  0  - DV  |  1  |  2  -  D  |  3  -  D  |  4  |  5  |    |
---------------------------------------------------------------------
|   1    |           |     |           |           |     | 1.0 | CP | cmp edi,
|   0F   |           |     |           |           |     |     |    | jz 0xb
|   1    |           |     | 1.0   1.0 |           |     |     |    | movzx ebx
|   2    |           |     |           | 1.0   1.0 |     | 1.0 | CP | cmp cl, b
|   0F   |           |     |           |           |     |     |    | jz 0x3
|   1    | 0.5       | 0.5 |           |           |     |     |    | inc edi
|   1    |           |     |           |           |     | 1.0 | CP | jmp 0xfff

For the fast loop:

Throughput Analysis Report
--------------------------
Block Throughput: 2.00 Cycles       Throughput Bottleneck: Port5

Port Binding In Cycles Per Iteration:
-------------------------------------------------------------------------
|  Port  |  0   -  DV  |  1   |  2   -  D   |  3   -  D   |  4   |  5   |
-------------------------------------------------------------------------
| Cycles | 0.5    0.0  | 0.5  | 1.0    1.0  | 1.0    1.0  | 0.0  | 2.0  |
-------------------------------------------------------------------------

N - port number or number of cycles resource conflict caused delay, DV - Divide
D - Data fetch pipe (on ports 2 and 3), CP - on a critical path
F - Macro Fusion with the previous instruction occurred

| Num Of |              Ports pressure in cycles               |    |
|  Uops  |  0  - DV  |  1  |  2  -  D  |  3  -  D  |  4  |  5  |    |
---------------------------------------------------------------------
|   1    | 0.5       | 0.5 |           |           |     |     |    | inc edi
|   1    |           |     |           |           |     | 1.0 | CP | cmp edi,
|   0F   |           |     |           |           |     |     |    | jz 0x8
|   1    |           |     | 1.0   1.0 |           |     |     |    | movzx ebx
|   2    |           |     |           | 1.0   1.0 |     | 1.0 | CP | cmp cl, b
|   0F   |           |     |           |           |     |     |    | jnz 0xfff

So in the slow loop, JMP is an extra instruction on the critical path. Every cmp+jz/jnz pair is merged (macro-fusion) into a single uop. And in my version of the code, the critical resource is Port5, which can execute ALU ops and JMP (and it is the only port with JMP capability), so it must handle both the fused compare-and-branch uops and the unconditional JMP.

PS: If you have no idea where the execution ports are located, there are pictures: first, second; and an article: rwt.

PPS: IACA has some limitations: it models only part of the CPU (the execution units) and does not account for cache misses, branch mispredictions, various penalties, frequency/power changes, OS interrupts, HyperThreading contention for execution units, and many other effects. But it is a useful tool because it can give you a quick look inside the innermost core of a modern Intel CPU. And it only works for inner loops (just like the loops in this question).

answered Sep 20 '22 by osgx