What is the fastest way to perform a multithreaded SIMD operation explicitly?

Using intrinsics is a common way to vectorize with SIMD. For example, I can add eight integers with a single _mm256_add_epi32 instruction. It needs two _mm256_load_si256 calls before the addition and one _mm256_store_si256 afterwards, as follows:

__m256i vec1 = _mm256_load_si256((__m256i *)&A[0]); // almost 5 cycles
__m256i vec2 = _mm256_load_si256((__m256i *)&B[0]); // almost 5 cycles
__m256i vec3 = _mm256_add_epi32( vec1 , vec2); // almost 1 cycle
_mm256_store_si256((__m256i *)&C[0], vec3); // almost 5

This performs the instructions on a single core of the CPU. My Core i7 has 8 logical cores (4 physical); I want to send the operations to all cores like this:

int i_0, i_1, i_2, i_3, i_4, i_5, i_6, i_7 ; // starting indices of each core's chunk
//core 0
__m256i vec1_0 = _mm256_load_si256((__m256i *)&A[i_0]);  
__m256i vec2_0 = _mm256_load_si256((__m256i *)&B[i_0]); 
__m256i vec3_0 = _mm256_add_epi32( vec1_0 , vec2_0); 
_mm256_store_si256((__m256i *)&C[i_0], vec3_0);

//core 1
__m256i vec1_1 = _mm256_load_si256((__m256i *)&A[i_1]);
__m256i vec2_1 = _mm256_load_si256((__m256i *)&B[i_1]);
__m256i vec3_1 = _mm256_add_epi32( vec1_1 , vec2_1);
_mm256_store_si256((__m256i *)&C[i_1], vec3_1);

//core 2
__m256i vec1_2 = _mm256_load_si256((__m256i *)&A[i_2]);
__m256i vec2_2 = _mm256_load_si256((__m256i *)&B[i_2]);
__m256i vec3_2 = _mm256_add_epi32( vec1_2 , vec2_2);
_mm256_store_si256((__m256i *)&C[i_2], vec3_2);

//core 3
__m256i vec1_3 = _mm256_load_si256((__m256i *)&A[i_3]);
__m256i vec2_3 = _mm256_load_si256((__m256i *)&B[i_3]);
__m256i vec3_3 = _mm256_add_epi32( vec1_3 , vec2_3);
_mm256_store_si256((__m256i *)&C[i_3], vec3_3);

//core 4
__m256i vec1_4 = _mm256_load_si256((__m256i *)&A[i_4]);
__m256i vec2_4 = _mm256_load_si256((__m256i *)&B[i_4]);
__m256i vec3_4 = _mm256_add_epi32( vec1_4 , vec2_4);
_mm256_store_si256((__m256i *)&C[i_4], vec3_4);

//core 5
__m256i vec1_5 = _mm256_load_si256((__m256i *)&A[i_5]);
__m256i vec2_5 = _mm256_load_si256((__m256i *)&B[i_5]);
__m256i vec3_5 = _mm256_add_epi32( vec1_5 , vec2_5);
_mm256_store_si256((__m256i *)&C[i_5], vec3_5);

//core 6
__m256i vec1_6 = _mm256_load_si256((__m256i *)&A[i_6]);
__m256i vec2_6 = _mm256_load_si256((__m256i *)&B[i_6]);
__m256i vec3_6 = _mm256_add_epi32( vec1_6 , vec2_6);
_mm256_store_si256((__m256i *)&C[i_6], vec3_6);

//core 7
__m256i vec1_7 = _mm256_load_si256((__m256i *)&A[i_7]);
__m256i vec2_7 = _mm256_load_si256((__m256i *)&B[i_7]);
__m256i vec3_7 = _mm256_add_epi32( vec1_7 , vec2_7);
_mm256_store_si256((__m256i *)&C[i_7], vec3_7);

POSIX threads are available, and OpenMP could also be useful here. But creating and maintaining the threads takes too much time compared to the roughly 5+5+1 cycles of this operation. The data chunks are independent, so I don't need to synchronize any shared memory. What is the fastest explicit method for implementing this operation?

I work on general-purpose processors (GPPs), so GPUs are probably not the answer. I also want to implement a library, so a compiler-based solution might be a competitor. The problem is big enough for multithreading. It's for my research, so I can change the problem to fit the concept. I want to implement a library and compare it with other solutions such as OpenMP, and hopefully my library will be faster than the current solutions. GCC 6.3 / Clang 3.8, Linux Mint, Skylake.

Thanks in advance.

Hossein Amiri asked Feb 05 '23 09:02


1 Answer

If your problem is large, you must multithread.

You can choose either OpenMP or pthreads; they will give you similar performance (probably a bit better with pthreads, but that will be non-portable and far more complex to maintain).

Your code will be limited by memory bandwidth, not by compute.

In order to achieve maximum throughput, you need to interleave independent memory operations through multi-threading.

A very simple solution like

extern "C" void add(int* a, int* b, int* c, int N) {
    #pragma omp parallel for
    for(int i = 0; i < N; ++i) {
        a[i] = b[i] + c[i];
    }
}

will likely give you acceptable performance on all systems with any compiler.

As a matter of fact, letting the compiler optimize will likely give you good performance, and it certainly helps you write readable code.

But sometimes even the best compiler doesn't give satisfactory results (always inspect the assembly of your performance-critical sections).

Compilers need help, and sometimes you have to write the assembly on your own.

Here is the path I would follow to optimize this loop until I get the results I want.

First, there are classic optimization tricks you can implement:

  1. Constness and aliasing

Provide constness and prevent aliasing through the __restrict keyword:

extern "C" void add(int* __restrict a, const int* __restrict b, const int* __restrict c, int N) {
    #pragma omp parallel for
    for(int i = 0; i < N; ++i) {
        a[i] = b[i] + c[i];
    }
}

This will help the compiler, since it will know that a, b, and c can't alias each other.

  2. Alignment information:

Tell the compiler your pointers are properly aligned:

#define RESTRICT __restrict

typedef __attribute__((aligned(32))) int* intptr;

extern "C" void add(intptr RESTRICT a, const intptr RESTRICT b, const intptr RESTRICT c, int N) {
    #pragma omp parallel for
    for(int i = 0; i < N; ++i) {
        a[i] = b[i] + c[i];
    }
}

This will also help the compiler generate aligned vector loads (vmovdqa) instead of unaligned ones (vmovdqu).
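Of course, the aligned(32) promise only holds if the buffers really are 32-byte aligned. A minimal allocation sketch (posix_memalign is one option; the size N here is just an example):

#include <cstdlib>

const int N = 1024 * 1024; // example size, multiple of 8
int *a, *b, *c;
// posix_memalign returns 0 on success and gives 32-byte aligned storage,
// matching the aligned(32) attribute promised to the compiler
if (posix_memalign((void**) &a, 32, N * sizeof(int)) ||
    posix_memalign((void**) &b, 32, N * sizeof(int)) ||
    posix_memalign((void**) &c, 32, N * sizeof(int))) {
    abort(); // allocation failed
}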

  3. Unroll inner loops (if you can):

If you know your problem size is a multiple of 256 bits, you can even unroll an inner loop:

#define RESTRICT __restrict

typedef __attribute__((aligned(32))) int* intptr;

extern "C" void add(intptr RESTRICT a, const intptr RESTRICT b, const intptr RESTRICT c, int N) {
    #pragma omp parallel for
    for(int i = 0; i < N; i += 8) {
        #pragma unroll
        for(int k = 0; k < 8; ++k)
            a[i+k] = b[i+k] + c[i+k];
    }
}

With that code, Clang 4.0 gives quite neat assembly:

...
 vmovdqu ymm0, ymmword ptr [rdx + 4*rcx]
 vpaddd  ymm0, ymm0, ymmword ptr [rsi + 4*rcx]
 vmovdqu ymmword ptr [rdi + 4*rcx], ymm0
...

For some reason, you may need to tweak the attributes and pragmas to get the same result with other compilers.
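For instance, GCC and Clang also accept __builtin_assume_aligned as an alternative to the typedef attribute (a sketch; behaviour with other compilers and versions may differ):

extern "C" void add(int* __restrict a, const int* __restrict b, const int* __restrict c, int N) {
    // Promise 32-byte alignment to the compiler without changing the pointer types
    int* va = (int*) __builtin_assume_aligned(a, 32);
    const int* vb = (const int*) __builtin_assume_aligned(b, 32);
    const int* vc = (const int*) __builtin_assume_aligned(c, 32);
    #pragma omp parallel for
    for(int i = 0; i < N; ++i) {
        va[i] = vb[i] + vc[i];
    }
}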

  4. Intrinsics

If you want to be sure you get the right assembly, then you have to go down to intrinsics / assembly.

Something simple like:

#define RESTRICT __restrict

typedef __attribute__((aligned(32))) int* intptr;

extern "C" void add(intptr RESTRICT a, const intptr RESTRICT b, const intptr RESTRICT c, int N) {
    #pragma omp parallel for
    for(int i = 0; i < N; i += 8) {
        __m256i vb = _mm256_load_si256((__m256i*) (b + i));
        __m256i vc = _mm256_load_si256((__m256i*) (c + i));
        _mm256_store_si256((__m256i*) (a + i), _mm256_add_epi32(vb, vc));
    }
}
  5. Non-temporal store:

As a final optimization, you can use a non-temporal hint on the store instruction, since no other iteration of the loop will read the value you just wrote:

typedef __attribute__((aligned(32))) int* intptr;
extern "C" void add(intptr RESTRICT a, const intptr RESTRICT b, const intptr RESTRICT c, int N) {
    #pragma omp parallel for
    for(int i = 0; i < N; i += 8) {
        __m256i vb = _mm256_load_si256((__m256i*) (b + i));
        __m256i vc = _mm256_load_si256((__m256i*) (c + i));
        _mm256_stream_si256((__m256i*) (a + i), _mm256_add_epi32(vb, vc));
    }
}

which gives you this assembly:

.L3:
        vmovdqa ymm0, YMMWORD PTR [rdx+rax]
        vpaddd  ymm0, ymm0, YMMWORD PTR [rsi+rax]
        vmovntdq        YMMWORD PTR [rdi+rax], ymm0
        add     rax, 32
        cmp     rcx, rax
    jne     .L3
    vzeroupper

If you are worried about the cmp instruction at every step, you can unroll more steps in your loop, but branch prediction does a pretty good job on modern processors.
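For example, a 2x unrolled variant of the intrinsics loop might look like this (a sketch; it assumes N is a multiple of 16):

#define RESTRICT __restrict

typedef __attribute__((aligned(32))) int* intptr;

extern "C" void add(intptr RESTRICT a, const intptr RESTRICT b, const intptr RESTRICT c, int N) {
    #pragma omp parallel for
    for(int i = 0; i < N; i += 16) { // two 8-int vectors per iteration, halving loop overhead
        __m256i vb0 = _mm256_load_si256((__m256i*) (b + i));
        __m256i vc0 = _mm256_load_si256((__m256i*) (c + i));
        __m256i vb1 = _mm256_load_si256((__m256i*) (b + i + 8));
        __m256i vc1 = _mm256_load_si256((__m256i*) (c + i + 8));
        _mm256_stream_si256((__m256i*) (a + i),     _mm256_add_epi32(vb0, vc0));
        _mm256_stream_si256((__m256i*) (a + i + 8), _mm256_add_epi32(vb1, vc1));
    }
}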

[EDIT: add pthread] As stated above, pthreads are a bit painful to manage... Here is a fully functional example with pthreads:

#include <pthread.h>
#include <cstdlib>
#include <cstdio>
#include <immintrin.h>

typedef struct AddStruct {
    int *a, *b, *c;
    int N;
} AddStruct_t;

void* add(void* s);

int main() {
    const int N = 1024*1024*32; // out of cache
    int *a, *b, *c;
    int err;
    err = posix_memalign((void**) &a, 32, N*sizeof(int));
    err = posix_memalign((void**) &b, 32, N*sizeof(int));
    err = posix_memalign((void**) &c, 32, N*sizeof(int));
    for(int i = 0; i < N; ++i) {
        a[i] = 0;
        b[i] = 1;
        c[i] = i;
    }
    int slice = N / 8;
    pthread_t threads[8];
    AddStruct_t arguments[8];
    for(int i = 0; i < 8; ++i) {
        arguments[i].a = a + slice * i;
        arguments[i].b = b + slice * i;
        arguments[i].c = c + slice * i;
        arguments[i].N = slice;
    }

    for(int i = 0; i < 8; ++i) {
        if(pthread_create(&threads[i], NULL, add, &arguments[i])) {
            fprintf(stderr, "ERROR CREATING THREAD %d\n", i);
            abort();
        }
    }

    for(int i = 0; i < 8; ++i) {
        pthread_join(threads[i], NULL);
    }

    for(int i = 0; i < N; ++i) {
        if(a[i] != i + 1) {
            fprintf(stderr, "ERROR AT %d: expected %d, actual %d\n", i, i+1, a[i]);
            abort();
        }
    }

    fprintf(stdout, "OK\n");
}

void* add(void* v) {
    AddStruct_t* s = (AddStruct_t*) v;
    for(int i = 0; i < s->N; i += 8) {
        __m256i vb = _mm256_load_si256((__m256i*) (s->b + i));
        __m256i vc = _mm256_load_si256((__m256i*) (s->c + i));
        _mm256_stream_si256((__m256i*) (s->a + i), _mm256_add_epi32(vb, vc));
    }
    return NULL; // pthread start routines must return a value
}

This code achieves 34 GB/s on my Xeon E5-1620 v3 with DDR4 memory @ 2133 MHz, while the simple solution at the beginning reaches 33 GB/s.
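If you want to reproduce such a measurement, a rough timing sketch could look like this (the 'measure' helper is just for illustration, it assumes one of the add variants above is in scope, and it counts two reads plus one write per element as traffic):

#include <chrono>
#include <cstdio>

// Times one call to add() and reports the resulting memory throughput
void measure(int* a, int* b, int* c, int N) {
    auto t0 = std::chrono::high_resolution_clock::now();
    add(a, b, c, N);
    auto t1 = std::chrono::high_resolution_clock::now();
    double seconds = std::chrono::duration<double>(t1 - t0).count();
    double bytes = 3.0 * N * sizeof(int); // read b, read c, write a
    printf("%.2f GB/s\n", bytes / seconds / 1e9);
}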

All those efforts to save 3% :). But sometimes those 3% can be critical.

Please note that the memory initialization should be performed by the same core that will later perform the compute (this is particularly true on NUMA systems) in order to avoid page migrations.
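With the OpenMP version, a minimal first-touch sketch could look like this (it reuses the a, b, c, N buffers from above and assumes the compute loop later uses the same static schedule, so each thread touches the pages it will compute on):

// First-touch initialization: each thread writes the pages it will later
// read/write in the compute loop, so those pages get mapped on its NUMA node.
#pragma omp parallel for schedule(static)
for(int i = 0; i < N; ++i) {
    a[i] = 0;
    b[i] = 1;
    c[i] = i;
}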

Regis Portalez answered Feb 07 '23 07:02