
Optimizing code using Intel SSE intrinsics for vectorization

Tags:

c

sse

sse3

sse4

This is my very first time working with SSE intrinsics. I am trying to convert a simple piece of code into a faster version using Intel SSE intrinsics (up to SSE4.2), but I am running into a number of errors.

The scalar version of the code is: (simple matrix multiplication)

    void mm(int n, double *A, double *B, double *C)
    {
        int i, j, k;
        double tmp;

        for (i = 0; i < n; i++)
            for (j = 0; j < n; j++) {
                tmp = 0.0;
                for (k = 0; k < n; k++)
                    tmp += A[n*i+k] * B[n*k+j];
                C[n*i+j] = tmp;
            }
    }

This is my version: I have included #include <ia32intrin.h>

    void mm_sse(int n, double *A, double *B, double *C)
    {
        int i, j, k;
        double tmp;
        __m128d a_i, b_i, c_i;

        for (i = 0; i < n; i++)
            for (j = 0; j < n; j++) {
                tmp = 0.0;
                for (k = 0; k < n; k += 4)
                    a_i = __mm_load_ps(&A[n*i+k]);
                    b_i = __mm_load_ps(&B[n*k+j]);
                    c_i = __mm_load_ps(&C[n*i+j]);

                    __m128d tmp1 = __mm_mul_ps(a_i, b_i);
                    __m128d tmp2 = __mm_hadd_ps(tmp1, tmp1);
                    __m128d tmp3 = __mm_add_ps(tmp2, tmp3);
                    __mm_store_ps(&C[n*i+j], tmp3);
            }
    }

Where am I going wrong with this? I am getting several errors like this:

mm_vec.c(84): error: a value of type "int" cannot be assigned to an entity of type "__m128d" a_i = __mm_load_ps(&A[n*i+k]);

This is how I am compiling: icc -O2 mm_vec.c -o vec

Can someone please help me convert this code correctly? Thanks!

UPDATE:

According to your suggestions, I have made the following changes:

    void mm_sse(int n, float *A, float *B, float *C)
    {
        int i, j, k;
        float tmp;
        __m128 a_i, b_i, c_i;

        for (i = 0; i < n; i++)
            for (j = 0; j < n; j++) {
                tmp = 0.0;
                for (k = 0; k < n; k += 4)
                    a_i = _mm_load_ps(&A[n*i+k]);
                    b_i = _mm_load_ps(&B[n*k+j]);
                    c_i = _mm_load_ps(&C[n*i+j]);

                    __m128 tmp1 = _mm_mul_ps(a_i, b_i);
                    __m128 tmp2 = _mm_hadd_ps(tmp1, tmp1);
                    __m128 tmp3 = _mm_add_ps(tmp2, tmp3);
                    _mm_store_ps(&C[n*i+j], tmp3);
            }
    }

But now I seem to be getting a segmentation fault. I suspect this is because I am not accessing the array subscripts properly for arrays A, B and C. I am very new to this and not sure how to proceed.

Please help me determine the correct approach towards handling this code.

asked Jun 08 '12 by PGOnTheGo


1 Answer

The error you're seeing is because you have too many underscores in the function names, e.g.:

__mm_mul_ps

should be:

_mm_mul_ps // Just one underscore up front

Because the compiler has not seen a declaration for these misspelled names, it treats them as implicitly declared functions returning int, which is exactly what the error message says.

Beyond this, though, there are further problems: you seem to be mixing calls to the double and single-precision float variants of the same instructions.

For example you have:

__m128d a_i, b_i, c_i;

but you call:

__mm_load_ps(&A[n*i+k]);

which returns a __m128, not a __m128d. You wanted to call:

_mm_load_pd

instead. Likewise for the other instructions if you want them to work on pairs of doubles.
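For reference, here is a minimal sketch of what a working double-precision version might look like (assumptions: the name mm_sse2 is made up, n is a multiple of 2, and A, B and C are all 16-byte aligned). It vectorizes over j rather than k, so every load and store touches contiguous memory, avoiding the strided column accesses to B that the original inner loop attempts:

```c
#include <emmintrin.h>  /* SSE2: double-precision intrinsics */

/* C[i][j..j+1] += A[i][k] * B[k][j..j+1], accumulated over k.
   Requires n % 2 == 0 and 16-byte aligned A, B, C. */
void mm_sse2(int n, double *A, double *B, double *C)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j += 2) {
            __m128d c = _mm_setzero_pd();
            for (int k = 0; k < n; k++) {
                __m128d a = _mm_set1_pd(A[n*i + k]);   /* broadcast A[i][k] */
                __m128d b = _mm_load_pd(&B[n*k + j]);  /* contiguous pair */
                c = _mm_add_pd(c, _mm_mul_pd(a, b));
            }
            _mm_store_pd(&C[n*i + j], c);
        }
}
```

Broadcasting the scalar A[n*i+k] with _mm_set1_pd also sidesteps the horizontal-add step entirely; hadd-based reductions are rarely the fastest way to write this kind of loop.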


If you're seeing unexplained segmentation faults in SSE code I'd be inclined to guess that you've got memory alignment problems: pointers passed to SSE intrinsics (mostly1) need to be 16-byte aligned. You can check this with a simple assert in your code, or check it in a debugger (the last hexadecimal digit of the pointer should be 0 if it's aligned properly).
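Such an assert might look like this (a sketch; ASSERT_ALIGNED16 is a made-up name, not a standard macro):

```c
#include <assert.h>
#include <stdint.h>

/* Fails fast if p is not 16-byte aligned, before an aligned
   SSE load/store dereferences it and faults. */
#define ASSERT_ALIGNED16(p) assert(((uintptr_t)(p) & 0xF) == 0)
```

You would then drop ASSERT_ALIGNED16(A); etc. at the top of the function before the first _mm_load_ps.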

If it isn't aligned right you need to make sure it is. For things not allocated with new/malloc() you can do this with a compiler extension (e.g. with gcc):

float a[16] __attribute__ ((aligned (16)));

This works provided your version of gcc has a maximum alignment large enough to support it, with a few other caveats about stack alignment. For dynamically allocated storage you'll want to use a platform-specific extension, e.g. posix_memalign, to allocate suitably aligned storage:

    float *a = NULL;
    /* posix_memalign takes a void **, hence the cast */
    posix_memalign((void **)&a, __alignof__(__m128), sizeof(float) * 16);

(I think there might be nicer, portable ways of doing this with C++11 but I'm not 100% sure on that yet).

1 There are some instructions which allow you to do unaligned loads and stores, but they're terribly slow compared to aligned loads and worth avoiding if at all possible.
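If you genuinely can't guarantee alignment, those unaligned variants (_mm_loadu_ps / _mm_storeu_ps) accept any address. A small sketch (double_from is a made-up helper name):

```c
#include <xmmintrin.h>  /* SSE: _mm_loadu_ps / _mm_storeu_ps */

/* Doubles four consecutive floats starting at p; p does not need
   to be 16-byte aligned because the "u" variants are used. */
void double_from(float *p)
{
    __m128 v = _mm_loadu_ps(p);
    _mm_storeu_ps(p, _mm_add_ps(v, v));
}
```

Calling double_from(&buf[1]) on an aligned buf works fine here, whereas the aligned _mm_load_ps on the same address would fault.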

answered Sep 16 '22 by Flexo