 

Why does vectorizing the loop not improve performance?

I am investigating the effect of vectorization on program performance. In this regard, I have written the following code:

#include <stdio.h>
#include <sys/time.h>
#include <stdlib.h>

#define LEN 10000000

int main(){

    struct timeval stTime, endTime;

    double* a = (double*)malloc(LEN*sizeof(*a));
    double* b = (double*)malloc(LEN*sizeof(*b));
    double* c = (double*)malloc(LEN*sizeof(*c));

    int k;
    for(k = 0; k < LEN; k++){
        a[k] = rand();
        b[k] = rand();
    }

    gettimeofday(&stTime, NULL);

    for(k = 0; k < LEN; k++)
        c[k] = a[k] * b[k];

    gettimeofday(&endTime, NULL);

    FILE* fh = fopen("dump", "w");
    for(k = 0; k < LEN; k++)
        fprintf(fh, "c[%d] = %f\t", k, c[k]);
    fclose(fh);

    double timeE = (double)(endTime.tv_usec + endTime.tv_sec*1000000
                          - stTime.tv_usec  - stTime.tv_sec*1000000);

    printf("Time elapsed: %f\n", timeE);

    return 0;
}

In this code, I am simply initializing and multiplying two vectors. The results are saved in vector c. What I am mainly interested in is the effect of vectorizing the following loop:

for(k = 0; k < LEN; k++)
    c[k] = a[k] * b[k];

I compile the code using the following two commands:

1) icc -O2 TestSMID.c -o TestSMID -no-vec -no-simd
2) icc -O2 TestSMID.c -o TestSMID -vec-report2

I expect to see a performance improvement, since the second command successfully vectorizes the loop. However, my measurements show no performance improvement when the loop is vectorized.

I may have missed something here since I am not super familiar with the topic. So, please let me know if there is something wrong with my code.

Thanks in advance for your help.

PS: I am using Mac OS X, so there should be no need to align the data explicitly, since all allocated memory is 16-byte aligned.
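For reference, here is a minimal sketch of how 16-byte alignment could be requested explicitly if the allocator's guarantee were ever in doubt (posix_memalign is a standard POSIX call available on OS X; the helper name is made up and this is not required for the test above):

#include <stdlib.h>

/* Hypothetical helper: allocate n doubles with 16-byte alignment,
   as required by aligned SSE loads/stores. */
double *alloc_doubles_aligned(size_t n){
    void *p = NULL;
    if (posix_memalign(&p, 16, n * sizeof(double)) != 0)
        return NULL;            /* allocation failed */
    return (double*)p;
}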

Edit: I would like to first thank you all for your comments and answers. Having thought about the answer proposed by @Mysticial, there are some further points that should be mentioned here. First, as @Vinska mentioned, c[k] = a[k] * b[k] does not take only one cycle. In addition to incrementing the loop index and comparing k against LEN, there is more work to be done to perform the operation. Looking at the assembly code generated by the compiler, it can be seen that a simple multiplication needs much more than one cycle. The vectorized version looks like:

L_B1.9:                         # Preds L_B1.8
        movq      %r13, %rax                                    #25.5
        andq      $15, %rax                                     #25.5
        testl     %eax, %eax                                    #25.5
        je        L_B1.12       # Prob 50%                      #25.5
                                # LOE rbx r12 r13 r14 r15 eax
L_B1.10:                        # Preds L_B1.9
        testb     $7, %al                                       #25.5
        jne       L_B1.32       # Prob 10%                      #25.5
                                # LOE rbx r12 r13 r14 r15
L_B1.11:                        # Preds L_B1.10
        movsd     (%r14), %xmm0                                 #26.16
        movl      $1, %eax                                      #25.5
        mulsd     (%r15), %xmm0                                 #26.23
        movsd     %xmm0, (%r13)                                 #26.9
                                # LOE rbx r12 r13 r14 r15 eax
L_B1.12:                        # Preds L_B1.11 L_B1.9
        movl      %eax, %edx                                    #25.5
        movl      %eax, %eax                                    #26.23
        negl      %edx                                          #25.5
        andl      $1, %edx                                      #25.5
        negl      %edx                                          #25.5
        addl      $10000000, %edx                               #25.5
        lea       (%r15,%rax,8), %rcx                           #26.23
        testq     $15, %rcx                                     #25.5
        je        L_B1.16       # Prob 60%                      #25.5
                                # LOE rdx rbx r12 r13 r14 r15 eax
L_B1.13:                        # Preds L_B1.12
        movl      %eax, %eax                                    #25.5
                                # LOE rax rdx rbx r12 r13 r14 r15
L_B1.14:                        # Preds L_B1.14 L_B1.13
        movups    (%r15,%rax,8), %xmm0                          #26.23
        movsd     (%r14,%rax,8), %xmm1                          #26.16
        movhpd    8(%r14,%rax,8), %xmm1                         #26.16
        mulpd     %xmm0, %xmm1                                  #26.23
        movntpd   %xmm1, (%r13,%rax,8)                          #26.9
        addq      $2, %rax                                      #25.5
        cmpq      %rdx, %rax                                    #25.5
        jb        L_B1.14       # Prob 99%                      #25.5
        jmp       L_B1.20       # Prob 100%                     #25.5
                                # LOE rax rdx rbx r12 r13 r14 r15
L_B1.16:                        # Preds L_B1.12
        movl      %eax, %eax                                    #25.5
                                # LOE rax rdx rbx r12 r13 r14 r15
L_B1.17:                        # Preds L_B1.17 L_B1.16
        movsd     (%r14,%rax,8), %xmm0                          #26.16
        movhpd    8(%r14,%rax,8), %xmm0                         #26.16
        mulpd     (%r15,%rax,8), %xmm0                          #26.23
        movntpd   %xmm0, (%r13,%rax,8)                          #26.9
        addq      $2, %rax                                      #25.5
        cmpq      %rdx, %rax                                    #25.5
        jb        L_B1.17       # Prob 99%                      #25.5
                                # LOE rax rdx rbx r12 r13 r14 r15
L_B1.18:                        # Preds L_B1.17
        mfence                                                  #25.5
                                # LOE rdx rbx r12 r13 r14 r15
L_B1.19:                        # Preds L_B1.18
        mfence                                                  #25.5
                                # LOE rdx rbx r12 r13 r14 r15
L_B1.20:                        # Preds L_B1.14 L_B1.19 L_B1.32
        cmpq      $10000000, %rdx                               #25.5
        jae       L_B1.24       # Prob 0%                       #25.5
                                # LOE rdx rbx r12 r13 r14 r15
L_B1.22:                        # Preds L_B1.20 L_B1.22
        movsd     (%r14,%rdx,8), %xmm0                          #26.16
        mulsd     (%r15,%rdx,8), %xmm0                          #26.23
        movsd     %xmm0, (%r13,%rdx,8)                          #26.9
        incq      %rdx                                          #25.5
        cmpq      $10000000, %rdx                               #25.5
        jb        L_B1.22       # Prob 99%                      #25.5
                                # LOE rdx rbx r12 r13 r14 r15
L_B1.24:                        # Preds L_B1.22 L_B1.20

And the non-vectorized version is:

L_B1.9:                         # Preds L_B1.8
        xorl      %eax, %eax                                    #25.5
                                # LOE rbx r12 r13 r14 r15 eax
L_B1.10:                        # Preds L_B1.10 L_B1.9
        lea       (%rax,%rax), %edx                             #26.9
        incl      %eax                                          #25.5
        cmpl      $5000000, %eax                                #25.5
        movsd     (%r15,%rdx,8), %xmm0                          #26.16
        movsd     8(%r15,%rdx,8), %xmm1                         #26.16
        mulsd     (%r13,%rdx,8), %xmm0                          #26.23
        mulsd     8(%r13,%rdx,8), %xmm1                         #26.23
        movsd     %xmm0, (%rbx,%rdx,8)                          #26.9
        movsd     %xmm1, 8(%rbx,%rdx,8)                         #26.9
        jb        L_B1.10       # Prob 99%                      #25.5
                                # LOE rbx r12 r13 r14 r15 eax

Besides this, the processor does not load only 24 bytes: on each access to memory, a full cache line (64 bytes) is loaded. More importantly, since the memory required for a, b, and c is contiguous, the prefetcher would certainly help a lot by loading the next blocks in advance. Having said that, I think the memory bandwidth calculated by @Mysticial is too pessimistic.

Moreover, the Intel Vectorization Guide itself uses SIMD to improve the performance of a very simple addition loop. Therefore, it seems we should be able to gain some performance improvement for this very simple loop.
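To make the comparison concrete, here is a minimal hand-vectorized sketch of the same loop using SSE2 intrinsics (purely illustrative: the function name and the scalar tail are my own, and this is not the code icc actually generates, which is shown above):

#include <emmintrin.h>   /* SSE2 intrinsics */

/* Multiply two double arrays element-wise, two elements per instruction.
   Unaligned loads/stores are used so no particular alignment is assumed. */
void multiply_sse2(const double *a, const double *b, double *c, int len){
    int k;
    for (k = 0; k + 2 <= len; k += 2){
        __m128d va = _mm_loadu_pd(&a[k]);           /* a[k], a[k+1] */
        __m128d vb = _mm_loadu_pd(&b[k]);           /* b[k], b[k+1] */
        _mm_storeu_pd(&c[k], _mm_mul_pd(va, vb));   /* c[k..k+1] = a*b */
    }
    for (; k < len; k++)                            /* tail for odd len */
        c[k] = a[k] * b[k];
}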

Edit2: Thanks again for your comments. Also, thanks to @Mysticial's sample code, I finally saw the effect of SIMD on performance. The problem, as Mysticial mentioned, was memory bandwidth. By choosing small sizes for a, b, and c that fit into the L1 cache, it can be seen that SIMD helps to improve performance significantly. Here are the results I got:

icc -O2 -o TestSMIDNoVec -no-vec TestSMID2.c: 17.34 sec
icc -O2 -o TestSMIDVecNoUnroll -vec-report2 TestSMID2.c: 9.33 sec

And unrolling the loop improves the performance even further:

icc -O2 -o TestSMIDVecUnroll -vec-report2 TestSMID2.c -unroll=8: 8.6 sec

Also, I should mention that it takes only one cycle for my processor to complete an iteration when compiled with -O2.

PS: My computer is a MacBook Pro with a Core i5 @ 2.5 GHz (dual core).

Asked Aug 10 '13 by Pouya



1 Answer

This original answer was valid back in 2013. As of 2017, hardware has changed enough that both the question and the answer are somewhat out of date.

See the end of this answer for the 2017 update.


Original Answer (2013):

Because you're bottlenecked by memory bandwidth.

While vectorization and other micro-optimizations can improve the speed of computation, they can't increase the speed of your memory.

In your example:

for(k = 0; k < LEN; k++)
    c[k] = a[k] * b[k];

You are making a single pass over all the memory doing very little work. This is maxing out your memory bandwidth.

So regardless of how it's optimized (vectorized, unrolled, etc.), it isn't gonna get much faster.


A typical desktop machine of 2013 has on the order of 10 GB/s of memory bandwidth*.
Your loop touches 24 bytes/iteration.

Without vectorization, a modern x64 processor can probably do about 1 iteration a cycle*.

Suppose you're running at 4 GHz:

  • (4 * 10^9 iterations/s) * 24 bytes/iteration = 96 GB/s

That's almost 10x your memory bandwidth - without vectorization.
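Put the other way around (same assumed numbers, just inverted), memory can only feed the loop a small fraction of the iterations the core could retire:

  • 10 GB/s / 24 bytes/iteration ≈ 0.42 * 10^9 iterations/s
  • (4 * 10^9 cycles/s) / (0.42 * 10^9 iterations/s) ≈ 9.6 cycles/iteration

So even the scalar loop spends roughly 9 out of every 10 cycles waiting on memory, and speeding up the arithmetic cannot change that.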


*Not surprisingly, a few people doubted the numbers I gave above since I gave no citation. Well, those were off the top of my head, from experience. So here are some benchmarks to prove it.

The loop iteration can run as fast as 1 cycle/iteration:

We can get rid of the memory bottleneck if we reduce LEN so that it fits in cache.
(I tested this in C++ since it was easier. But it makes no difference.)

#include <iostream>
#include <cstdlib>     // malloc, rand
#include <time.h>
using std::cout;
using std::endl;

int main(){
    const int LEN = 256;

    double *a = (double*)malloc(LEN*sizeof(*a));
    double *b = (double*)malloc(LEN*sizeof(*b));
    double *c = (double*)malloc(LEN*sizeof(*c));

    int k;
    for(k = 0; k < LEN; k++){
        a[k] = rand();
        b[k] = rand();
    }

    clock_t time0 = clock();

    for (int i = 0; i < 100000000; i++){
        for(k = 0; k < LEN; k++)
            c[k] = a[k] * b[k];
    }

    clock_t time1 = clock();
    cout << (double)(time1 - time0) / CLOCKS_PER_SEC << endl;
}
  • Processor: Intel Core i7 2600K @ 4.2 GHz
  • Compiler: Visual Studio 2012
  • Time: 6.55 seconds

In this test, I ran 25,600,000,000 iterations in only 6.55 seconds.

  • 6.55 * 4.2 GHz = 27,510,000,000 cycles
  • 27,510,000,000 / 25,600,000,000 = 1.074 cycles/iteration

Now if you're wondering how it's possible to do:

  • 2 loads
  • 1 store
  • 1 multiply
  • increment counter
  • compare + branch

all in one cycle...

It's because modern processors and compilers are awesome.

While each of these operations has latency (especially the multiply), the processor is able to execute multiple iterations at the same time. My test machine is a Sandy Bridge processor, which is capable of sustaining 2x128b loads, 1x128b store, and 1x256b vector FP multiply every single cycle. And potentially another one or two vector or integer ops, if the loads are memory source operands for micro-fused uops. (2 loads + 1 store throughput only when using 256b AVX loads/stores; otherwise only two total memory ops per cycle, at most one of them a store.)

Looking at the assembly (which I'll omit for brevity), it seems that the compiler unrolled the loop, thereby reducing the looping overhead. But it didn't quite manage to vectorize it.
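One common reason an auto-vectorizer backs off from a loop like this is possible pointer aliasing. As a general-purpose sketch (not something I tested for this particular benchmark), restrict-qualified pointers in C promise the compiler that the arrays never overlap, which removes that obstacle:

/* restrict (C99) tells the compiler that a, b, and c never overlap,
   which often allows the auto-vectorizer to transform the loop. */
void multiply_noalias(const double * restrict a,
                      const double * restrict b,
                      double * restrict c, int len){
    int k;
    for (k = 0; k < len; k++)
        c[k] = a[k] * b[k];
}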


Memory bandwidth is on the order of 10 GB/s:

The easiest way to test this is via a memset():

#include <iostream>
#include <cstdlib>     // calloc
#include <cstring>     // memset
#include <time.h>
using std::cout;
using std::endl;

int main(){
    const int LEN = 1 << 30;    //  1 GB

    char *a = (char*)calloc(LEN, 1);

    clock_t time0 = clock();

    for (int i = 0; i < 100; i++){
        memset(a, 0xff, LEN);
    }

    clock_t time1 = clock();
    cout << (double)(time1 - time0) / CLOCKS_PER_SEC << endl;
}
  • Processor: Intel Core i7 2600K @ 4.2 GHz
  • Compiler: Visual Studio 2012
  • Time: 5.811 seconds

So it takes my machine 5.811 seconds to write to 100 GB of memory. That's about 17.2 GB/s.

And my processor is on the higher end. The Nehalem and Core 2 generation processors have less memory bandwidth.


Update March 2017:

As of 2017, things have gotten more complicated.

Thanks to DDR4 and quad-channel memory, it is no longer possible for a single thread to saturate memory bandwidth. But the problem of bandwidth doesn't necessarily go away. Even though bandwidth has gone up, processor cores have also improved - and there are more of them.

To put it mathematically:

  • Each core has a bandwidth limit X.
  • Main memory has a bandwidth limit of Y.
  • On older systems, X > Y.
  • On current high-end systems, X < Y. But X * (# of cores) > Y.

Back in 2013: Sandy Bridge @ 4 GHz + dual-channel DDR3 @ 1333 MHz

  • No vectorization (8-byte load/stores): X = 32 GB/s and Y = ~17 GB/s
  • Vectorized SSE* (16-byte load/stores): X = 64 GB/s and Y = ~17 GB/s

Now in 2017: Haswell-E @ 4 GHz + quad-channel DDR4 @ 2400 MHz

  • No vectorization (8-byte load/stores): X = 32 GB/s and Y = ~70 GB/s
  • Vectorized AVX* (32-byte load/stores): X = 64 GB/s and Y = ~70 GB/s

(For both Sandy Bridge and Haswell, architectural limits in the cache will limit bandwidth to about 16 bytes/cycle regardless of SIMD width.)
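Where do those per-core X figures come from? They are consistent with simply multiplying the quoted load/store width (treated as the sustainable bytes per cycle) by the 4 GHz clock:

  • No vectorization: 8 bytes/cycle * 4 * 10^9 cycles/s = 32 GB/s
  • Vectorized: 16 bytes/cycle * 4 * 10^9 cycles/s = 64 GB/s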

So nowadays, a single thread will not always be able to saturate memory bandwidth. And you will need to vectorize to achieve that limit of X. But you will still hit the main memory bandwidth limit of Y with 2 or more threads.

But one thing hasn't changed and probably won't change for a long time: You will not be able to run a bandwidth-hogging loop on all cores without saturating the total memory bandwidth.
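As a rough illustration (this helper is mine, not part of the benchmarks above), the same multiply can be spread across cores with OpenMP; each thread streams its own slice of the arrays, so the combined demand on main memory grows with the thread count until the shared limit Y is hit:

#include <omp.h>   /* compile with OpenMP enabled, e.g. icc -qopenmp or gcc -fopenmp */

/* Hypothetical helper: element-wise multiply split across all cores.
   The total memory traffic is unchanged, but it is now generated by
   several threads at once, so main-memory bandwidth becomes the limit. */
void multiply_all_cores(const double *a, const double *b,
                        double *c, long len){
    long k;
    #pragma omp parallel for
    for (k = 0; k < len; k++)
        c[k] = a[k] * b[k];
}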

Answered Oct 05 '22 by Mysticial