
Cost of OpenMPI in C++

I have the following C++ program, which uses no communication; identical work is done on all cores. I know this doesn't use parallel processing at all:

unsigned n = 130000000;
std::vector<double>vec1(n,1.0);
std::vector<double>vec2(n,1.0);
double t1, t2, dt;
t1 = MPI_Wtime();
for (unsigned i = 0; i < n; i++) 
{
  // Do something so it's not a trivial loop
  vec1[i] = vec2[i]+i;
}
t2 = MPI_Wtime();
dt = t2-t1;

I'm running this program on a single node with two Intel® Xeon® E5-2690 v3 processors, so I have 24 cores altogether. This is a dedicated node; no one else is using it. Since there is no communication, and each processor is doing the same amount of (identical) work, running it on multiple processors should take the same time. However, I get the following times in seconds (averaged over all cores):

1 core: 0.237

2 cores: 0.240

4 cores: 0.241

8 cores: 0.261

16 cores: 0.454

What could cause the increase in time, particularly for 16 cores? I have run Callgrind and I get roughly the same number of data/instruction misses on all cores (the percentage of misses is the same).

I have repeated the same test on a node with two Intel® Xeon® E5-2628L v2 processors (16 cores altogether) and I observe the same increase in execution times. Is this something to do with the MPI implementation?

asked Feb 13 '16 by user302157


3 Answers

Considering you are using ~2 GiB of memory per rank, your code is memory-bound. Apart from what the prefetchers can hide, you are not operating in the cache but in main memory. You simply saturate the memory bandwidth once a certain number of cores is active.
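
A rough back-of-the-envelope sketch supports this (my own assumptions, not part of the question: 8-byte doubles, a write-allocate cache giving ~24 bytes of DRAM traffic per iteration, and the reported 0.237 s single-core time). One rank alone already streams on the order of 13 GB/s, so 16 independent ranks would ask for roughly 210 GB/s, more than the aggregate DRAM bandwidth a two-socket Haswell node typically provides:

// Back-of-the-envelope bandwidth estimate (an assumption-laden sketch, not
// part of the original benchmark): ~24 bytes of DRAM traffic per iteration
// (read vec2[i], write vec1[i], plus the write-allocate read of vec1[i]).
#include <cstdio>

int main()
{
    const double n              = 130000000.0; // elements per vector
    const double bytes_per_iter = 24.0;        // assumed traffic per iteration
    const double t_single       = 0.237;       // measured single-core time (s)

    const double traffic_gb  = n * bytes_per_iter / 1e9; // ~3.1 GB per rank
    const double bw_per_rank = traffic_gb / t_single;    // ~13 GB/s per rank

    std::printf("traffic per rank: %.1f GB\n", traffic_gb);
    std::printf("bandwidth per rank: %.1f GB/s\n", bw_per_rank);
    std::printf("16 ranks would need roughly %.0f GB/s\n", 16.0 * bw_per_rank);
}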

Another aspect can be turbo mode, if enabled. Turbo mode can raise the core frequency when fewer cores are utilized. As long as the memory bandwidth is not saturated, the higher turbo frequency increases the bandwidth each core gets. This paper discusses the available aggregate memory bandwidth on Haswell processors depending on the number of active cores and frequency (Fig. 7/8).

Please note that this has nothing to do with MPI / OpenMPI. You might as well launch the same program X times by any other means.

answered by Zulan


I suspect that your processes compete for common resources, so as their number increases, each one has to wait for a resource to be freed before it can use it.

You see, you may have 24 cores, but that doesn't mean the whole system allows every core to do everything concurrently. As mentioned in the comments, memory access is one thing that may cause delays (due to traffic), and the same goes for the disk.

Also consider the interconnection network, which can likewise suffer under many simultaneous accesses. In short, these hardware delays can be enough to overwhelm the processing time.


General note: remember how the Efficiency of a program is defined:

E = S/p, where S is the speedup and p is the number of nodes/processes/threads.

Now take scalability into account. Most programs are only weakly scalable, i.e. you have to increase the problem size at the same rate as p to keep the Efficiency constant. A program that keeps its Efficiency constant while p grows and the problem size (n in your case) stays fixed is strongly scalable.
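
For illustration, a small sketch with made-up numbers (not the question's measurements) showing how speedup and efficiency are computed from wall-clock times:

// Hypothetical speedup/efficiency calculation (illustrative values only).
#include <cstdio>

int main()
{
    const double T1 = 10.0; // hypothetical wall-clock time with 1 process (s)
    const double Tp = 1.4;  // hypothetical wall-clock time with p processes (s)
    const int    p  = 8;

    const double S = T1 / Tp; // speedup
    const double E = S / p;   // efficiency, ideally close to 1

    std::printf("S = %.2f, E = %.2f\n", S, E); // prints S = 7.14, E = 0.89
}

Note that in the question's benchmark every rank repeats the full loop, so these formulas do not directly apply to the reported times.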

answered by gsamaras


Your program is not using parallel processing at all. Just because you have compiled it with OpenMP does not make it parallel.

To parallelize the for loop, for example, you need to use one of the #pragmas that OpenMP offers:

unsigned n = 130000000;
std::vector<double>vec1(n,1.0);
std::vector<double>vec2(n,1.0);
double t1, t2, dt;
t1 = MPI_Wtime();

#pragma omp parallel for
for (unsigned i = 0; i < n; i++) 
{
  // Do something so it's not a trivial loop
  vec1[i] = vec2[i]+i;
}
t2 = MPI_Wtime();
dt = t2-t1;

However, take into account that for large values of n, the impact of cache misses may hide the performance gained with multiple cores.

answered by mcleod_ideafix