
False sharing and pthreads

I was given the task of demonstrating false sharing, and I wrote a simple program:

#include <sys/times.h>
#include <time.h>
#include <stdio.h> 
#include <pthread.h> 

long long int tmsBegin1,tmsEnd1,tmsBegin2,tmsEnd2,tmsBegin3,tmsEnd3;

int array[100];

void *heavy_loop(void *param) {
  int index = *((int *)param);
  int i;
  for (i = 0; i < 100000000; i++)
    array[index] += 3;
  return NULL;
}

int main(int argc, char *argv[]) { 
  int       first_elem  = 0;
  int       bad_elem    = 1;
  int       good_elem   = 32;
  long long time1;
  long long time2;
  long long time3;
  pthread_t     thread_1;
  pthread_t     thread_2;

  tmsBegin3 = clock();
  heavy_loop((void*)&first_elem);
  heavy_loop((void*)&bad_elem);
  tmsEnd3 = clock();

  tmsBegin1 = clock();
  pthread_create(&thread_1, NULL, heavy_loop, (void*)&first_elem);
  pthread_create(&thread_2, NULL, heavy_loop, (void*)&bad_elem);
  pthread_join(thread_1, NULL);
  pthread_join(thread_2, NULL);
  tmsEnd1 = clock(); 

  tmsBegin2 = clock();
  pthread_create(&thread_1, NULL, heavy_loop, (void*)&first_elem);
  pthread_create(&thread_2, NULL, heavy_loop, (void*)&good_elem);
  pthread_join(thread_1, NULL);
  pthread_join(thread_2, NULL);
  tmsEnd2 = clock();

  printf("%d %d %d\n", array[first_elem],array[bad_elem],array[good_elem]);
  time1 = (tmsEnd1-tmsBegin1)*1000/CLOCKS_PER_SEC;
  time2 = (tmsEnd2-tmsBegin2)*1000/CLOCKS_PER_SEC;
  time3 = (tmsEnd3-tmsBegin3)*1000/CLOCKS_PER_SEC;
  printf("%lld ms\n", time1);
  printf("%lld ms\n", time2);
  printf("%lld ms\n", time3);

  return 0; 
} 

I was very surprised when I saw the results (I ran it on my i5-430M processor).

  • With false sharing, it was 1020 ms.
  • Without false sharing, it was 710 ms, only about 30% faster rather than the 300-400% speedup claimed on some sites.
  • Without using pthreads, it was 580 ms.

Please show me my mistake or explain why it happens.
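
For reference, clock() measures processor time used by the whole process (it keeps accumulating in both threads at once), not elapsed wall-clock time. A minimal wall-clock timing sketch using the POSIX clock_gettime(CLOCK_MONOTONIC) interface, assuming elapsed time is what you want to compare, would be:

#include <stdio.h>
#include <time.h>

/* Sketch: return the current monotonic (wall-clock) time in milliseconds. */
static long long now_ms(void) {
  struct timespec ts;
  clock_gettime(CLOCK_MONOTONIC, &ts);
  return (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

/* Usage around a measured region:
 *   long long begin = now_ms();
 *   ... create and join the two threads ...
 *   long long end = now_ms();
 *   printf("%lld ms\n", end - begin);
 */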

asked Nov 30 '11 by Alexey Matveev

People also ask

What is true sharing and false sharing?

As I understand it, true sharing refers to the problem of multiple cores frequently writing to the same shared variable, while false sharing refers to multiple cores writing to different variables that are on the same cache line.

What is false sharing and how can we avoid it?

False sharing occurs when two threads repeatedly update their respective data in such a way that the cache line migrates back and forth between the two threads' caches. Often this can be avoided by giving explicit alignment pragmas to the compiler.

What is false sharing in distributed system?

False sharing occurs when processors in a shared-memory parallel system make references to different data objects within the same coherence block (cache line or page), thereby inducing unnecessary coherence operations.

How do you deal with false sharing?

Mitigation. There are ways of mitigating the effects of false sharing. For instance, false sharing in CPU caches can be prevented by reordering variables or adding padding (unused bytes) between variables. However, some of these program changes may increase the size of the objects, leading to higher memory use.
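
As a hedged sketch of the padding/alignment idea (the struct and names here are illustrative, and the 64-byte line size is an assumption, not taken from the question): give each thread its own counter and pad it out to a full cache line, so two counters can never occupy the same line.

#include <stdalign.h>

#define CACHE_LINE 64  /* assumed line size; check your processor */

/* Each counter is aligned to a line boundary and padded to fill the line,
 * so counters[0] and counters[1] cannot share a cache line. */
struct padded_counter {
  alignas(CACHE_LINE) int value;
  char pad[CACHE_LINE - sizeof(int)];
};

static struct padded_counter counters[2];

In the question's program the same effect comes from using array[0] and array[32]: with 4-byte ints those elements are 128 bytes apart, so they cannot land on the same 64-byte line.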


1 Answer

False sharing is a result of multiple cores with separate caches accessing the same region of physical memory (though not the same address -- that would be true sharing).

To understand false sharing, you need to understand caches. In most processors, each core will have its own L1 cache, which holds recently accessed data. Caches are organized in "lines", which are aligned chunks of data, usually 32 or 64 bytes in length (depending on your processor). When you read from an address that's not in the cache, the whole line is read from main memory (or an L2 cache) into L1. When you write to an address in the cache, the line containing that address is marked "dirty".
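
If you want to check the line size on your own machine, glibc on Linux exposes it through sysconf (this constant is a glibc extension, so treat it as an assumption on other systems):

#include <stdio.h>
#include <unistd.h>

/* Sketch: query the L1 data cache line size (glibc/Linux). */
int main(void) {
  long line = sysconf(_SC_LEVEL1_DCACHE_LINESIZE);
  printf("L1 data cache line size: %ld bytes\n", line);
  return 0;
}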

Here's where the sharing aspect comes in. If multiple cores are reading from the same line, they can each have a copy of the line in L1. However, if one copy is marked dirty, it invalidates the line in the other caches. If this didn't happen, writes made on one core might not be visible to other cores until much later. So the next time another core goes to read from that line, the cache misses, and it has to fetch the line again.

False sharing occurs when the cores are reading and writing to different addresses on the same line. Even though the threads are not sharing any data, the caches behave as if they were, because the addresses are close enough to fall on a single line.
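
Applied to the question's program: with 4-byte ints and an assumed 64-byte line, array[0] and array[1] sit on the same line, while array[32] is 128 bytes away and so ends up on a different line. A small sketch to see which line each element falls on:

#include <stdio.h>
#include <stdint.h>

int array[100];

/* Sketch: print a 64-byte line index for a few elements; elements that
 * print the same index share a cache line. */
int main(void) {
  int idx[] = { 0, 1, 32 };
  for (int i = 0; i < 3; i++)
    printf("array[%d] -> line %llu\n", idx[i],
           (unsigned long long)((uintptr_t)&array[idx[i]] / 64));
  return 0;
}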

This effect is highly dependent on the architecture of your processor. If you had a single-core processor, you would not see the effect at all, since there would be no sharing. If your cache lines were longer, you would see the effect in both the "bad" and "good" cases, since the elements would still be close enough to share a line. If your cores did not share an L2 cache (which I'm guessing they do), you might see the 300-400% difference you mentioned, since a miss would have to go all the way to main memory.

You might also like to know that it's important that each thread is both reading and writing (+= instead of =). Some processors have write-through caches which means if a core writes to an address not in the cache, it doesn't miss and fetch the line from memory. Contrast this with write-back caches, which do miss on writes.

answered Sep 20 '22 by Jay Conrod