Explanation for why allocating a second time changes performance

I was testing some micro-benchmarks of dense matrix multiplication (as a curiosity), and I noticed some very strange performance results.

Here is a minimal working example:

#include <benchmark/benchmark.h>

#include <random>

constexpr long long n = 128;

struct mat_bench_fixture : public benchmark::Fixture
{
  double *matA, *matB, *matC;

  mat_bench_fixture()
  {
    matA = new double[n * n];
    matB = new double[n * n];
    matC = new double[n * n];
    benchmark::DoNotOptimize(matA);
    benchmark::DoNotOptimize(matB);
    benchmark::DoNotOptimize(matC);
#if 0
    delete[] matA;
    delete[] matB;
    delete[] matC;
    benchmark::DoNotOptimize(matA);
    benchmark::DoNotOptimize(matB);
    benchmark::DoNotOptimize(matC);
    matA = new double[n * n];
    matB = new double[n * n];
    matC = new double[n * n];
    benchmark::DoNotOptimize(matA);
    benchmark::DoNotOptimize(matB);
    benchmark::DoNotOptimize(matC);
#endif
  }

  ~mat_bench_fixture()
  {
    delete[] matA;
    delete[] matB;
    delete[] matC;
  }

  void SetUp(const benchmark::State& s) override
  {
    // generate random data
    std::mt19937 gen;
    std::uniform_real_distribution<double> dis(0, 1);
    for (double* i = matA; i != matA + n * n; ++i)
    {
      *i = dis(gen);
    }
    for (double* i = matB; i != matB + n * n; ++i)
    {
      *i = dis(gen);
    }
  }
};

BENCHMARK_DEFINE_F(mat_bench_fixture, impl1)(benchmark::State& st)
{
  for (auto _ : st)
  {
    for (long long row = 0; row < n; ++row)
    {
      for (long long col = 0; col < n; ++col)
      {
        matC[row * n + col] = 0;
        for (long long k = 0; k < n; ++k)
        {
          matC[row * n + col] += matA[row * n + k] * matB[k * n + col];
        }
      }
    }
    benchmark::DoNotOptimize(matA);
    benchmark::DoNotOptimize(matB);
    benchmark::DoNotOptimize(matC);
    benchmark::ClobberMemory();
  }
}

BENCHMARK_REGISTER_F(mat_bench_fixture, impl1);

BENCHMARK_MAIN();

There is an #if 0 block in the fixture's constructor which can be toggled to #if 1 to switch between the two scenarios I'm testing. What I've noticed is that when I force a re-allocation of all the buffers, the time it takes for the benchmark to run on my system improves by about 15%, and I have no explanation for why this happens. I was hoping someone could enlighten me on this. I would also like to know if there are any additional micro-benchmarking "best practices" for avoiding weird performance anomalies like this in the future.

How I'm compiling (assuming Google Benchmark has been installed somewhere it can be found):

$CC -o mult_test mult_test.cpp -std=c++14 -pthread -O3 -fno-omit-frame-pointer -lbenchmark

I've been running this with:

./mult_test --benchmark_repetitions=5

I'm doing all my testing on Ubuntu 18.04 x64 (kernel 4.15.0-30-generic).
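
The perf stat numbers further down come from running the same binary under perf with the extended detail counters enabled; a command along these lines reproduces the counter groups shown:

perf stat -d -d -d ./mult_test --benchmark_repetitions=5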

I've tried several different variations of this code, and they all give the same basic result over multiple runs (it's surprising how consistent the results are for me):

  1. Moved allocation/initialization inside the benchmark "SetUp" phase (the non-timed part), so that allocation/deallocation happens for every new sample
  2. Switched compilers between GCC 7.3.0 and Clang 6.0.0
  3. Tried different computers with different CPUs (an Intel i5-6600K, and a dual-socket Xeon E5-2630 v2)
  4. Tried different ways of implementing the benchmark framework (i.e. not using Google Benchmark at all and timing manually via std::chrono)
  5. Forced all buffers to be aligned to several different boundaries (64 bytes, 128 bytes, 256 bytes)
  6. Forced a fixed number of iterations in each sample timing period
  7. Tried running with a much higher number of repetitions (20+)
  8. Forced a constant CPU clock frequency using the performance governor (example commands just after this list)
  9. Tried different compiler optimization flags (removed -fno-omit-frame-pointer, tried -march=native)
  10. Tried std::vector, new[]/delete[] pairs, and malloc/free for managing the storage; they all give similar results
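
For item 8, pinning the clock governor can be done along these lines (both variants are common on Ubuntu; cpupower comes from the linux-tools packages):

sudo cpupower frequency-set --governor performance

or, directly through sysfs:

echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor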

I've compared the assembly of the hot portion of the code, and it's identical between the two test cases (perf annotation for one of the cases):

40:
  mov 0xc0(%r15),%rcx
  mov 0xd0(%r15),%rdx
  add $0x8,%rcx
  mov 0xc8(%r15),%r9
  add %r8,%r9
  xor %r10d,%r10d
  nop
60: 
  mov %r10,%r11
  shl $0x7,%r11
  mov %r9,%r13
  xor %esi,%esi
  nop
70:
  lea (%rsi,%r11,1),%rax
  movq $0x0,(%rdx,%rax,8)
  xorpd %xmm0,%xmm0
  mov $0xffffffffffffff80,%rdi
  mov %r13,%rbx
  nop
90:
  movsd 0x3f8(%rcx,%rdi,8),%xmm1
  mulsd -0x400(%rbx),%xmm1
  addsd %xmm0,%xmm1
  movsd %xmm1,(%rdx,%rax,8)
  movsd 0x400(%rcx,%rdi,8),%xmm0
  mulsd (%rbx),%xmm0
  addsd %xmm1,%xmm0
  movsd %xmm0,(%rdx,%rax,8)
  add $0x800,%rbx
  add $0x2,%rdi
  jne 90
  add $0x1,%rsi
  add $0x8,%r13
  cmp $0x80,%rsi
  jne 70
  add $0x1,%r10
  add $0x400,%rcx
  cmp $0x80,%r10
  jne 60
  add $0xffffffffffffffff,%r12
  jne 40

Here is a representative run (benchmark output plus perf stat) without the re-allocation:

Running ./mult_test
Run on (4 X 4200 MHz CPU s)
CPU Caches:
  L1 Data 32K (x4)
  L1 Instruction 32K (x4)
  L2 Unified 256K (x4)
  L3 Unified 6144K (x1)
----------------------------------------------------------------------
Benchmark                               Time           CPU Iterations
----------------------------------------------------------------------
mat_bench_fixture/impl1           2181531 ns    2180896 ns        322
mat_bench_fixture/impl1           2188280 ns    2186860 ns        322
mat_bench_fixture/impl1           2182988 ns    2182150 ns        322
mat_bench_fixture/impl1           2182715 ns    2182025 ns        322
mat_bench_fixture/impl1           2175719 ns    2175653 ns        322
mat_bench_fixture/impl1_mean      2182246 ns    2181517 ns        322
mat_bench_fixture/impl1_median    2182715 ns    2182025 ns        322
mat_bench_fixture/impl1_stddev       4480 ns       4000 ns        322

 Performance counter stats for './mult_test --benchmark_repetitions=5':

       3771.370173      task-clock (msec)         #    0.994 CPUs utilized          
               223      context-switches          #    0.059 K/sec                  
                 0      cpu-migrations            #    0.000 K/sec                  
               242      page-faults               #    0.064 K/sec                  
    15,808,590,474      cycles                    #    4.192 GHz                      (61.31%)
    20,201,201,797      instructions              #    1.28  insn per cycle           (69.04%)
     1,844,097,332      branches                  #  488.973 M/sec                    (69.04%)
           358,319      branch-misses             #    0.02% of all branches          (69.14%)
     7,232,957,363      L1-dcache-loads           # 1917.859 M/sec                    (69.24%)
     3,774,591,187      L1-dcache-load-misses     #   52.19% of all L1-dcache hits    (69.35%)
       558,507,528      LLC-loads                 #  148.091 M/sec                    (69.46%)
            93,136      LLC-load-misses           #    0.02% of all LL-cache hits     (69.47%)
   <not supported>      L1-icache-loads                                             
           736,008      L1-icache-load-misses                                         (69.47%)
     7,242,324,412      dTLB-loads                # 1920.343 M/sec                    (69.34%)
               581      dTLB-load-misses          #    0.00% of all dTLB cache hits   (61.50%)
             1,582      iTLB-loads                #    0.419 K/sec                    (61.39%)
               307      iTLB-load-misses          #   19.41% of all iTLB cache hits   (61.29%)
   <not supported>      L1-dcache-prefetches                                        
   <not supported>      L1-dcache-prefetch-misses                                   

       3.795924436 seconds time elapsed

Here is a representative run (benchmark output plus perf stat) with the re-allocation forced:

Running ./mult_test
Run on (4 X 4200 MHz CPU s)
CPU Caches:
  L1 Data 32K (x4)
  L1 Instruction 32K (x4)
  L2 Unified 256K (x4)
  L3 Unified 6144K (x1)
----------------------------------------------------------------------
Benchmark                               Time           CPU Iterations
----------------------------------------------------------------------
mat_bench_fixture/impl1           1862961 ns    1862919 ns        376
mat_bench_fixture/impl1           1861986 ns    1861947 ns        376
mat_bench_fixture/impl1           1860330 ns    1860305 ns        376
mat_bench_fixture/impl1           1859711 ns    1859652 ns        376
mat_bench_fixture/impl1           1863299 ns    1863273 ns        376
mat_bench_fixture/impl1_mean      1861658 ns    1861619 ns        376
mat_bench_fixture/impl1_median    1861986 ns    1861947 ns        376
mat_bench_fixture/impl1_stddev       1585 ns       1591 ns        376

 Performance counter stats for './mult_test --benchmark_repetitions=5':

       3724.287293      task-clock (msec)         #    0.995 CPUs utilized          
                11      context-switches          #    0.003 K/sec                  
                 0      cpu-migrations            #    0.000 K/sec                  
               246      page-faults               #    0.066 K/sec                  
    15,612,924,579      cycles                    #    4.192 GHz                      (61.34%)
    23,344,859,019      instructions              #    1.50  insn per cycle           (69.07%)
     2,130,528,330      branches                  #  572.063 M/sec                    (69.07%)
           331,651      branch-misses             #    0.02% of all branches          (69.08%)
     8,369,233,786      L1-dcache-loads           # 2247.204 M/sec                    (69.18%)
     4,206,241,296      L1-dcache-load-misses     #   50.26% of all L1-dcache hits    (69.29%)
       308,687,646      LLC-loads                 #   82.885 M/sec                    (69.40%)
            94,288      LLC-load-misses           #    0.03% of all LL-cache hits     (69.50%)
   <not supported>      L1-icache-loads                                             
           475,066      L1-icache-load-misses                                         (69.50%)
     8,360,570,315      dTLB-loads                # 2244.878 M/sec                    (69.37%)
               364      dTLB-load-misses          #    0.00% of all dTLB cache hits   (61.53%)
               213      iTLB-loads                #    0.057 K/sec                    (61.42%)
               144      iTLB-load-misses          #   67.61% of all iTLB cache hits   (61.32%)
   <not supported>      L1-dcache-prefetches                                        
   <not supported>      L1-dcache-prefetch-misses                                   

       3.743017809 seconds time elapsed

Here is a minimal working example which doesn't have any external dependencies, and allows for testing memory alignment issues:

#include <random>
#include <chrono>
#include <iostream>
#include <cstdlib>

constexpr long long n = 128;
constexpr size_t alignment = 64;

inline void escape(void* p)
{
  asm volatile("" : : "g"(p) : "memory");
}
inline void clobber()
{
  asm volatile("" : : : "memory");
}

struct mat_bench_fixture
{
  double *matA, *matB, *matC;

  mat_bench_fixture()
  {
    matA = (double*) aligned_alloc(alignment, sizeof(double) * n * n);
    matB = (double*) aligned_alloc(alignment, sizeof(double) * n * n);
    matC = (double*) aligned_alloc(alignment, sizeof(double) * n * n);
    escape(matA);
    escape(matB);
    escape(matC);
#if 0
    free(matA);
    free(matB);
    free(matC);
    escape(matA);
    escape(matB);
    escape(matC);
    matA = (double*) aligned_alloc(alignment, sizeof(double) * n * n);
    matB = (double*) aligned_alloc(alignment, sizeof(double) * n * n);
    matC = (double*) aligned_alloc(alignment, sizeof(double) * n * n);
    escape(matA);
    escape(matB);
    escape(matC);
#endif
  }

  ~mat_bench_fixture()
  {
    free(matA);
    free(matB);
    free(matC);
  }

  void SetUp()
  {
    // generate random data
    std::mt19937 gen;
    std::uniform_real_distribution<double> dis(0, 1);
    for (double* i = matA; i != matA + n * n; ++i)
    {
      *i = dis(gen);
    }
    for (double* i = matB; i != matB + n * n; ++i)
    {
      *i = dis(gen);
    }
  }
  void run()
  {
    constexpr int iters = 400;
    std::chrono::high_resolution_clock timer;
    auto start = timer.now();
    for (int i = 0; i < iters; ++i)
    {
      for (long long row = 0; row < n; ++row)
      {
        for (long long col = 0; col < n; ++col)
        {
          matC[row * n + col] = 0;
          for (long long k = 0; k < n; ++k)
          {
            matC[row * n + col] += matA[row * n + k] * matB[k * n + col];
          }
        }
      }
      escape(matA);
      escape(matB);
      escape(matC);
      clobber();
    }
    auto stop = timer.now();
    std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(
                   stop - start)
                     .count() /
                   iters
              << std::endl;
  }
};

int main()
{
  mat_bench_fixture bench;
  for (int i = 0; i < 5; ++i)
  {
    bench.SetUp();
    bench.run();
  }
}

To compile:

g++ -o mult_test mult_test.cpp -std=c++14 -O3
Asked by helloworld922

1 Answer

On my machine, I can reproduce your case by using different alignments for the pointers. Try this code:

mat_bench_fixture() {
    matA = new double[n * n + 256];
    matB = new double[n * n + 256];
    matC = new double[n * n + 256];

    // align pointers to 1024
    matA = reinterpret_cast<double*>((reinterpret_cast<unsigned long long>(matA) + 1023)&~1023);
    matB = reinterpret_cast<double*>((reinterpret_cast<unsigned long long>(matB) + 1023)&~1023);
    matC = reinterpret_cast<double*>((reinterpret_cast<unsigned long long>(matC) + 1023)&~1023);

    // toggle this to toggle alignment offset of matB
    // matB += 2;
}
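
One caveat if you drop this into the fixture from the question: after the adjustment, matA/matB/matC no longer point at what new[] returned, so the destructor's delete[] calls become undefined behaviour. A sketch of one way to keep that valid (the raw* pointers are my addition, not something needed for the timing effect itself):

double *rawA, *rawB, *rawC; // what new[] actually returned

mat_bench_fixture() {
    rawA = new double[n * n + 256];
    rawB = new double[n * n + 256];
    rawC = new double[n * n + 256];

    // work through aligned copies, but keep the raw pointers for delete[]
    matA = reinterpret_cast<double*>((reinterpret_cast<unsigned long long>(rawA) + 1023) & ~1023ull);
    matB = reinterpret_cast<double*>((reinterpret_cast<unsigned long long>(rawB) + 1023) & ~1023ull);
    matC = reinterpret_cast<double*>((reinterpret_cast<unsigned long long>(rawC) + 1023) & ~1023ull);
}

~mat_bench_fixture() {
    delete[] rawA;
    delete[] rawB;
    delete[] rawC;
}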

If I toggle the commented line in this code, I get a 34% difference on my machine.

Different alignment offsets cause different timings. You can play with offsetting the other two pointers too; sometimes the difference is smaller, sometimes bigger, and sometimes there's no change at all.

This must be a cache effect: when the low bits of the pointers differ, different collision patterns occur in the cache. And since your routine is memory-intensive (not all of the data fits in L1), cache performance matters a lot.
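
To make "different collision patterns" a bit more concrete, here is a small illustrative sketch (the 32 KiB, 8-way, 64-byte-line L1D geometry is an assumption of a typical desktop part, giving 64 sets) that computes which L1 set an address falls into. With n = 128, a column walk through matB advances by 1024 bytes per element, so it only ever touches a few sets, and how those sets line up with matA and matC depends on the buffers' low address bits:

#include <cstddef>
#include <cstdint>
#include <cstdio>

// Assumed L1D geometry: 64-byte lines, 64 sets (32 KiB / 8 ways / 64 B).
constexpr std::size_t line_size = 64;
constexpr std::size_t num_sets  = 64;

std::size_t l1_set(const void* p)
{
  // Bits 6..11 of the address select the set.
  return (reinterpret_cast<std::uintptr_t>(p) / line_size) % num_sets;
}

int main()
{
  constexpr long long n = 128;
  alignas(4096) static double A[n * n];
  alignas(4096) static double B[n * n];

  // A column walk through B (stride of n doubles = 1024 bytes = 16 lines)
  // cycles through only 4 of the 64 sets for this alignment: 0, 16, 32, 48.
  for (int k = 0; k < 4; ++k)
    std::printf("B column step %d -> set %zu\n", k, l1_set(&B[k * n]));

  // With identical low address bits, element i of A lands in the same set as
  // element i of B; shifting one buffer relative to the other changes which
  // lines end up competing for the same sets.
  std::printf("A[0] -> set %zu, B[0] -> set %zu\n", l1_set(&A[0]), l1_set(&B[0]));
}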

Answered by geza