
Efficient memory bandwidth use for streaming

I have an application that streams through 250 MB of data, applying a simple, fast neural-net threshold function to each data chunk (just two 32-bit words). Based on the result of that (very simple) computation, the chunk is unpredictably pushed into one of 64 bins. So it's one big stream in and 64 shorter (variable-length) streams out.

This is repeated many times with different detection functions.

The compute is memory-bandwidth limited. I can tell because the speed doesn't change even when I use a discriminant function that's much more computationally intensive.

What is the best way to structure the writes of the new streams to optimize my memory bandwidth? I suspect that cache use and cache line size play a big role here. Imagine the worst case: I have my 64 output streams and, by bad luck, many map to the same cache line. Then when I write the next 64 bits of data to a stream, the CPU has to flush a stale cache line out to main memory and load in the proper cache line. Each of those moves 64 BYTES, so a single conflicted 8-byte write can cost 128 bytes of traffic; 8/128 is about 6% useful, meaning my bandwidth-limited application may be wasting roughly 95% of its memory bandwidth (in this hypothetical worst case).

It's hard even to measure this effect, so designing around it is vaguer still. Or am I chasing a ghost bottleneck that the hardware somehow optimizes better than I could?

I'm using Core 2 x86 processors, if that makes any difference.

Edit: Here's some example code. It streams through an array and copies its elements to various output arrays chosen pseudo-randomly. Running the same program with different numbers of destination bins gives different runtimes, even though the same amount of computation and the same number of memory reads and writes are done:

2 output streams: 13 seconds
8 output streams: 13 seconds
32 output streams: 19 seconds
128 output streams: 29 seconds
512 output streams: 47 seconds

Going from 2 to 512 output streams costs almost 4X (47 s vs 13 s), probably due to cache-line eviction overhead.

#include <stdio.h>
#include <stdlib.h>
#include <ctime>

int main()
{
  const int size=1<<19;
  int streambits=3;
  int streamcount=1<<streambits; // # of output bins (8 here; raise streambits for more)
  int *instore=(int *)malloc(size*sizeof(int));
  int **outstore=(int **)malloc(streamcount*sizeof(int *)); // base of each bin
  int **out=(int **)malloc(streamcount*sizeof(int *));      // current write cursor per bin
  unsigned int seed=0;

  for (int j=0; j<size; j++) instore[j]=j;

  for (int i=0; i<streamcount; ++i)
    outstore[i]=(int *)malloc(size*sizeof(int)); // worst case: one bin takes everything

  time_t startTime=time(NULL);
  for (int k=0; k<10000; k++) {
    for (int i=0; i<streamcount; i++) out[i]=outstore[i]; // rewind all bins
    int *in=instore;

    for (int j=0; j<size/2; j++) {
      seed=seed*0x1234567+0x7162521; // cheap LCG
      int bin=seed>>(32-streambits); // pseudorandom destination bin (top bits)
      *(out[bin]++)=*(in++);         // copy one 2-word chunk
      *(out[bin]++)=*(in++);
    }
  }
  time_t endTime=time(NULL);
  printf("Eval time=%ld\n", (long)(endTime-startTime));

  for (int i=0; i<streamcount; i++) free(outstore[i]);
  free(outstore); free(out); free(instore);
  return 0;
}
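(To reproduce the table above, vary streambits - 1 gives 2 bins, 9 gives 512 - and rebuild; the timings presumably assume an optimized build, e.g. g++ -O2.)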
asked Apr 02 '09 by SPWorley


1 Answer

As you write to the 64 output bins, you'll be using many different memory locations. If the bins are filled essentially at random, some bins will sometimes map to the same cache set. That's not a big problem: the Core 2 L1 cache is 8-way associative, so a set only becomes a problem when a 9th hot cache line maps to it. With just 65 live memory streams at any time (1 read / 64 write), 8-way associativity is OK.
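For concreteness, assuming the common Core 2 L1d geometry (32 KB, 8-way, 64-byte lines - an assumption about the exact model):

    sets = 32768 bytes / (64 bytes/line × 8 ways) = 64 sets

So the ~65 hot lines are spread across 64 sets, and you only lose when nine or more of them happen to land in the same set.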

The L2 cache is apparently 12-way associative (3 MB/6 MB total, so 12 isn't that strange a number). So even if you have collisions in L1, chances are pretty good you're still not hitting main memory.

However, if you don't like this, re-arrange the bins in memory. Instead of storing each bin sequentially, interleave them. For bin 0, store chunks 0-15 at offsets 0-63, but store chunks 16-31 at offsets 8192-8255. For bin 1, store chunks 0-15 at offsets 64-127, etcetera. This takes just a few bit shifts and masks, but the result is that a pair of bins share 8 cache lines.
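A minimal sketch of that kind of interleaved layout (a hypothetical helper, not the answer's exact offsets: it uses 64-byte slots with a period of streamcount slots):

// Hypothetical helper: element n of a bin lives in the bin's own
// 64-byte slot, and the slots of all bins are interleaved round-robin
// in one big buffer (bin 0 owns slots 0, streamcount, 2*streamcount, ...).
// Each bin thus fills a whole cache line (16 ints) before moving on.
static inline int *bin_addr(int *base, int bin, int n, int streambits)
{
    const int intsPerLine = 64 / sizeof(int); // 16 ints per 64-byte line
    int slot   = n >> 4;                      // n / intsPerLine (4 = log2(16))
    int within = n & (intsPerLine - 1);       // n % intsPerLine
    return base + (((slot << streambits) + bin) << 4) + within;
}

The inner loop would then do *bin_addr(base, bin, count[bin]++, streambits) = *(in++); with a per-bin element counter instead of a raw write pointer, and base sized to hold streamcount times the per-bin capacity.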

Another possible way to speed up your code is SSE4, especially in x64 mode. You'd get 16 registers of 128 bits, and you can optimize the read (MOVNTDQA) to limit cache pollution. I'm not sure whether that will help much with read speed, though - I'd expect the Core 2 prefetcher to catch this already. Reading sequential integers is the simplest access pattern possible; any prefetcher should optimize it.
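A minimal sketch of that read path, assuming SSE4.1 (<smmintrin.h>) and a 16-byte-aligned source; copy_chunk is a made-up name, and the plain store on the output side is my own choice, not from the answer:

#include <smmintrin.h> // SSE4.1 intrinsics

// Copy 16 bytes (two of the question's 8-byte chunks) using the
// MOVNTDQA streaming-load hint. On ordinary write-back memory most
// CPUs treat MOVNTDQA like a normal load, so this is a hint, not a
// guaranteed win. 'src' must be 16-byte aligned.
static inline void copy_chunk(const int *src, int *dst)
{
    __m128i v = _mm_stream_load_si128((__m128i *)src); // non-temporal load hint
    _mm_storeu_si128((__m128i *)dst, v);               // regular unaligned store
}

Compile with -msse4.1 (gcc/clang).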

answered Dec 06 '22 by MSalters