
Is prefetching triggered by the stream of exact addresses or by the stream of cache lines?

On modern x86 CPUs, hardware prefetching is an important technique to bring cache lines into various levels of the cache hierarchy before they are explicitly requested by the user code.

The basic idea is that when the processor detects a series of accesses to sequential or strided-sequential1 locations, it will go ahead and fetch further memory locations in the sequence, even before executing the instructions that (may) actually access those locations.

My question is whether the detection of a prefetch sequence is based on the full addresses (the actual addresses requested by user code) or on the cache line addresses, which are essentially the full addresses with the bottom 6 bits2 stripped off.

For example, on a system with a 64-byte cache line, accesses to full addresses 1, 2, 3, 65, 150 would access cache lines 0, 0, 0, 1, 2.
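To make the mapping concrete, here is a short illustrative sketch (Python, assuming the 64-byte line size used throughout) of how a full address maps to a cache line index:

```python
LINE_BITS = 6  # 64-byte lines -> the low 6 bits are the intra-line offset

def cache_line(addr: int) -> int:
    # Strip the bottom 6 bits: equivalent to addr // 64.
    return addr >> LINE_BITS

print([cache_line(a) for a in [1, 2, 3, 65, 150]])  # → [0, 0, 0, 1, 2]
```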

The difference could be relevant when a series of accesses is more regular in the cache line addressing than the full addressing. For example, a series of full addresses like:

32, 24, 8, 0, 64 + 32, 64 + 24, 64 + 8, 64 + 0, ..., N*64 + 32, N*64 + 24, N*64 + 8, N*64 + 0

might not look like a strided sequence at the full-address level (indeed, it might incorrectly trigger the backwards prefetcher, since each subsequence of 4 accesses looks like a descending, reverse-strided sequence), but at the cache line level it looks like it's moving forward one cache line at a time (just like the simple sequence 0, 8, 16, 24, ...).
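The contrast is easy to see numerically. This illustrative sketch generates the sequence above and compares the byte-level deltas with the line-level view:

```python
def cache_line(addr):
    return addr >> 6  # 64-byte lines

# The sequence from the question: offsets 32, 24, 8, 0 within each line.
seq = [n * 64 + off for n in range(4) for off in (32, 24, 8, 0)]

byte_deltas = [b - a for a, b in zip(seq, seq[1:])]
lines = [cache_line(a) for a in seq]
print(byte_deltas)  # irregular: -8, -16, -8, 96, -8, -16, -8, 96, ...
print(lines)        # non-decreasing: advances one line every 4 accesses
```

At the byte level the deltas alternate between small negative steps and a large positive jump, while the line indices simply count 0, 0, 0, 0, 1, 1, 1, 1, 2, ...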

Which system, if either, is in place on modern hardware?


Note: One could also imagine that the answer wouldn't be based on every access, but only on accesses which miss in some level of the cache that the prefetcher is observing, but then the same question still applies to the filtered stream of "miss accesses".


1Strided-sequential just means a series of accesses with the same stride (delta) between them, even if that delta isn't 1. For example, a series of accesses to locations 100, 200, 300, ... could be detected as strided access with a stride of 100, and in principle the CPU will fetch based on this pattern (which would mean that some cache lines might be "skipped" in the prefetch pattern).
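As an illustration only (real prefetchers are considerably more elaborate, with per-IP tracking and confidence counters), a minimal stride detector in this spirit might look like:

```python
def detect_stride(addrs, confirmations=2):
    """Report a stride once the same delta repeats `confirmations`
    times in a row; return None if no stable stride emerges."""
    stride, seen = None, 0
    for prev, cur in zip(addrs, addrs[1:]):
        delta = cur - prev
        if delta == stride:
            seen += 1
            if seen >= confirmations:
                return stride
        else:
            stride, seen = delta, 1
    return None

print(detect_stride([100, 200, 300, 400]))  # → 100
```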

2 Here assuming a 64-byte cache line.

asked Dec 09 '17 19:12 by BeeOnRope




1 Answer

The cache line offsets can be useful, but they can also be misleading, as your example shows. I will discuss how the line offsets impact the data prefetchers on modern Intel processors, based on my experiments on Haswell.

The method I followed is simple. First, I disable all the data prefetchers except the one I want to test. Second, I design a sequence of accesses that exhibits a particular pattern of interest; the target prefetcher will see this sequence and learn from it. Then I follow that with an access to a particular line, and determine whether the prefetcher has prefetched that line by accurately measuring the latency. The loop doesn't contain any other loads, though it does contain one store, used to record the latency measurement in a buffer.

There are 4 hardware data prefetchers. The behaviors of the DCU prefetcher and the L2 adjacent line prefetcher are not affected by the pattern of the line offsets, but only by the pattern of 64-byte aligned addresses.

My experiments don't show any evidence that the L2 streaming prefetcher even receives the cache line offset. It seems that it only gets the line-aligned address. For example, by accessing the same line multiple times, the offset pattern by itself does not seem to have an impact on the behavior of the prefetcher.

The DCU IP prefetcher shows interesting behavior. I've tested two cases:

  • If a load has decreasing offsets, the prefetcher will prefetch one or more lines both in the forward and backward direction.
  • If a load has increasing offsets, the prefetcher will prefetch one or more lines but only in the forward direction.
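The two bullets can be restated as a toy model (this only summarizes the measurements above; it is not Intel's actual prefetch logic, and the function name and prefetch distance of one line are illustrative):

```python
def dcu_ip_prefetch_targets(line, offsets):
    """Toy model of the observed DCU IP behavior: given the cache line
    a load touches and its recent intra-line offset history, return
    the lines one might expect to see prefetched."""
    decreasing = all(b < a for a, b in zip(offsets, offsets[1:]))
    if decreasing:
        return [line - 1, line + 1]  # both directions
    return [line + 1]                # forward only

print(dcu_ip_prefetch_targets(10, [48, 32, 16]))  # → [9, 11]
print(dcu_ip_prefetch_targets(10, [16, 32, 48]))  # → [11]
```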
answered Sep 27 '22 16:09 by Hadi Brais