I understand the definitions of the terms, but I am having trouble applying their concepts to code. For an exercise, we are asked to describe if the following code is spatial or temporal:
for (int i = 0; i < 10; i++) {
    printf("%d\n", some_array[i]);
}
I feel like this is spatial locality because when one index of the array is accessed, the memory location of the next index will be accessed on the next loop iteration. Is this the correct way to look at it? What determines whether code exhibits temporal versus spatial locality? More examples would be great.
The principle of temporal locality says that if a program accesses one memory address, there is a good chance that it will access the same address again. Loops are excellent examples of temporal locality in programs.
Temporal locality is the tendency of programs to use the same data items over and over during the course of their execution. It is the founding principle behind caches and gives a clear guide for data management: keep recently used items in fast memory.
With spatial locality, instructions near a recently executed instruction are likely to be executed soon. With temporal locality, a recently executed instruction is likely to be executed again very soon.
What is the distinction between spatial locality and temporal locality? Spatial locality refers to the tendency of execution to involve a number of memory locations that are clustered. Temporal locality refers to the tendency of a processor to access memory locations that have been used recently.
It's a bit of a silly exercise, really. Code is not temporal or spatial.
But temporal locality implies that you're going to access the same address multiple times, relatively close together in time. You're not doing that here (unless you count accessing i, I guess), so by a process of elimination you could conclude that this must be spatial locality.
More precisely, you're accessing some_array[0], then some_array[1], and so on. These are close together in address space, so yes, this may be "relying" on spatial locality.
In the context of hardware cache memory, which is where these concepts usually come up, the analysis is not usually done at the level of individual memory addresses. Locality is analyzed in terms of memory blocks, the units that are transferred between cache and main memory.
If you think of it that way, your code has both temporal and spatial locality. When your code reads some_array[0], if its address is not found in the cache, it is read from main memory and the whole block containing it is copied into the cache, evicting some other block according to a replacement policy (LRU, for example). Then, when you access some_array[1] a short time later, its block is already in the cache, so the read takes less time. Notice that you accessed the same block, and within a small amount of time. So you have both spatial and temporal locality.
Cache memory takes advantage of spatial and temporal locality to provide faster memory access. Whether your code can take advantage of this is a different issue altogether. Nevertheless, the compiler will do most of the optimizations for you, so you should only concern yourself with this after finding a bottleneck in a profiling session. In Linux environments, Cachegrind is great for this.
This code has temporal locality in the instruction cache because you're repeating the same instructions on each loop iteration (assuming your optimizer did not unroll the loop). It also has spatial locality in the data cache because if you access array element i, you will soon access elements i+1, i+2, etc. If your data cache line size is 16 bytes and your array holds 32-bit integers, then the data cache also loaded elements 1, 2, and 3 when you asked for element 0 (assuming your array starts at a cache line boundary).