I know that on modern computers the currently executing code is held in a high-speed cache. However, in many language implementations the local (automatic) variables live on the stack, so there will be a lot of memory accesses to the stack.
Is the stack in another cache in normal architectures?
If not, suppose the stack could be relocated to be "local", i.e. very near the current code segment, so that it fell within the cache being used for the code. Would this speed up memory access for automatic variables?
Most modern computers cache memory indiscriminately: whatever memory the CPU accesses gets cached, and that includes the stack. In fact, the stack is highly likely to be cached nearly all the time, precisely because it is accessed so frequently. Note also that the L1 cache is typically split into an instruction cache and a data cache; the stack is held in the data cache regardless of its address, so relocating it near the code segment would not change which cache serves it.
The main concern with caching the stack is that, because it is modified so frequently, the cached copy is constantly out of sync with main memory. In practice this is not a problem, because most architectures use a write-back cache: each block of cache, called a cache line, carries a dirty bit, and a modified line is copied back to main memory only when it has to be (typically on eviction). Main-memory updates are therefore relatively rare and performance stays good.
Performance problems can arise, however, if two different threads write to memory on the same cache line. Each write invalidates the line in the other core's cache (a coherence miss), forcing a write-back and a re-fetch, which lowers performance. Normally, software is not structured such that two different threads use the same stack; each thread gets its own. But if it were to occur, performance could be poor on some architectures, because the conflicting stack accesses by the different threads would cause frequent coherence misses.