I am investigating writing a cache, and I found several references to cache sizes being a prime number.
E.g.
Maximum number of objects must be a prime number for the cache count values. Range value is from 3 to a maximum number that is logical for the task and that does not affect performance. Non-prime numbers are automatically rounded up to the next higher prime number. If the number fails, the default value will be used.
https://www.ibm.com/support/knowledgecenter/SSPREK_7.0.0/com.ibm.isam.doc_70/ref_cache_size_appl.html
Let's say we have a cache with m entries and a usage pattern with striding behavior: it hits every nth entry for some n. The pattern wraps around the end of the cache and hits some entries again while other slots are still empty. For example, if the cache has size 10 and the pattern hits every 6th entry starting at 0, it touches slots in the sequence 0, 6, 2, 8, 4, 0, 6, 2, 8, 4, ... So half the cache goes unused. In general, a stride of n over a cache of size m touches m/GCD(n, m) distinct slots, i.e. a fraction 1/GCD(n, m) of the cache, leaving the rest empty. Full utilization under striding therefore requires GCD(n, m) = 1. Making the cache size a prime number makes this very likely: it only fails when n is a multiple of m (including n = m), which is fairly unlikely for a prime that isn't small.
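To illustrate, here is a small sketch (the function name `utilization` is my own, not from any library) that simulates the striding access pattern and counts how many distinct slots actually get used:

```python
from math import gcd

def utilization(m, n):
    """Fraction of an m-slot cache touched by accesses striding by n.

    Simulates slot = (k * n) % m for k = 0..m-1 and counts
    how many distinct slots are hit.
    """
    slots_hit = {(k * n) % m for k in range(m)}
    return len(slots_hit) / m

# Stride of 6 over a size-10 cache: only half the slots are used,
# matching 1 / gcd(6, 10) = 1/2.
print(utilization(10, 6))  # 0.5

# Bump the size to the prime 11 and the same stride uses every slot.
print(utilization(11, 6))  # 1.0
```

The simulated fraction always equals 1/GCD(n, m), which is why rounding the size up to a prime, as the IBM documentation describes, protects against this pattern.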