We like to think that a memory access is fast and constant, but on modern architectures/OSes, that's not necessarily true.
Consider the following C code:
int i = 34;
int *p = &i;
// do something that may or may not involve i and p
{...}
// 3 days later:
*p = 643;
What is the estimated cost of this last assignment in CPU instructions, if

i is in L1 cache,
i is in L2 cache,
i is in L3 cache,
i is in RAM proper,
i is paged out to an SSD,
i is paged out to a traditional disk?

Where else can i be?
Of course the numbers are not absolute, but I'm only interested in orders of magnitude. I tried searching the webs, but Google did not bless me this time.
Here are some hard numbers demonstrating that exact timings vary from one CPU family and version to another: http://www.agner.org/optimize/
These numbers are a good guide:
L1    1 ns
L2    5 ns
RAM   83 ns
Disk  13,700,000 ns (13.7 ms)
There is also an infographic giving the same orders of magnitude at http://news.ycombinator.com/item?id=702713.
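If you want to measure this on your own machine, a pointer-chasing microbenchmark is the standard trick. The sketch below is my own illustration (not from any answer here) and assumes a POSIX clock_gettime; it links a buffer of pointers into one randomly ordered cycle, so each load depends on the previous one and the hardware prefetcher cannot hide the latency. Vary n to move the working set from L1 out to RAM:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    /* Working-set size in pointers; vary this (a few KiB up to
       hundreds of MiB) to land the chain in L1, L2, L3, or RAM. */
    size_t n = (size_t)1 << 24;   /* 16M pointers = 128 MiB on 64-bit */
    long steps = 20 * 1000 * 1000;

    void **chain = malloc(n * sizeof *chain);
    size_t *idx = malloc(n * sizeof *idx);
    if (!chain || !idx) return 1;

    /* Shuffle the visit order (Fisher-Yates) so the prefetcher
       cannot predict the next address; rand() is fine for a demo. */
    for (size_t i = 0; i < n; i++) idx[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }

    /* Link the pointers into one big cycle in shuffled order. */
    for (size_t i = 0; i + 1 < n; i++)
        chain[idx[i]] = &chain[idx[i + 1]];
    chain[idx[n - 1]] = &chain[idx[0]];

    /* Chase the chain: every load depends on the previous one,
       so total time / steps approximates the load latency. */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    void **p = chain;
    for (long i = 0; i < steps; i++)
        p = *p;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9
              + (double)(t1.tv_nsec - t0.tv_nsec);
    /* Print p so the compiler cannot optimize the loop away. */
    printf("%.1f ns per load (final p = %p)\n", ns / steps, (void *)p);
    free(chain); free(idx);
    return 0;
}

If the numbers above hold for your machine, the reported nanoseconds per load should climb in steps as n outgrows each cache level.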
Norvig has some values from 2001 (http://norvig.com/21-days.html). Things have changed somewhat since then, but I think the relative speeds are still roughly correct.
It could also be in a CPU register. The C/C++ keyword register asks the compiler, not the CPU, to keep the variable in a register, but it is only a hint: there is no guarantee the variable will ever get into a register, or stay there if it does. (The keyword was deprecated in C++11 and removed in C++17.)
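As a small sketch (my own illustration, not from the answer), the hint looks like the code below. Note that in C you cannot take the address of a register variable at all, so the question's int *p = &i; would already rule the hint out for i:

/* register is a storage-class hint; modern compilers are free to ignore it. */
int sum(const int *a, int n) {
    register int s = 0;        /* hint: keep s in a register */
    for (int i = 0; i < n; i++)
        s += a[i];
    /* int *q = &s; */         /* compile error: address of a register variable */
    return s;
}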