
Can't understand Belady's anomaly

So Belady's anomaly states that when using a FIFO page replacement policy, adding more page frames can result in more page faults.

My intuition says that we should have fewer, or at most the same number of, page faults as we add more page space.

If we think of a FIFO queue as a pipe, adding more page space is like making the pipe bigger:

O____O      size 4
O________O  size 8

So, why would you get more page faults? My intuition says that with a longer pipe, you'd take a bit longer to start having page faults (so, with an infinite pipe you'd have no page faults) and then you'd have just as many page faults and just as often as with a smaller pipe.

What is wrong with my reasoning?

asked Jan 26 '11 by devoured elysium


People also ask

How do you overcome Belady's anomaly?

Implementing an alternative page replacement algorithm eliminates Belady's anomaly. Stack-based algorithms, such as the Optimal Page Replacement algorithm and the Least Recently Used (LRU) algorithm, avoid the issue of increased page faults because they assign a replacement priority to each page.

What do you understand by Belady's anomaly?

In computer storage, Bélády's anomaly is the phenomenon in which increasing the number of page frames results in an increase in the number of page faults for certain memory access patterns. This phenomenon is commonly experienced when using the first-in first-out (FIFO) page replacement algorithm.

Why does Belady's anomaly happen?

Generally, on increasing the number of frames allocated to a process's virtual memory, its execution becomes faster because fewer page faults occur. Sometimes the reverse happens, i.e. more page faults occur when more frames are allocated to a process. This unexpected result is termed Belady's anomaly.

Which algorithm does not suffer from Belady's anomaly?

Stack based algorithms do not suffer from Belady's Anomaly. This is because these algorithms assign priority to a page for replacement that is independent of the number of frames in the main memory.
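As an illustration (not part of the original page), here is a short Python sketch of LRU, one of the stack-based algorithms mentioned above. Run on the same reference string used in the answer below, it shows the fault count *dropping* from 10 to 8 when a fourth frame is added, as the stack property guarantees:

```python
from collections import OrderedDict

def lru_faults(requests, num_frames):
    """Simulate LRU page replacement; return the number of page faults."""
    frames = OrderedDict()  # least recently used page at the left
    faults = 0
    for page in requests:
        if page in frames:
            frames.move_to_end(page)    # hit: mark page as most recently used
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.popitem(last=False)  # evict the least recently used page
        frames[page] = None
    return faults

requests = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(lru_faults(requests, 3))  # 10 faults
print(lru_faults(requests, 4))  # 8 faults
```

Because a hit refreshes the page's position, the set of pages held in n frames is always a subset of the pages held in n+1 frames, so adding frames can never add faults.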


1 Answer

The reason that, under FIFO, increasing the number of page frames can increase the fault rate for some access patterns is that with more frames, recently requested pages can remain near the bottom of the FIFO queue for longer.

Consider the third time that "3" is requested in the wikipedia example here: http://en.wikipedia.org/wiki/Belady%27s_anomaly

Page faults are marked with an "f".

1 (3 frames):

Page Requests   3    2    1    0    3    2    4    3    2    1    0    4
Newest Page     3f   2f   1f   0f   3f   2f   4f   4    4    1f   0f   0
                     3    2    1    0    3    2    2    2    4    1    1
Oldest Page               3    2    1    0    3    3    3    2    4    4

2 (4 frames):

Page Requests   3    2    1    0    3    2    4    3    2    1    0    4
Newest Page     3f   2f   1f   0f   0    0    4f   3f   2f   1f   0f   4f
                     3    2    1    1    1    0    4    3    2    1    0
                          3    2    2    2    1    0    4    3    2    1
Oldest Page                    3    3    3    2    1    0    4    3    2

In the first example (with three frames), there are 9 page faults.

In the second example (with four frames), there are 10 page faults.

When using FIFO, increasing the size of the cache changes the order in which items are evicted, which in some cases can increase the fault rate.
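The two traces above are easy to reproduce with a short simulation. This Python sketch (my addition, not from the original answer) implements FIFO replacement: note that a hit does not touch the queue, which is exactly why an old-but-recently-used page can still be the next eviction victim:

```python
from collections import deque

def fifo_faults(requests, num_frames):
    """Simulate FIFO page replacement; return the number of page faults."""
    frames = deque()  # oldest page at the left
    faults = 0
    for page in requests:
        if page in frames:
            continue          # hit: FIFO order is NOT updated
        faults += 1
        if len(frames) == num_frames:
            frames.popleft()  # evict the oldest page
        frames.append(page)
    return frames, faults

# The reference string from the Wikipedia example
requests = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(requests, 3)[1])  # 9 faults
print(fifo_faults(requests, 4)[1])  # 10 faults
```

Trying other frame counts or shuffled reference strings makes it clear that the anomaly is a property of particular access patterns, not a general trend.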

Belady's anomaly does not state anything about the general trend of fault rates with respect to cache size, so your reasoning (viewing the cache as a pipe) is not wrong in the general case.

In summary: Belady's anomaly points out that a larger cache can cause items to rise through the FIFO queue more slowly than they would in a smaller cache, and that a particular (and possibly rare) access pattern can exploit this to make the larger cache produce a higher fault rate.

answered Sep 21 '22 by Olhovsky