I have a scenario to discuss here for Peterson's Algorithm:
int flag[2] = {0, 0};  // shared
int turn;              // shared

P0: flag[0] = 1;
    turn = 1;
    while (flag[1] == 1 && turn == 1)
    {
        // busy wait
    }
    // critical section
    ...
    // end of critical section
    flag[0] = 0;

P1: flag[1] = 1;
    turn = 0;
    while (flag[0] == 1 && turn == 0)
    {
        // busy wait
    }
    // critical section
    ...
    // end of critical section
    flag[1] = 0;
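For concreteness, here is one way the pseudocode above maps onto a runnable two-thread C program (a minimal sketch of my own; the atomic_int qualifiers supply the sequentially consistent memory the algorithm assumes, a point the answers below come back to):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

atomic_int flag[2];      // shared; atomic so the loads/stores are not reordered
atomic_int turn;         // shared
int counter = 0;         // shared state touched only inside the critical section

void *worker(void *arg) {
    int me = *(int *)arg;                      // 0 or 1
    int other = 1 - me;
    for (int i = 0; i < 100000; i++) {
        flag[me] = 1;                          // entry protocol
        turn = other;
        while (flag[other] == 1 && turn == other)
            ;                                  // busy wait
        counter++;                             // critical section
        flag[me] = 0;                          // exit protocol
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}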
Suppose both processes start executing concurrently. P0 sets flag[0] = 1 and then dies. Then P1 starts: its while condition is satisfied, since flag[0] == 1 (set by P0 before dying) and turn == 0 (set by P1 itself), so P1 is stuck in this loop forever, which is a deadlock.
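That interleaving can be made concrete with a small program (again my own illustration; thread 0 simply returns early to simulate P0 dying):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

atomic_int flag[2];
atomic_int turn;

void *p0_dies(void *arg) {
    flag[0] = 1;    // P0 announces its intent...
    return NULL;    // ...then "dies" before reaching turn = 1 or flag[0] = 0
}

void *p1_waits(void *arg) {
    flag[1] = 1;
    turn = 0;
    while (flag[0] == 1 && turn == 0)
        ;           // stuck forever: nothing will ever change flag[0] or turn
    puts("P1 entered its critical section");   // never reached
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, p0_dies, NULL);
    pthread_join(t0, NULL);     // P0 is gone, with flag[0] still set to 1
    pthread_create(&t1, NULL, p1_waits, NULL);
    pthread_join(t1, NULL);     // this join never returns: the program hangs
    return 0;
}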
So does Peterson's Algorithm simply not account for this case?
If the death of a process is not to be considered when analyzing such algorithms, then note that in Operating Systems by William Stallings, Appendix A contains a series of algorithms for concurrency, starting with four incorrect algorithms for demonstration. It proves them incorrect by considering, among other cases, a process dying before completion, but then claims that Peterson's Algorithm is correct.
I came across this thread, which gave me a clue that there is a problem with N processes (N > 2), but that for two processes the algorithm works fine.
So is there a problem in the analysis above (suggested by one of my classmates, and I fully second him), or does Peterson's Algorithm simply not claim freedom from deadlock even for two processes?
In this paper, Some myths about famous mutual exclusion algorithms, the author concludes that Peterson never claimed his algorithm assures bounded bypass.
Can unbounded bypass be thought of as the number of bypasses growing to infinity, as in the case of a deadlock? If so, we would have less trouble accepting that Peterson's Algorithm can lead to deadlock.
Certainly you could write code that throws an unhandled exception, but the algorithm assumes that the executing process will always set its flag to false after its critical section has executed. Under that assumption, Peterson's algorithm does pass the three tests for critical sections.
1) Mutual exclusion - flag[0] and flag[1] can both be true, but turn can only be 0 or 1. Therefore only one of the two critical sections can execute; the other process will spin-wait.
2) Progress - If Process 0 is in the critical section, then turn == 0 and flag[0] is true. Once it has completed its critical section (even if something catastrophic happens inside it), it must set flag[0] to false. If Process 1 was spin-waiting, it is now freed, since flag[0] != true.
3) Bounded waiting - If Process 1 is waiting, Process 0 can enter its critical section at most once before Process 1 is given the green light: when Process 0 re-runs its entry code it sets turn = 1, which releases Process 1 from its spin loop even though flag[0] is true again.
The problem with Peterson's Algorithm is that on modern architectures the memory system can break the mutual exclusion requirement. The issue is often blamed on CPU caches, but it is really one of store visibility and reordering: Process 0 on CPU 0 can set flag[0] to true while Process 1 on CPU 1 still reads flag[0] as false. In that case both processes enter their critical sections, and BANG... mutual exclusion fails, and race conditions are now possible.
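The usual fix, sketched here in C11 as my own illustration rather than part of the answer above, is to make the shared variables atomic and rely on the default sequentially consistent ordering, which forbids exactly the reordering that breaks the algorithm:

#include <stdatomic.h>

atomic_int flag[2];   // plain ints would permit the fatal reordering
atomic_int turn;

void peterson_lock(int me) {
    int other = 1 - me;
    // These default to memory_order_seq_cst. That total order is what keeps
    // the store to flag[me] from being reordered after the load of flag[other]
    // below - the store-load reordering that store buffers on x86 and ARM
    // would otherwise allow.
    atomic_store(&flag[me], 1);
    atomic_store(&turn, other);
    while (atomic_load(&flag[other]) == 1 && atomic_load(&turn) == other)
        ;  // busy wait
}

void peterson_unlock(int me) {
    atomic_store(&flag[me], 0);
}

Anything weaker than sequential consistency (acquire/release alone, for example) is not enough here: both threads can then pass their while loops at the same time.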
You are right: Peterson's algorithm assumes that processes do not fail while executing the synchronization part of the algorithm. I do not have access to the book you mention, but perhaps the other algorithms are incorrect because they do not account for processes failing even outside of the synchronization parts (which is worse)?
Note: while still interesting historically, Peterson's algorithm also makes assumptions about the way memory works that are not valid on today's hardware.