Left recursion will make the parser go into an infinite loop. So why does the same not happen with right recursion?
In a recursive descent parser, a grammar rule like A -> B C | D
is implemented by trying to parse B at the current position and then, if that succeeds, trying to parse C at the position where B ended. If either fails, we backtrack and try to parse D at the original position¹.
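Here is a minimal sketch of that scheme, assuming a list of single-character tokens and full backtracking. The concrete grammar (B = 'b', C = 'c', D = 'd') and all function names are hypothetical, purely for illustration:

```python
def match(tokens, pos, expected):
    """Return the position after the token if it matches, else None."""
    if pos < len(tokens) and tokens[pos] == expected:
        return pos + 1
    return None

def parse_B(tokens, pos): return match(tokens, pos, 'b')
def parse_C(tokens, pos): return match(tokens, pos, 'c')
def parse_D(tokens, pos): return match(tokens, pos, 'd')

def parse_A(tokens, pos):
    # First alternative: B followed by C.
    after_B = parse_B(tokens, pos)
    if after_B is not None:
        after_C = parse_C(tokens, after_B)
        if after_C is not None:
            return after_C            # A -> B C succeeded
    # Backtrack to the original position and try the second alternative.
    return parse_D(tokens, pos)       # None means A failed here

print(parse_A(['b', 'c'], 0))  # 2: matched B C
print(parse_A(['d'], 0))       # 1: matched D
print(parse_A(['b', 'x'], 0))  # None: B matched, but C and then D failed
```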
If C is equal to A (right recursion), that's fine. It simply means that if B succeeds, we try to parse A at the position after B, which in turn means that we first try to parse B there and then either try A again at an even later position or fall back to D. This continues until B eventually fails and we fall back to D.
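Writing the right-recursive rule A -> B A | D the same naive way (reusing the hypothetical `match` helper from the sketch above), the recursion terminates because every recursive call starts at a strictly later position:

```python
def parse_A_right(tokens, pos):
    after_B = match(tokens, pos, 'b')
    if after_B is not None:
        after_A = parse_A_right(tokens, after_B)   # recurse *after* B
        if after_A is not None:
            return after_A
    return match(tokens, pos, 'd')                 # fall back to D

print(parse_A_right(['b', 'b', 'd'], 0))  # 3: B B D, recursion bottoms out at D
```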
If B is equal to A (left recursion), however, that is very much a problem, because now, to parse A, we first try to parse A at the current position, which tries to parse A at the same position ... ad infinitum. We never advance our position and never try anything except A (which just keeps trying itself), so we never reach a point where we might terminate.
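The same naive translation of the left-recursive rule A -> A B | D makes the problem visible (again a hypothetical sketch reusing `match`): the very first thing the function does is call itself at the same position, so no input is ever consumed and the recursion never bottoms out. Python reports this as a RecursionError; a language without a recursion limit would simply overflow the stack or loop forever.

```python
def parse_A_left(tokens, pos):
    after_A = parse_A_left(tokens, pos)            # same position: infinite regress
    if after_A is not None:
        after_B = match(tokens, after_A, 'b')
        if after_B is not None:
            return after_B
    return match(tokens, pos, 'd')                 # never reached

# parse_A_left(['d'], 0)  # uncomment to see the RecursionError
```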
¹ Assuming full backtracking. Otherwise A might fail without trying D if B and/or C consumed any tokens (or more tokens than we've got lookahead), but none of this matters for this discussion.