
Why is bottom-up parsing more common than top-down parsing?

It seems that recursive-descent parsers are not only the simplest to explain, but also the simplest to design and maintain. They aren't limited to LALR(1) grammars, and the code itself can be understood by mere mortals. In contrast, bottom-up parsers have limits on the grammars they are able to recognize, and they need to be generated by special tools (because the tables that drive them are next-to-impossible to generate by hand).
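To make the "simplest to explain" claim concrete, here is a minimal hand-written recursive-descent evaluator for a toy expression grammar (my own illustrative sketch, not code from the question). Each nonterminal becomes one ordinary function:

```python
import re

# Toy grammar (each nonterminal maps to one function below):
#   expr   -> term   (('+' | '-') term)*
#   term   -> factor (('*' | '/') factor)*
#   factor -> NUMBER | '(' expr ')'

def tokenize(text):
    # Split input into numbers, parentheses, and operators.
    return re.findall(r"\d+|[()+\-*/]", text)

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, expected=None):
        tok = self.peek()
        if tok is None or (expected is not None and tok != expected):
            raise SyntaxError(f"expected {expected!r}, got {tok!r}")
        self.pos += 1
        return tok

    def expr(self):
        value = self.term()
        while self.peek() in ("+", "-"):
            op = self.eat()
            rhs = self.term()
            value = value + rhs if op == "+" else value - rhs
        return value

    def term(self):
        value = self.factor()
        while self.peek() in ("*", "/"):
            op = self.eat()
            rhs = self.factor()
            value = value * rhs if op == "*" else value / rhs
        return value

    def factor(self):
        if self.peek() == "(":
            self.eat("(")
            value = self.expr()
            self.eat(")")
            return value
        return int(self.eat())

print(Parser(tokenize("2+3*(4-1)")).expr())  # prints 11
```

The structure of the code mirrors the grammar directly, which is exactly what makes hand-written recursive descent easy to debug and maintain.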

Why then, is bottom-up (i.e. shift-reduce) parsing more common than top-down (i.e. recursive descent) parsing?

Billy ONeal asked Nov 30 '10

People also ask

Why bottom up parser is more powerful?

The LR parser is a non-recursive, shift-reduce, bottom-up parser. It handles a wide class of context-free grammars, which makes it one of the most efficient syntax-analysis techniques.

Which parser is better top down or bottom up?

Top-down parsing uses leftmost derivation, while bottom-up parsing uses rightmost derivation (in reverse). Top-down parsing is less powerful than bottom-up parsing, which can recognize a larger class of grammars.

What kind of parser top down or bottom up is most common in production compilers?

Recursive-descent parsing, a common form of top-down parsing, is what many production compilers use (typically hand-written).

What is the main disadvantage of top-down parsing why?

The main disadvantage of top-down parsing is that it can waste time exploring parse trees that don't match the input (e.g., comparing the first word of the input against the leftmost branch of a predicted tree). Bottom-up parsing avoids this.


2 Answers

If you choose a powerful parser generator, you can code your grammar without worrying about peculiar properties. (LA)LR means you don't have to worry about left recursion, one less headache. GLR means you don't have to worry about local ambiguity or lookahead.

And bottom-up parsers tend to be pretty efficient. So, once you've paid the price of a bit of complicated machinery, it is easier to write grammars and the parsers perform well.
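The shift-reduce mechanism behind those (LA)LR parsers can be sketched as a toy loop (my own illustration, deliberately simplified): a real LR parser is driven by machine-generated action/goto tables and lookahead, while this version just greedily tries reductions in rule order, which happens to work for this tiny grammar.

```python
# Toy shift-reduce recognizer for the grammar
#   E -> E '+' T
#   E -> T
#   T -> NUM
# NOTE: a real LR parser decides shift-vs-reduce from state tables;
# this sketch simply tries reductions greedily in the order listed.
RULES = [
    (("E", "+", "T"), "E"),
    (("T",), "E"),
    (("NUM",), "T"),
]

def shift_reduce(tokens):
    stack, trace, tokens = [], [], list(tokens)
    while True:
        for body, head in RULES:
            if tuple(stack[-len(body):]) == body:
                stack[-len(body):] = [head]           # reduce
                trace.append(f"reduce {' '.join(body)} -> {head}")
                break
        else:
            if tokens:
                stack.append(tokens.pop(0))           # shift
                trace.append(f"shift {stack[-1]}")
            else:
                break
    # Accept if the whole input reduced to the start symbol E.
    return stack == ["E"], trace

ok, steps = shift_reduce(["NUM", "+", "NUM"])
print(ok)     # True
print(steps)  # the sequence of shift and reduce actions taken
```

The trace makes the "rightmost derivation in reverse" behavior visible: the parser builds the tree bottom-up, reducing handles as they appear on top of the stack.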

You should expect to see this kind of choice wherever a programming construct commonly occurs: if something is easier to specify and performs well, complex machinery will win even though the machinery itself is complicated. As another example, the database world has moved to relational tools, despite the fact that you can hand-build an indexed file yourself. It's easier to write the data schemas, it's easier to specify the indexes, and with sufficiently sophisticated machinery behind them (you don't have to look at the gears, you just use them), they can be pretty fast with almost no effort. Same reasons.

Ira Baxter answered Sep 28 '22


It stems from a couple different things.

BNF (and the theory of grammars generally) comes from computational linguistics: folks researching natural-language parsing. BNF is a very attractive way of describing a grammar, so it's natural to want to consume this notation directly to produce a parser.

Unfortunately, top-down parsing techniques tend to fall over when applied to such notations, because they cannot handle many common cases (e.g., left recursion). This leaves you with the LR family, which performs well and can handle the grammars, and since they're being produced by a machine, who cares what the code looks like?
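A concrete sketch of the left-recursion problem (my own illustration, not from the answer): transliterating a left-recursive rule directly into recursive descent recurses before consuming any input, so the standard cure is to rewrite the rule as iteration.

```python
# Direct transliteration of the left-recursive rule
#     expr -> expr '+' term | term
# would be:
#
#     def expr(self):
#         left = self.expr()   # recurses before consuming a token -> infinite recursion
#         ...
#
# The standard rewrite turns left recursion into iteration,
#     expr -> term ('+' term)*
# which a top-down parser handles naturally:

def parse_expr(tokens):
    """Parse 'NUM (+ NUM)*' and return how many NUMs were summed."""
    pos = 0

    def term():
        nonlocal pos
        if pos < len(tokens) and tokens[pos] == "NUM":
            pos += 1
            return 1
        raise SyntaxError("expected NUM")

    count = term()
    while pos < len(tokens) and tokens[pos] == "+":
        pos += 1
        count += term()
    return count

print(parse_expr(["NUM", "+", "NUM", "+", "NUM"]))  # prints 3
```

LR parsers need no such rewrite: left recursion is actually the preferred form there, since it keeps the parser's stack shallow.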

You're right, though: top-down parsers work more "intuitively," so they're easier to debug and maintain, and once you have a little practice they're just as easy to write as those generated by tools. (Especially when you get into shift/reduce conflict hell.) Many of the answers talk about parsing performance, but in practice top-down parsers can often be optimized to be as fast as machine-generated parsers.

That's why many production compilers use hand-written lexers and parsers.

John Doty answered Sep 28 '22