I'm trying to learn the aspects of the Chomsky hierarchy that relate to programming languages, and I still have to read the Dragon Book.
I've read that most programming languages can be parsed as a context-free grammar (CFG). In terms of computational power, it equals that of a non-deterministic pushdown automaton. Am I right?
If it's true, then how could a CFG hold an unrestricted grammar (UG), which is Turing-complete? I'm asking because, even if programming languages are described by CFGs, they are actually used to describe Turing machines, and so via a UG.
I think that's because there are at least two different levels of computing: the first, the parsing of a CFG, focuses on the syntax related to the structure (representation?) of the language, while the other focuses on the semantics (sense, interpretation of the data itself?) related to the capabilities of the programming language, which is Turing-complete. Again, are these assumptions right?
Chomsky's work on the syntax of languages coincided neatly with the early development of programming languages, and thus it found ready application to this more formal style of artificial language than the natural languages of his original interest.
In its classical formulation [3], this so-called Chomsky hierarchy has four levels of increasing complexity: regular, context-free, context-sensitive and computably enumerable languages.
The Chomsky hierarchy classifies languages by the kinds of machines that accept them. The categories in Chomsky's hierarchy are as follows: Type 0, known as unrestricted grammar; Type 1, known as context-sensitive grammar; Type 2, known as context-free grammar; and Type 3, known as regular grammar.
I've read that most programming languages can be parsed as a context-free grammar (CFG). In terms of computational power, it equals that of a non-deterministic pushdown automaton. Am I right?
Technically yes. Usefully, no.
There are at least two useful ways to think about these questions:
The difficulty is that while most programming languages have an underlying structure that is easily described by a context-free grammar (Tcl being an interesting exception), many sentences that are described by the context-free grammar are not actually "in the language," where by "in the language" I mean "a valid program in the language in question." These extra sentences are usually ruled out by some form of static semantics. For example, the following utterance is a sentence in the context-free grammar of C programs but it is not itself in the set of valid C programs:
int f(void) { return n + 1; }
The problem here is that `n` is not in scope. C requires "declaration before use", and that property cannot be expressed using a context-free grammar.
A typical decision procedure for a real programming language is part of the front end of a compiler or interpreter, and it has at least two parts: one, the parser, is equivalent in decision power to a pushdown automaton; but the second does additional checks which rule out many utterances as invalid. If these checks require any kind of definition-before-use property, they can't be done by a pushdown automaton or context-free grammar.
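To make the two parts concrete, here is a minimal sketch (not any real compiler's front end; the tiny `decl`/`use` language is invented for illustration). The "parser" accepts anything the grammar allows; a separate semantic pass enforces declaration before use, which is exactly the kind of check a pushdown automaton cannot perform in general:

```python
import re

def parse(program):
    """Toy 'parser': accepts any sequence of 'decl NAME' / 'use NAME'
    statements. This much is context-free (in fact regular)."""
    tokens = iter(program.split())
    stmts = []
    for kw in tokens:
        name = next(tokens, None)
        if kw not in ("decl", "use") or name is None or not re.fullmatch(r"[a-z]+", name):
            raise SyntaxError(f"bad statement near {kw!r}")
        stmts.append((kw, name))
    return stmts

def check_scopes(stmts):
    """Semantic pass: every 'use' must be preceded by a 'decl' of the
    same name -- a property no CFG/PDA can enforce in general."""
    declared = set()
    errors = []
    for kw, name in stmts:
        if kw == "decl":
            declared.add(name)
        elif name not in declared:
            errors.append(f"{name} used before declaration")
    return errors

print(check_scopes(parse("decl n use n")))  # [] -- valid program
print(check_scopes(parse("use n decl n")))  # ['n used before declaration']
```

Both inputs are sentences of the toy grammar; only the first survives the second stage, mirroring how the C fragment above parses fine but fails the scope check.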
If it's true, then how could a CFG hold an unrestricted grammar (UG), which is Turing-complete?
A CFG doesn't "hold" anything—it simply describes a language.
... even if programming languages are described by CFGs, they are actually used to describe Turing machines, and so via a UG.
You're skipping past some important levels of indirection here.
I think that's because there are at least two different levels of computing: the first, the parsing of a CFG, focuses on the syntax related to the structure (representation?) of the language, while the other focuses on the semantics (sense, interpretation of the data itself?) related to the capabilities of the programming language, which is Turing-complete. Again, are these assumptions right?
They seem a bit muddled to me, but you're on the right track. A key question is "what's the difference between a language and a programming language?" The answer is that a programming language has a computational interpretation. Computational interpretations come in many fine varieties, and not all of them are Turing-complete. But the magic is in the interpretation, not in the syntax, so the Chomsky hierarchy is not very relevant here.
To prove my point, an extreme example: the regular language `[1-9][0-9]*`
is Turing-complete under the following interpretation: take each numeral to be the index of a Turing machine in some standard enumeration, and take "running the program" to mean running the machine with that index.
Therefore the language of integer literals is Turing-complete.
If your head doesn't hurt now, it should.
This is absolutely not true. Most programming languages have a syntax that can be described by a CFG or BNF, but conforming to the syntax does not guarantee a legal program. There are all sorts of extra conditions such as "variables must be declared before use" or "the types in this expression must be combined in a legal way" that are not covered by the grammar, and that is what makes the languages non-context-free. (This is a bit like XML, which has a formally verifiable definition, but usually also extra constraints that a parser cannot verify.)
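The type-combination condition can be illustrated with a deliberately tiny sketch (the grammar, the `+`-only expression form, and the checker are all invented for illustration): the grammar happily accepts any `literal + literal` expression, and a separate pass rejects the ill-typed ones.

```python
def literal(tok):
    """Classify a token as an int or string literal with its type tag."""
    if tok.isdigit():
        return ("int", int(tok))
    if tok.startswith('"') and tok.endswith('"'):
        return ("str", tok[1:-1])
    raise SyntaxError(tok)

def parse_expr(src):
    """Toy grammar: expr ::= literal '+' literal. Accepts 1 + "b" happily."""
    left, op, right = src.split()
    if op != "+":
        raise SyntaxError(op)
    return (literal(left), literal(right))

def type_check(expr):
    """Post-parse pass: '+' only combines operands of the same type."""
    (t1, _), (t2, _) = expr
    if t1 != t2:
        raise TypeError(f"cannot add {t1} and {t2}")
    return t1

print(type_check(parse_expr('1 + 2')))      # int
print(type_check(parse_expr('"a" + "b"')))  # str
# type_check(parse_expr('1 + "b"')) raises TypeError: grammatical, but not a legal program
```

All three inputs are sentences of the grammar; only the type check distinguishes the legal programs from the merely well-formed strings.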