I started writing my recursive descent parser, and so far it works just fine: it prints "ACCEPT" or "REJECT" after parsing the input. But I see online, and in another textbook, that they are "Using a PDA for a Top-Down Parse". So I just want confirmation that this is merely ANOTHER way of coding the parser and not THE way. My parser looks like this:
public class Parser {
    private int n = 0;
    private Tokens token = Main.TokenList.get(n);

    public Parser() {
        boolean accepted = parse_Program();
        if (accepted) {
            System.out.println("ACCEPT");
        } else {
            System.out.println("REJECT");
        }
    }
    private boolean parse_Program() {
        return parse_DeclarationList();
    }
    private boolean parse_DeclarationList() {
        if (!parse_Declaration()) {
            return false;
        }
        return parse_DeclarationListPrime();
    }
    private boolean parse_DeclarationListPrime() {
        // DeclarationList' -> Declaration DeclarationList' | epsilon (at "$")
        if (token.getContents().equals("int") || token.getContents().equals("void") || token.getContents().equals("float")) {
            if (!parse_Declaration()) {
                return false;
            }
            // Recurse so any number of declarations is accepted; the original
            // returned true after one extra Declaration, capping the list at two.
            return parse_DeclarationListPrime();
        } else if (token.getContents().equals("$")) {
            return true;
        }
        return false;
    }
    private boolean parse_Declaration() {
        if (!parse_TypeSpecifier()) {
            return false;
        }
        if (token.getType().equals("ID")) {
            Accept();
        } else {
            return false;
        }
        return parse_DDD();
    }
    private boolean parse_DDD() {
        if (token.getContents().equals("(")) {
            Accept();
            if (!parse_params()) {
                return false;
            }
            if (token.getContents().equals(")")) {
                Accept();
                return parse_compoundStmt();
            }
        } else if (token.getContents().equals(";") || token.getContents().equals("[")) {
            return parse_varDeclarationPrime();
        }
        return false;
    }
    private boolean parse_compoundStmt() {
        if (token.getContents().equals("{")) {
            Accept();
            if (!parse_localDeclarations()) {
                return false;
            }
            if (token.getContents().equals("}")) {
                Accept();
                return true;
            }
        }
        return false;
    }
    private boolean parse_localDeclarations() {
        return parse_localDeclarationsPrime();
    }
    private boolean parse_localDeclarationsPrime() {
        // localDeclarations -> varDeclaration localDeclarations | epsilon.
        // The original always demanded a varDeclaration, so a compound
        // statement with no local declarations was rejected.
        if (token.getContents().equals("int") || token.getContents().equals("void") || token.getContents().equals("float")) {
            if (!parse_varDeclaration()) {
                return false;
            }
            return parse_localDeclarationsPrime();
        }
        return true;
    }
    private boolean parse_params() {
        if (token.getContents().equals("int") || token.getContents().equals("void") || token.getContents().equals("float")) {
            // params -> void | paramList: one token of lookahead decides
            if (token.getContents().equals("void") && getNextToken().getContents().equals(")")) {
                Accept();
                return true;
            }
            return parse_paramList();
        }
        return false;
    }
    private Tokens getNextToken() {
        // Guard against peeking past the end of the token list.
        if (n + 1 < Main.TokenList.size()) {
            return Main.TokenList.get(n + 1);
        }
        return token; // at end of input; Accept() has already set contents to "$"
    }
    private boolean parse_paramList() {
        if (!parse_param()) {
            return false;
        }
        return parse_paramListPrime();
    }
    private boolean parse_paramListPrime() {
        if (token.getContents().equals(",")) {
            Accept();
            if (!parse_param()) {
                return false;
            }
            return parse_paramListPrime();
        } else if (token.getContents().equals(")")) {
            return true;
        }
        return false;
    }
    private boolean parse_param() {
        if (token.getContents().equals("int") || token.getContents().equals("void") || token.getContents().equals("float")) {
            Accept();
            if (token.getType().equals("ID")) {
                Accept();
                return parse_paramPrime();
            }
        }
        return false;
    }
    private boolean parse_paramPrime() {
        if (token.getContents().equals("[")) {
            Accept();
            if (token.getContents().equals("]")) {
                Accept();
                return true;
            }
        } else if (token.getContents().equals(")")) {
            return true;
        }
        return false;
    }
    private boolean parse_varDeclaration() {
        if (!parse_TypeSpecifier()) {
            return false;
        }
        if (token.getType().equals("ID")) {
            Accept();
        } else {
            return false;
        }
        return parse_varDeclarationPrime();
    }
    private boolean parse_varDeclarationPrime() {
        if (token.getContents().equals(";")) {
            Accept();
            return true;
        } else if (token.getContents().equals("[")) {
            Accept();
            if (token.getType().equals("NUM")) {
                Accept();
                if (token.getContents().equals("]")) {
                    Accept();
                    if (token.getContents().equals(";")) {
                        Accept();
                        return true;
                    }
                }
            }
        }
        return false;
    }
    private void Accept() {
        // No try/catch needed: the bounds check below already prevents
        // IndexOutOfBoundsException, and silently swallowing exceptions
        // would hide real bugs.
        if (n + 1 <= Main.TokenList.size() - 1) {
            token = Main.TokenList.get(++n);
        } else {
            // Past the last token: report end-of-input as "$".
            token.setContents("$");
        }
    }
    private boolean parse_TypeSpecifier() {
        if (token.getContents().equals("int") || token.getContents().equals("float") || token.getContents().equals("void")) {
            Accept();
            return true;
        }
        return false;
    }
}
This is from the textbook: 
Top-down parsing requires some kind of parser stack. The point of recursive descent parsing is to use the call stack as a parser stack. So you should consider the algorithm in your text book to be an alternative to recursive descent.
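As a sketch of what that explicit-stack (PDA-style) alternative looks like, here is a table-driven LL(1) parser for a toy balanced-parentheses grammar. The grammar, the class name StackParser, and the parse method are all illustrative inventions, not the poster's grammar; the point is only that the grammar symbols live on a heap-allocated Deque instead of the call stack:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// PDA-style top-down parse: grammar symbols go on an explicit stack.
// Toy grammar (hypothetical, not the poster's):  S -> ( S ) S | epsilon
public class StackParser {
    public static boolean parse(String input) {
        Deque<Character> stack = new ArrayDeque<>();
        stack.push('$');            // end-of-input marker at the bottom
        stack.push('S');            // start symbol on top
        String tokens = input + "$";
        int i = 0;
        while (!stack.isEmpty()) {
            char top = stack.pop();
            char look = tokens.charAt(i);
            if (top == 'S') {
                // LL(1) "table" lookup: expand S -> ( S ) S on '(',
                // otherwise take the epsilon production (push nothing).
                if (look == '(') {
                    // Push the right-hand side in reverse order.
                    stack.push('S');
                    stack.push(')');
                    stack.push('S');
                    stack.push('(');
                }
            } else if (top == look) {
                i++;                // terminal matched: consume one token
                if (top == '$') {
                    return true;    // matched the end marker: accept
                }
            } else {
                return false;       // terminal mismatch: reject
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(parse("(())()") ? "ACCEPT" : "REJECT");
        System.out.println(parse("(()") ? "ACCEPT" : "REJECT");
    }
}
```

Note how each push of a production's right-hand side plays the role that a recursive call plays in the question's code; recursive descent is just this machine with the Deque replaced by the call stack.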
Recursive descent is a popular parsing solution, but it can be problematic in a language like C (or C++) which does not detect stack overflow (or does not reliably detect it). In that case, you would probably want to use a dedicated parser stack for a production parser, particularly if:
you run the parser in a multithreaded app (so that stacks are small);
whatever you are parsing is expected to have deep nesting (perhaps it's a dendritic data structure);
you are parsing unverified input and you are worried about attackers supplying an overly nested input;
you haven't worked hard enough to minimize the size of a stack frame, perhaps because you are allergic to dynamic memory allocation.
In general, you should avoid recursive descent and use an explicit stack in any context where the expected maximum call stack depth is likely to exceed the available stack size.
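To make the depth concern concrete, here is a small demo (class and method names invented for illustration): a recursive routine that walks nested parentheses one stack frame per nesting level will typically die with StackOverflowError on very deep input under default JVM thread-stack sizes, while an equivalent loop using constant call-stack space handles it easily:

```java
// Illustrative only: shows why deep nesting threatens recursive descent.
public class DepthDemo {
    // Recursive-descent style: one call-stack frame per nesting level.
    static int depthRecursive(String s, int i) {
        if (i < s.length() && s.charAt(i) == '(') {
            return 1 + depthRecursive(s, i + 1);
        }
        return 0;
    }

    // Loop style: constant call-stack usage regardless of nesting depth.
    static int depthIterative(String s) {
        int d = 0;
        for (int i = 0; i < s.length() && s.charAt(i) == '('; i++) {
            d++;
        }
        return d;
    }

    public static void main(String[] args) {
        String deep = "(".repeat(1_000_000); // String.repeat needs Java 11+
        System.out.println(depthIterative(deep));
        try {
            System.out.println(depthRecursive(deep, 0));
        } catch (StackOverflowError e) {
            System.out.println("recursive version overflowed the call stack");
        }
    }
}
```

The exact depth at which the recursive version fails depends on the JVM's thread stack size (the -Xss option) and frame size, which is precisely why it is hard to give hard guarantees for a production recursive-descent parser.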
Alternatively, you could use a parser generator which produces parsers with controlled stack size.