
Emulation of lex like functionality in Perl or Python

Is there a way to tokenize strings in a line based on multiple regexes?

One example:

I have to extract all href attributes, their corresponding link text, and some other text matching a different regex. So I have three expressions, and I would like to tokenize each line and extract the tokens of text matching every expression.

I have actually done this using flex (not to be confused with Adobe Flex), which is an implementation of the good old lex. lex provides an elegant way to do this by executing "actions" based on expressions. One can also control the way lex reads a file (block- or line-based reads).

The problem is that flex produces C/C++ code which does the actual tokenizing job, and I have a makefile that wraps all of this together. I was wondering if Perl or Python can do the same thing in some way. It's just that I would like to do everything I need in a single programming language.

Tokenizing is just one of the things that I want to do as part of my application.

Apart from Perl or Python, can any other language (functional ones included) do this?

I did read about PLY and ANTLR here (Parsing, where can I learn about it).

But is there a way to do it naturally in Python itself? Pardon my ignorance, but are these tools used in any popular products/services?
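For what it's worth, lex-style rule/action dispatch can be sketched in pure Python with the standard re module, by joining the rules into a single alternation of named groups and dispatching on which group matched. The patterns and token names below are illustrative, not a real HTML grammar:

```python
import re

# Lex-style rule table: (token name, pattern). Order matters, like in lex:
# earlier rules win when two patterns could match at the same position.
rules = [
    ("HREF", r'href="[^"]*"'),   # an href attribute
    ("TEXT", r'>[^<]+<'),        # text between two tags
    ("WORD", r'\w+'),            # fallback: bare words
]

# Combine all rules into one master regex of named groups.
master = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in rules))

def tokenize(line):
    """Yield (token_name, matched_text) pairs, lex-style."""
    for m in master.finditer(line):
        yield m.lastgroup, m.group()

tokens = list(tokenize('<a href="http://example.com">Example</a>'))
# tokens: [('WORD', 'a'), ('HREF', 'href="http://example.com"'),
#          ('TEXT', '>Example<'), ('WORD', 'a')]
```

One caveat with this approach: avoid capturing parentheses inside the individual patterns (or make them non-capturing with `(?:...)`), since `m.lastgroup` reports the last matched group and nested captures would confuse the dispatch.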

Thank you.

asked Nov 28 '22 by prabhu

2 Answers

Look at the documentation for the following modules on CPAN:

HTML::TreeBuilder

HTML::TableExtract

and

Parse::RecDescent

I've used these modules to process quite large and complex web pages.

answered Dec 10 '22 by slashmais


If you're specifically after parsing links out of web pages, then Perl's WWW::Mechanize module will figure things out for you in a very elegant fashion. Here's a sample program that grabs the front page of Stack Overflow and parses out all the links, printing their text and corresponding URLs:

#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new;

$mech->get("http://stackoverflow.com/");

$mech->success or die "Oh no! Couldn't fetch stackoverflow.com";

foreach my $link ($mech->links) {
    print "* [",$link->text, "] points to ", $link->url, "\n";
}

In the main loop, each $link is a WWW::Mechanize::Link object, so you're not just constrained to getting the text and URL.
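If you'd rather stay in Python, a rough equivalent can be built on the standard library's html.parser module. This is a minimal sketch (the class name and the sample HTML are made up for illustration), collecting the text and href of each anchor:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect (text, url) pairs for every <a href="..."> element."""

    def __init__(self):
        super().__init__()
        self.links = []     # collected (text, url) pairs
        self._href = None   # href of the <a> we are currently inside
        self._text = []     # text fragments seen inside that <a>

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

parser = LinkCollector()
parser.feed('<p><a href="http://stackoverflow.com/">Stack Overflow</a></p>')
# parser.links: [('Stack Overflow', 'http://stackoverflow.com/')]
```

Unlike WWW::Mechanize, this doesn't fetch the page for you; you'd pair it with urllib.request (or similar) to download the HTML first.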

All the best,

Paul

answered Dec 10 '22 by pjf