
Common Lisp package for parsing invalid HTML? [closed]

As a learning exercise, I'm writing a web scraper in Common Lisp. The (rough) plan is:

  1. Use Quicklisp to manage dependencies
  2. Use Drakma to load the pages
  3. Parse the pages with xmls

I've just run into a sticking point: the website I'm scraping doesn't always produce valid XHTML, which means that step 3 (parse the pages with xmls) fails. And I'm as loath to use regular expressions for this as this guy :-)

So, can anyone recommend a Common Lisp package for parsing invalid XHTML? I'm imagining something similar to the HTML Agility Pack for .NET ...
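The plan above can be sketched as a short pipeline; `fetch-and-parse` is a hypothetical helper name for illustration, and it is exactly the `xmls:parse` call that signals an error when the page isn't well-formed:

```lisp
;; Minimal sketch of the three steps, assuming Quicklisp is installed.
(ql:quickload '(:drakma :xmls))           ; step 1: dependencies

(defun fetch-and-parse (url)
  "Fetch URL with Drakma and parse the body with xmls.
Signals a parse error when the page isn't well-formed XHTML."
  (let ((body (drakma:http-request url)))  ; step 2: load the page
    (xmls:parse body)))                    ; step 3: parse (fails on invalid XHTML)
```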

asked Dec 06 '22 by Duncan Bayne


2 Answers

The "closure-html" project (available in Quicklisp) will recover from bogus HTML and produce something with which you can work. I use closure-html together with CXML to process arbitrary web pages, and it works nicely. http://common-lisp.net/project/closure/closure-html/
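A minimal sketch of that combination, using closure-html's LHTML builder (which produces a plain list representation) and its CXML DOM builder for a fetched page; the example input string is made up:

```lisp
(ql:quickload '(:drakma :closure-html :cxml))

;; closure-html recovers from broken markup instead of signalling an error.
;; The LHTML builder yields a plain list representation:
(chtml:parse "<p>unclosed" (chtml:make-lhtml-builder))
;; => (:HTML NIL (:HEAD NIL) (:BODY NIL (:P NIL "unclosed")))

;; Or build a CXML DOM document from a live page:
(chtml:parse (drakma:http-request "http://example.com/")
             (cxml-dom:make-dom-builder))
```

Once you have a DOM, the rest of the CXML toolchain (serialization, XPath via plexippus-xpath, etc.) applies as if the input had been clean XML.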

answered Dec 14 '22 by Xach


For future visitors: today we have Plump: https://shinmera.github.io/plump

Plump is a parser for HTML/XML like documents, focusing on being lenient towards invalid markup. It can handle things like invalid attributes, bad closing tag order, unencoded entities, inexistent tag types, self-closing tags and so on. It parses documents to a class representation and offers a small set of DOM functions to manipulate it. You are free to change it to parse to your own classes though.
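A quick sketch of Plump's lenient behaviour (the input string here is invented to show unclosed tags; passing `NIL` as the stream to `plump:serialize` returns a string):

```lisp
(ql:quickload :plump)

;; Plump tolerates markup that a strict XML parser would reject:
(let ((root (plump:parse "<p>unclosed<br><p>another")))
  ;; Serialize the repaired tree back out as a string.
  (plump:serialize root nil))
```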

and then we have other libraries to query the document, such as lquery (jQuery-like) or CLSS (simple CSS selectors), by the same author.
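For example, CLSS runs CSS selectors over a Plump DOM; the markup below is invented for illustration:

```lisp
(ql:quickload '(:plump :clss))

;; Select nodes with a CSS selector, then read attributes off them.
(let ((root (plump:parse
             "<div id='nav'><a href='/a'>A</a><a href='/b'>B</a></div>")))
  ;; clss:select returns a vector of matching elements.
  (map 'list (lambda (a) (plump:attribute a "href"))
       (clss:select "#nav a" root)))
;; => ("/a" "/b")
```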

We also now have a little tutorial on the Common Lisp Cookbook: https://lispcookbook.github.io/cl-cookbook/web-scraping.html

See also Common Lisp wiki: http://www.cliki.net/Web

answered Dec 15 '22 by Ehvince