As a learning exercise, I'm writing a web scraper in Common Lisp. The (rough) plan is to fetch each page and then parse it with xmls.
I've just run into a sticking point: the website I'm scraping doesn't always produce valid XHTML. This means that the parsing step (with xmls) doesn't work. And I'm as loath to use regular expressions as this guy :-)
So, can anyone recommend a Common Lisp package for parsing invalid XHTML? I'm imagining something similar to the HTML Agility Pack for .NET ...
The "closure-html" project (available in Quicklisp) will recover from bogus HTML and produce something with which you can work. I use closure-html together with CXML to process arbitrary web pages, and it works nicely. http://common-lisp.net/project/closure/closure-html/
For future visitors: today we have Plump: https://shinmera.github.io/plump
Plump is a parser for HTML/XML-like documents, focusing on being lenient towards invalid markup. It can handle things like invalid attributes, bad closing tag order, unencoded entities, nonexistent tag types, self-closing tags and so on. It parses documents to a class representation and offers a small set of DOM functions to manipulate it. You are free to change it to parse to your own classes, though.
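For example, a quick sketch of how it copes with broken markup (the markup below is deliberately invalid):

    ;; Parse intentionally broken HTML; Plump builds a tree anyway.
    (ql:quickload :plump)

    (let ((root (plump:parse "<p>unclosed <b>tags <div foo>out of order</p>")))
      ;; Passing NIL as the stream makes SERIALIZE return the repaired
      ;; document as a string.
      (plump:serialize root nil))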
And then we have other libraries to query the document, such as lquery (jQuery-like) or CLSS (simple CSS selectors), both by the same author; see the sketch below.
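For instance, a minimal sketch that collects the link targets from a page (Drakma is assumed for the download, and example.com is a placeholder):

    ;; Download a page, parse it with Plump, then select nodes with a CSS
    ;; selector via CLSS.
    (ql:quickload '(:drakma :plump :clss))

    (let* ((html (drakma:http-request "https://example.com/"))
           (root (plump:parse html)))
      ;; CLSS:SELECT returns a vector of matching nodes.
      (loop for a across (clss:select "a" root)
            collect (plump:attribute a "href")))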
We also now have a little tutorial on the Common Lisp Cookbook: https://lispcookbook.github.io/cl-cookbook/web-scraping.html
See also the Common Lisp wiki: http://www.cliki.net/Web