
Which of lxml and libxml2 is better for parsing malformed html in Python?

Which one is better and more useful for parsing malformed HTML? I cannot find documentation on how to use libxml2.

Thanks.

asked Feb 17 '12 by bloody numen


2 Answers

In the libxml2 page you can see this note:

Note that some of the Python purist dislike the default set of Python bindings, rather than complaining I suggest they have a look at lxml the more pythonic bindings for libxml2 and libxslt and check the mailing-list.

and in the lxml page this other one:

The lxml XML toolkit is a Pythonic binding for the C libraries libxml2 and libxslt. It is unique in that it combines the speed and XML feature completeness of these libraries with the simplicity of a native Python API, mostly compatible but superior to the well-known ElementTree API.

So essentially, with lxml you get exactly the same functionality, but with a pythonic API compatible with the ElementTree library in the standard library (which means the standard-library documentation is useful for learning lxml). That's why lxml is preferred over libxml2, even though the underlying implementation is the same.
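To see what that ElementTree compatibility means in practice, here is a minimal sketch using the standard-library ElementTree; swapping the import for `from lxml import etree` leaves the rest of the code unchanged (assuming lxml is installed):

```python
# The ElementTree API that lxml mirrors; with lxml installed,
# `from lxml import etree` is a drop-in replacement for this import.
from xml.etree import ElementTree as etree

root = etree.fromstring('<root><item>a</item><item>b</item></root>')
texts = [el.text for el in root.findall('item')]
print(texts)  # ['a', 'b']
```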

Edit: Having said that, as other answers explain, to parse malformed HTML your best option is to use BeautifulSoup. One interesting thing to note is that, if you have lxml installed, BeautifulSoup will use it as its parser, as explained in the documentation for the new version:

If you don’t specify anything, you’ll get the best HTML parser that’s installed. Beautiful Soup ranks lxml’s parser as being the best, then html5lib’s, then Python’s built-in parser.
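If you want to avoid depending on that ranking, you can pass the parser name explicitly. A small sketch (assuming the bs4 package is installed; `'html.parser'` is the stdlib parser that is always available):

```python
# Pin the parser explicitly instead of relying on Beautiful Soup's
# "best installed" ranking; 'html.parser' needs no extra packages.
from bs4 import BeautifulSoup

broken = '<p>unclosed <b>tags'
soup = BeautifulSoup(broken, 'html.parser')
print(soup.b.text)  # 'tags'
```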

Anyway, even if BeautifulSoup uses lxml under the hood, you'll be able to parse broken HTML that you can't parse with lxml's XML parser directly. For example:

>>> import lxml.etree
>>> lxml.etree.fromstring('<html>')
...
XMLSyntaxError: Premature end of data in tag html line 1, line 1, column 7

However:

>>> import bs4
>>> bs4.BeautifulSoup('<html>', 'lxml')
<html></html>

Finally, note that lxml also provides an interface to the old version of BeautifulSoup as follows:

>>> import lxml.html.soupparser
>>> lxml.html.soupparser.fromstring('<html>')
<Element html at 0x13bd230>
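As a side note, lxml's dedicated HTML parser (as opposed to the XML parser shown earlier) also recovers from broken markup on its own. A minimal sketch, assuming lxml is installed:

```python
# lxml.html parses broken markup without raising, unlike lxml.etree
# with its strict XML parser.
import lxml.html

root = lxml.html.fromstring('<p>unclosed <b>tags')
print(root.tag)  # 'p'
```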

So at the end of the day, you'll probably be using lxml and BeautifulSoup anyway. The only thing left to choose is which API you like the most.

answered Sep 21 '22 by jcollado

Try BeautifulSoup instead. It is aimed at parsing poorly structured data.

http://pypi.python.org/pypi/BeautifulSoup

http://lxml.de/elementsoup.html
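For instance, a short sketch of pulling data out of badly broken markup with BeautifulSoup (assumes the bs4 package from the first link above is installed; the markup and URLs are made up for illustration):

```python
# Beautiful Soup recovers the unclosed <a> tags and still lets you
# query them; 'html.parser' is the stdlib parser.
from bs4 import BeautifulSoup

messy = '<html><body><a href="/one">first<a href="/two">second'
soup = BeautifulSoup(messy, 'html.parser')
links = [a['href'] for a in soup.find_all('a')]
print(links)  # ['/one', '/two']
```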

answered Sep 22 '22 by John P