Which one is better and more useful for malformed HTML?
I cannot figure out how to use libxml2.
Thanks.
lxml provides a very simple and powerful API for parsing XML and HTML. It supports one-step parsing as well as step-by-step parsing using an event-driven API (currently only for XML).
While libxml2 (and thus lxml) can also parse broken HTML, BeautifulSoup is a bit more forgiving and has superior support for encoding detection. lxml can benefit from the parsing capabilities of BeautifulSoup through the lxml.html.soupparser module.
It is not uncommon that lxml/libxml2 parses and fixes broken HTML better, but BeautifulSoup has superior support for encoding detection. Which parser works better very much depends on the input. In the end, the lxml documentation itself notes: "The downside of using this parser is that it is much slower than the HTML parser of lxml."
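For illustration, here is a minimal sketch (the broken snippet is made up for the example) of lxml's own HTML parser repairing missing end tags:
>>> import lxml.html
>>> tree = lxml.html.fromstring('<div><p>one<p>two</div>')  # unclosed <p> tags
>>> lxml.html.tostring(tree)
b'<div><p>one</p><p>two</p></div>'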
lxml is a Python library that makes it easy to work with XML and HTML files, and it can also be used for web scraping. There are plenty of off-the-shelf XML parsers out there, but for better results developers sometimes prefer to write their own XML and HTML parsers.
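As a quick taste of the API (the snippet below is a made-up example, not taken from any real page):
>>> import lxml.html
>>> doc = lxml.html.fromstring('<ul><li>spam</li><li>eggs</li></ul>')
>>> doc.xpath('//li/text()')  # XPath query over the parsed tree
['spam', 'eggs']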
In the libxml2 page you can see this note:
Note that some of the Python purist dislike the default set of Python bindings, rather than complaining I suggest they have a look at lxml the more pythonic bindings for libxml2 and libxslt and check the mailing-list.
and in the lxml page this other one:
The lxml XML toolkit is a Pythonic binding for the C libraries libxml2 and libxslt. It is unique in that it combines the speed and XML feature completeness of these libraries with the simplicity of a native Python API, mostly compatible but superior to the well-known ElementTree API.
So essentially, with lxml you get exactly the same functionality, but with a Pythonic API compatible with the ElementTree library in the standard library (which means the standard library documentation will be useful for learning how to use lxml). That's why lxml is preferred over libxml2, even though the underlying implementation is the same.
Edit: Having said that, as other answers explain, to parse malformed HTML your best option is to use BeautifulSoup. One interesting thing to note is that, if you have lxml installed, BeautifulSoup will use it, as explained in the documentation for the new version:
If you don’t specify anything, you’ll get the best HTML parser that’s installed. Beautiful Soup ranks lxml’s parser as being the best, then html5lib’s, then Python’s built-in parser.
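In other words, you can either take that default or name a parser explicitly as the second argument. A small sketch of the difference (the exact output may vary slightly between versions):
>>> import bs4
>>> bs4.BeautifulSoup('<p>hi', 'lxml')         # lxml adds the implied html/body
<html><body><p>hi</p></body></html>
>>> bs4.BeautifulSoup('<p>hi', 'html.parser')  # built-in parser keeps the fragment
<p>hi</p>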
Anyway, even if BeautifulSoup uses lxml under the hood, you'll be able to parse broken HTML that you can't parse with lxml.etree directly. For example:
>>> import lxml.etree
>>> lxml.etree.fromstring('<html>')
...
XMLSyntaxError: Premature end of data in tag html line 1, line 1, column 7
However:
>>> import bs4
>>> bs4.BeautifulSoup('<html>', 'lxml')
<html></html>
Finally, note that lxml also provides an interface to the old version of BeautifulSoup, as follows:
>>> import lxml.html.soupparser
>>> lxml.html.soupparser.fromstring('<html>')
<Element html at 0x13bd230>
So at the end of the day, you'll probably be using both lxml and BeautifulSoup anyway. The only thing you've got to choose is which API you like the most.
Try BeautifulSoup instead. It is aimed at parsing poorly structured data.
http://pypi.python.org/pypi/BeautifulSoup
http://lxml.de/elementsoup.html