I have an HTML table like this:
<TABLE>
<TR>
    <TD><P>Name</P></TD>
    <TD><P>Fees</P></TD>
    <TD><P>Awards</P></TD>
    <TD><P>Total</P></TD>
</TR>
<TR>
    <TD><P>Tony</P></TD>
    <TD >7,800</TD>
    <TD >7</TD>
    <TD>15,400</TD>
</TR>
<TR>
    <TD><P>Paul</FONT></P></TD>
    <TD >7,800</TD>
    <TD >7</TD>
    <TD>15,400</TD>
</TR>
<TR>
    <TD><P>Richard</P></TD>
    <TD >7,800</TD>
    <TD >7</TD>
    <TD>15,400</TD>
</TR>
</TR>
</TABLE>
I want to extract the values of the table. I've tried the following:
import lxml.html
html = lxml.html.parse('html_table')
text_value = html.xpath('//tr/td/text()')
packages = html.xpath('//tr/td/p')
p_content = [p.text_content() for p in packages]
Is there any way to extract both the <p> text and the <td> text into a single list?
lxml provides a very simple and powerful API for parsing XML and HTML. It supports one-step parsing as well as step-by-step parsing using an event-driven API (currently only for XML).
Compared to html5lib, lxml is driven more by XML features than by HTML ones, depends on external C libraries, and is faster.
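For reference, here is a minimal sketch of the two parsing styles mentioned above; the tiny XML string is an invented example, not part of the question:

from io import BytesIO
import lxml.etree

# Invented toy document for illustration only.
xml = b"<root><item>1</item><item>2</item></root>"

# One-step parsing: build the whole tree in memory at once.
tree = lxml.etree.fromstring(xml)
print([el.text for el in tree.findall("item")])   # ['1', '2']

# Event-driven (step-by-step) parsing with iterparse -- XML only:
# elements are handed back as their end tags are seen.
for event, element in lxml.etree.iterparse(BytesIO(xml), events=("end",)):
    if element.tag == "item":
        print(element.text)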
You could do something like
>>> doc = """<TABLE>
... <TR>
...     <TD><P>Name</P></TD>
...     <TD><P>Fees</P></TD>
...     <TD><P>Awards</P></TD>
...     <TD><P>Total</P></TD>
... </TR>
... <TR>
...     <TD><P>Tony</P></TD>
...     <TD >7,800</TD>
...     <TD >7</TD>
...     <TD>15,400</TD>
... </TR>
... <TR>
...     <TD><P>Paul</FONT></P></TD>
...     <TD >7,800</TD>
...     <TD >7</TD>
...     <TD>15,400</TD>
... </TR>
... <TR>
...     <TD><P>Richard</P></TD>
...     <TD >7,800</TD>
...     <TD >7</TD>
...     <TD>15,400</TD>
... </TR>
... 
... </TR>
... </TABLE>"""
>>> import lxml.html
>>> root = lxml.html.fromstring(doc)
>>> root.xpath('//tr/td//text()')
['Name', 'Fees', 'Awards', 'Total', 'Tony', '7,800', '7', '15,400', 'Paul', '7,800', '7', '15,400', 'Richard', '7,800', '7', '15,400']
>>> 
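If you also want the values grouped per row rather than as one flat list, the same idea works with a relative expression inside each tr; a minimal sketch reusing the root element from the session above:

# Collect the text of each row's cells into its own list
# (reuses `root` from the session above).
rows = []
for tr in root.xpath('//tr'):
    cells = [t.strip() for t in tr.xpath('./td//text()') if t.strip()]
    if cells:
        rows.append(cells)
# rows[0] would be ['Name', 'Fees', 'Awards', 'Total']
# rows[1] would be ['Tony', '7,800', '7', '15,400']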
If you have two tables in the document, you can first loop over the tables and then use a relative XPath expression (with a leading .) to select the descendant text nodes of each table:
>>> doc = """<TABLE>
... <TR>
...     <TD><P>Name</P></TD>
...     <TD><P>Fees</P></TD>
...     <TD><P>Awards</P></TD>
...     <TD><P>Total</P></TD>
... </TR>
... <TR>
...     <TD><P>Tony</P></TD>
...     <TD >7,800</TD>
...     <TD >7</TD>
...     <TD>15,400</TD>
... </TR>
... <TR>
...     <TD><P>Paul</FONT></P></TD>
...     <TD >7,800</TD>
...     <TD >7</TD>
...     <TD>15,400</TD>
... </TR>
... <TR>
...     <TD><P>Richard</P></TD>
...     <TD >7,800</TD>
...     <TD >7</TD>
...     <TD>15,400</TD>
... </TR>
... 
... </TR>
... </TABLE>
... <TABLE>
... <TR>
...     <TD><P>Name</P></TD>
...     <TD><P>Fees</P></TD>
...     <TD><P>Awards</P></TD>
...     <TD><P>Total</P></TD>
... </TR>
... <TR>
...     <TD><P>Tony</P></TD>
...     <TD >7,800</TD>
...     <TD >7</TD>
...     <TD>15,400</TD>
... </TR>
... <TR>
...     <TD><P>Paul</FONT></P></TD>
...     <TD >7,800</TD>
...     <TD >7</TD>
...     <TD>15,400</TD>
... </TR>
... <TR>
...     <TD><P>Richard</P></TD>
...     <TD >7,800</TD>
...     <TD >7</TD>
...     <TD>15,400</TD>
... </TR>
... 
... </TR>
... </TABLE>"""
>>> import lxml.html
>>> root = lxml.html.fromstring(doc)
>>> root.xpath('//tr/td//text()')
['Name', 'Fees', 'Awards', 'Total', 'Tony', '7,800', '7', '15,400', 'Paul', '7,800', '7', '15,400', 'Richard', '7,800', '7', '15,400', 'Name', 'Fees', 'Awards', 'Total', 'Tony', '7,800', '7', '15,400', 'Paul', '7,800', '7', '15,400', 'Richard', '7,800', '7', '15,400']
>>> for tbl in root.xpath('//table'):
...     elements = tbl.xpath('.//tr/td//text()')
...     print(elements)
... 
['Name', 'Fees', 'Awards', 'Total', 'Tony', '7,800', '7', '15,400', 'Paul', '7,800', '7', '15,400', 'Richard', '7,800', '7', '15,400']
['Name', 'Fees', 'Awards', 'Total', 'Tony', '7,800', '7', '15,400', 'Paul', '7,800', '7', '15,400', 'Richard', '7,800', '7', '15,400']
>>> 
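Going one step further, the relative-XPath approach also makes it easy to pair each data row with the header row; a rough sketch, assuming the first row of every table is the header:

# For every table, zip the header row with each data row
# (assumes the first row is always the header).
for tbl in root.xpath('//table'):
    rows = [[t.strip() for t in tr.xpath('./td//text()') if t.strip()]
            for tr in tbl.xpath('.//tr')]
    rows = [r for r in rows if r]   # drop any empty rows
    header, data = rows[0], rows[1:]
    for row in data:
        print(dict(zip(header, row)))
# e.g. {'Name': 'Tony', 'Fees': '7,800', 'Awards': '7', 'Total': '15,400'}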