How do you parse a poorly formatted HTML file?

I have to parse a series of web pages in order to import data into an application. Each type of web page provides the same kind of data. The problem is that the HTML of each page is different, so the location of the data varies. Another problem is that the HTML code is poorly formatted, making it impossible to use a strict XML-style parser.

So far, the best strategy I can think of is to define a template for each kind of page, like:

Template A:

<html>
...
  <tr><td>Table column that is missing a td 
      <td> Another table column</td></tr>
  <tr><td>$data_item_1$</td>
...
</html>

Template B:

<html>
...
  <ul><li>Yet another poorly formatted page <li>$data_item_1$</td></tr>
...
</html>

This way I would only need a single parser for all the pages; it would compare each page against its template and retrieve $data_item_1$, $data_item_2$, etc. (a rough sketch of what I mean is below). Still, it is going to be a lot of work. Can you think of any simpler solution? Any library that can help?
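Roughly, I imagine turning each template into a regular expression, where the $name$ placeholders become capture groups. Here is a minimal Python sketch of that idea (the whitespace handling is just my assumption about how lenient the match needs to be):

import re

def template_to_regex(template):
    # Escape the literal HTML, let any run of whitespace in the
    # template match any run of whitespace in the page, and turn
    # $name$ placeholders into named capture groups.
    pattern = re.escape(template.strip())
    pattern = re.sub(r"(?:\\\s|\s)+", r"\\s+", pattern)
    pattern = re.sub(r"\\\$(\w+)\\\$", r"(?P<\1>.*?)", pattern)
    return re.compile(pattern, re.DOTALL | re.IGNORECASE)

def extract_fields(template, page_source):
    # Returns e.g. {'data_item_1': '...'} or None if the page
    # does not match the template.
    match = template_to_regex(template).search(page_source)
    return match.groupdict() if match else None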

Thanks

asked Apr 02 '09 by ivo

2 Answers

You can pass the page's source through Tidy to get a valid page. Tidy has bindings for a lot of programming languages. After you've done this, you can use your favorite parser/content-extraction technique.
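For example, a minimal Python sketch that shells out to the tidy command-line tool (note that tidy exits with status 1 for warnings and 2 for errors, so a plain check=True would be too strict):

import subprocess

def tidy_html(raw_html):
    # Convert messy HTML into well-formed XHTML via the tidy CLI.
    # --force-output yes makes tidy emit usable markup even when
    # it reports errors.
    result = subprocess.run(
        ["tidy", "-q", "-asxhtml", "--force-output", "yes"],
        input=raw_html, capture_output=True, text=True,
    )
    if not result.stdout:
        raise RuntimeError("tidy produced no output: " + result.stderr)
    return result.stdout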

answered Nov 12 '22 by Geo

I'd recommend the Html Agility Pack. It can handle poorly structured HTML while giving you XML-like selection using XPath. You would still have to template items or try different selections and analyze the results, but it will get you past the poor-structure hump.
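Html Agility Pack is a .NET library; if you're not on .NET, the same idea (a lenient parser plus XPath selection) exists elsewhere. For illustration, a sketch in Python using lxml.html (the XPath expression is just an example, not tied to your pages):

from lxml import html

def extract_cells(page_source):
    # lxml.html tolerates missing and unclosed tags, much like the
    # Html Agility Pack, and exposes XPath for selection.
    doc = html.fromstring(page_source)
    # e.g. grab the second cell of every table row
    return [cell.text_content().strip()
            for cell in doc.xpath("//tr/td[2]")]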

answered Nov 12 '22 by Pat