 

Some help scraping a page in Java

I need to scrape a web page using Java, and I've read that regex is a pretty inefficient way of doing it, and that one should parse the page into a DOM Document and navigate that instead.

I've tried reading the documentation but it seems too extensive and I don't know where to begin.

Could you show me how to scrape this table into an array? I can try to figure out my way from there. A snippet/example would do just fine too.

Thanks.

asked Jan 02 '11 by Mridang Agarwalla

2 Answers

You can try jsoup: Java HTML Parser. It is an excellent library with good sample code.
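For instance, here is a minimal sketch (the class name is mine, and I've assumed the sample-table URL and markup from the other answer) that pulls the file names out of that table:

import java.util.ArrayList;
import java.util.List;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class JsoupTableScraper {
    public static void main(String[] args) throws Exception {
        // jsoup fetches the page and fixes malformed HTML on the fly
        Document doc = Jsoup.connect("http://www.cs.grinnell.edu/~walker/fluency-book/labs/sample-table.html").get();

        // CSS selector: anchors directly inside td cells with valign="top"
        List<String> filenames = new ArrayList<String>();
        for (Element link : doc.select("td[valign=top] > a")) {
            filenames.add(link.text());
        }

        System.out.println(filenames);
    }
}

Compared with the JTidy approach in the other answer, jsoup uses CSS selectors instead of XPath, which many people find easier to read.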

answered Oct 18 '22 by dsr


  1. Transform the web page you are trying to scrape into an XHTML document. There are several options for this in Java, such as JTidy and HTMLCleaner; these tools will also automatically fix malformed HTML (e.g., close unclosed tags). Both work very well, but I prefer JTidy because it integrates better with Java's DOM API.
  2. Extract the required information using XPath expressions.

Here is a working example using JTidy and the web page you provided, which extracts all file names from the table.

import java.net.URL;
import java.util.ArrayList;
import java.util.List;

import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathExpression;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.w3c.tidy.Tidy;

public class JTidyScraper {
    public static void main(String[] args) throws Exception {
        // Create a new JTidy instance and set options
        Tidy tidy = new Tidy();
        tidy.setXHTML(true);
        tidy.setQuiet(true);          // suppress Tidy's progress messages
        tidy.setShowWarnings(false);  // suppress warnings about the malformed HTML

        // Parse the HTML page into a DOM document
        // (null output stream: don't echo the tidied markup to stdout)
        URL url = new URL("http://www.cs.grinnell.edu/~walker/fluency-book/labs/sample-table.html");
        Document doc = tidy.parseDOM(url.openStream(), null);

        // Use XPath to obtain whatever you want from the (X)HTML
        XPath xpath = XPathFactory.newInstance().newXPath();
        XPathExpression expr = xpath.compile("//td[@valign = 'top']/a/text()");
        NodeList nodes = (NodeList) expr.evaluate(doc, XPathConstants.NODESET);

        // Collect the text of each matching node
        List<String> filenames = new ArrayList<String>();
        for (int i = 0; i < nodes.getLength(); i++) {
            filenames.add(nodes.item(i).getNodeValue());
        }

        System.out.println(filenames);
    }
}

The result will be [Integer Processing:, Image Processing:, A Photo Album:, Run-time Experiments:, More Run-time Experiments:] as expected.

Another cool tool that you can use is Web Harvest. It basically does everything I did above, but uses an XML file to configure the extraction pipeline.
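From memory of Web Harvest's sample configurations (so treat the element names as an assumption rather than a guarantee), the equivalent pipeline would look roughly like this:

<config charset="UTF-8">
    <!-- Fetch the page, tidy it into XML, then apply the same XPath -->
    <var-def name="filenames">
        <xpath expression="//td[@valign = 'top']/a/text()">
            <html-to-xml>
                <http url="http://www.cs.grinnell.edu/~walker/fluency-book/labs/sample-table.html"/>
            </html-to-xml>
        </xpath>
    </var-def>
</config>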

answered Oct 18 '22 by João Silva