
How to isolate a single element from a scraped web page in R

I want to use R to scrape this page (http://www.fifa.com/worldcup/archive/germany2006/results/matches/match=97410001/report.html) and others like it, to get the goal scorers and times.

So far, this is what I've got:

require(RCurl)
require(XML)

theURL <- "http://www.fifa.com/worldcup/archive/germany2006/results/matches/match=97410001/report.html"
webpage <- getURL(theURL, header = FALSE, verbose = TRUE)           # fetch the raw page source
webpagecont <- readLines(tc <- textConnection(webpage)); close(tc)  # split the response into lines

pagetree <- htmlTreeParse(webpagecont, error = function(...){}, useInternalNodes = TRUE)  # parse into an HTML tree

and the pagetree object now contains a pointer to my parsed html (I think). The part I want is:

<div class="cont"><ul>
<div class="bold medium">Goals scored</div>
        <li>Philipp LAHM (GER) 6', </li>
        <li>Paulo WANCHOPE (CRC) 12', </li>
        <li>Miroslav KLOSE (GER) 17', </li>
        <li>Miroslav KLOSE (GER) 61', </li>
        <li>Paulo WANCHOPE (CRC) 73', </li>
        <li>Torsten FRINGS (GER) 87'</li>
</ul></div>

But I'm now lost as to how to isolate them, and frankly xpathSApply and xpathApply confuse the bejeebers out of me!

So, does anyone know how to formulate a command to suck out the element contained within the <div class="cont"> tags?

Asked Jun 08 '10 by PaulHurleyuk

1 Answer

These questions are very helpful when dealing with web scraping and XML in R:

  1. Scraping html tables into R data frames using the XML package
  2. How to transform XML data into a data.frame?

As for your particular example, while I'm not sure exactly what you want the output to look like, this gets the "Goals scored" entry as a character vector:

library(XML)

theURL <- "http://www.fifa.com/worldcup/archive/germany2006/results/matches/match=97410001/report.html"
fifa.doc <- htmlParse(theURL)                                        # parse the page straight from the URL
fifa <- xpathSApply(fifa.doc, "//*/div[@class='cont']", xmlValue)    # text of every div with class 'cont'
goals.scored <- grep("Goals scored", fifa, value = TRUE)             # keep the entry that lists the goals

The xpathSApply function gets all the values that match the given XPath expression and returns them as a vector. Note how I'm looking for a div with class='cont'. Filtering on class attributes like this is often a good way to navigate an HTML document, because they tend to be reliable markers.
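If you'd rather get the individual entries instead of one long string, you could also point the XPath straight at the <li> nodes. This is only a sketch based on the fragment you pasted, so the predicate may need adjusting against the live page:

# Sketch: pull each <li> under the "Goals scored" block directly.
# Assumes the structure shown in the question's HTML fragment.
goal.nodes <- xpathSApply(fifa.doc,
    "//div[@class='cont'][.//div[@class='bold medium'] = 'Goals scored']//li",
    xmlValue)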

You can clean this up however you want:

> gsub("Goals scored", "", strsplit(goals.scored, ", ")[[1]])
[1] "Philipp LAHM (GER) 6'"    "Paulo WANCHOPE (CRC) 12'" "Miroslav KLOSE (GER) 17'" "Miroslav KLOSE (GER) 61'" "Paulo WANCHOPE (CRC) 73'"
[6] "Torsten FRINGS (GER) 87'"
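From there, if you want the scorers, teams, and times as separate columns, one possible next step is to split each entry with a few regular expressions. The patterns below assume every entry follows the "Name (TEAM) 45'" format shown above, so treat it as a sketch rather than something guaranteed to cover every match report:

goal.strings <- gsub("Goals scored", "", strsplit(goals.scored, ", ")[[1]])
goals.df <- data.frame(
  player = sub(" \\([A-Z]{3}\\) [0-9]+'.*$", "", goal.strings),        # drop the "(GER) 6'" tail
  team   = sub("^.*\\(([A-Z]{3})\\).*$", "\\1", goal.strings),         # capture the 3-letter team code
  minute = as.numeric(sub("^.*\\) ([0-9]+)'.*$", "\\1", goal.strings)),# pull out the minute
  stringsAsFactors = FALSE
)

goals.df then has one row per goal, which should give you the scorers and times you were after.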
Answered Nov 15 '22 by Shane