I am trying to scrape a large number of web pages so I can analyse them later. Since the number of URLs is huge, I decided to use the parallel package along with XML.
Specifically, I am using the htmlParse() function from XML, which works fine when used with sapply, but generates empty objects of class HTMLInternalDocument when used with parSapply.
url1<- "http://forums.philosophyforums.com/threads/senses-of-truth-63636.html"
url2<- "http://forums.philosophyforums.com/threads/the-limits-of-my-language-impossibly-mean-the-limits-of-my-world-62183.html"
url3<- "http://forums.philosophyforums.com/threads/how-language-models-reality-63487.html"
myFunction<- function(x){
cl<- makeCluster(getOption("cl.cores",detectCores()))
ok<- parSapply(cl=cl,X=x,FUN=htmlParse)
return(ok)
}
urls<- c(url1,url2,url3)
# Works
output1 <- sapply(urls, function(x) htmlParse(x))
str(output1[[1]])
# Classes 'HTMLInternalDocument', 'HTMLInternalDocument', 'XMLInternalDocument',
#   'XMLAbstractDocument', 'oldClass' <externalptr>
output1[[1]]
# Doesn't work
myFunction <- function(x){
  cl <- makeCluster(getOption("cl.cores", detectCores()))
  ok <- parSapply(cl = cl, X = x, FUN = htmlParse)
  stopCluster(cl)
  return(ok)
}

output2 <- myFunction(urls)
str(output2[[1]])
# Classes 'HTMLInternalDocument', 'HTMLInternalDocument', 'XMLInternalDocument',
#   'XMLAbstractDocument', 'oldClass' <externalptr>
output2[[1]]
# empty
Thanks.
You can use getURIAsynchronous from the RCurl package, which lets the caller download multiple URIs at the same time.
library(RCurl)
library(XML)

get.asynch <- function(urls){
  txt <- getURIAsynchronous(urls)
  ## This part can easily be parallelized; I am just using lapply here
  ## as a first attempt (see the parallel sketch below).
  res <- lapply(txt, function(x){
    doc <- htmlParse(x, asText = TRUE)
    xpathSApply(doc, "/html/body/h2[2]", xmlValue)
  })
  res
}
get.synch <- function(urls){
  lapply(urls, function(x){
    doc <- htmlParse(x)
    res2 <- xpathSApply(doc, "/html/body/h2[2]", xmlValue)
    res2
  })
}
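The lapply over the downloaded text in get.asynch is the part flagged as easily parallelizable. A minimal sketch of that idea, assuming the parallel package and using an illustrative helper name get.parallel, could look like this:

library(RCurl)
library(XML)
library(parallel)

## Hypothetical helper: download asynchronously, then parse on a cluster.
get.parallel <- function(urls){
  txt <- getURIAsynchronous(urls)                  # fetch all pages concurrently
  cl <- makeCluster(getOption("cl.cores", detectCores()))
  on.exit(stopCluster(cl))
  clusterEvalQ(cl, library(XML))                   # workers need XML loaded
  ## Parse and extract on the workers; only character vectors come back,
  ## so the results serialize to the master without problems.
  parLapply(cl, txt, function(x){
    doc <- htmlParse(x, asText = TRUE)
    xpathSApply(doc, "/html/body/h2[2]", xmlValue)
  })
}

get.parallel(urls) should return the same list of h2 values as get.asynch(urls), just with the parsing spread over the cluster as well.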
Here is some benchmarking for 100 URLs; the asynchronous version roughly halves the total download-and-parse time.
library(microbenchmark)
uris <- c("http://www.omegahat.org/RCurl/index.html")
urls <- replicate(100, uris)
microbenchmark(get.asynch(urls), get.synch(urls), times = 1)

Unit: seconds
             expr      min       lq   median       uq      max neval
 get.asynch(urls) 22.53783 22.53783 22.53783 22.53783 22.53783     1
  get.synch(urls) 39.50615 39.50615 39.50615 39.50615 39.50615     1
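Coming back to the original parSapply problem: htmlParse() returns an external pointer to a C-level libxml2 document, and external pointers cannot be meaningfully serialized back from the worker processes, which is the likely reason output2[[1]] prints as empty. A minimal sketch of a workaround under that assumption, reusing the illustrative XPath from above with a hypothetical helper name scrape.parallel, is to do the parsing and extraction inside the workers and return only ordinary R objects:

library(parallel)
library(XML)

## Hypothetical helper: extract values on the workers instead of
## shipping parsed documents (external pointers) back to the master.
scrape.parallel <- function(urls){
  cl <- makeCluster(getOption("cl.cores", detectCores()))
  on.exit(stopCluster(cl))
  clusterEvalQ(cl, library(XML))
  parSapply(cl, urls, function(u){
    doc <- htmlParse(u)
    xpathSApply(doc, "/html/body/h2[2]", xmlValue)  # plain character vector
  })
}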