
R: How can I increase my rvest scraper's speed?

I just started scraping with R's rvest library. Perhaps too boldly, I started with a query covering 3,206 sub-pages, from each of which I want to extract a string.

[screenshot: the string to scrape]

The problem is: duration.

My question is: can I optimize my script (below) to speed up its execution?

Background:

Looping over just 3 integers works fine, but the script for all 3,206 sub-pages has now been running for a long time. I don't know any Python (which I might change, since I hear there is a library called aiohttp for asynchronous requests). If there is no other way, I'd be grateful if someone could provide a link to a good tutorial or an alternative solution.
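(For what it's worth, R itself has an asynchronous HTTP client: the curl package's multi interface can issue many requests concurrently. A minimal sketch, assuming a character vector of page URLs like the ones built in the script below; the variable names are illustrative:)

library(curl)
# Fetch many pages concurrently; `pages` collects each response
# body, keyed by URL.
pages <- list()
pool <- new_pool()
for (u in page_urls) {
  curl_fetch_multi(
    u,
    done = function(res) pages[[res$url]] <<- rawToChar(res$content),
    pool = pool
  )
}
multi_run(pool = pool)  # blocks until all requests have finished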

Script

library(rvest)
library(data.table)

# READ IN HTML
# Link: http://globalenergyobservatory.org/list.php?db=PowerPlants&type=Hydro
hydro <- read_html("http://globalenergyobservatory.org/list.php?db=PowerPlants&type=Hydro")

# GET ATTRIBUTES
attributes <- hydro %>%
  html_nodes("#list a") %>%
  html_attrs()

# WRITE URLs (the first attribute of each node is the href)
urls <- list()
for (i in 1:3206) {
  da  <- unlist(attributes[i])
  dat <- da[1]
  urls[[i]] <- paste("http://www.globalenergyobservatory.org/", dat, sep = "")
}

# GET ABSTRACTS (one HTTP request per sub-page; this is the slow part)
abstracts <- list()
for (i in 1:3206) {
  to_use <- read_html(urls[[i]])
  abstracts[[i]] <- to_use %>%
    html_nodes("#Abstract_Block td") %>%
    html_text()
}
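(Side note: the URL-building loop can be vectorized; only the second loop, with its 3,206 sequential HTTP requests, is the real bottleneck. A minimal sketch producing the same URLs as the first loop, but as a character vector:)

# Vectorized equivalent of the URL-building loop: pull the href
# attribute of each node directly instead of indexing into html_attrs().
urls <- paste0("http://www.globalenergyobservatory.org/",
               hydro %>% html_nodes("#list a") %>% html_attr("href"))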
asked Apr 06 '17 by Naibaf


1 Answer

All great comments. I would also suggest parallelizing the work; below, your method and a parallelized version are timed side by side on the first 100 pages.

library(rvest)
library(data.table)

# READ IN HTML
# Link: http://globalenergyobservatory.org/list.php?db=PowerPlants&type=Hydro
hydro <- read_html("http://globalenergyobservatory.org/list.php?db=PowerPlants&type=Hydro")

# GET ATTRIBUTES (each node carries two attributes; keep the odd
# elements of the flattened vector, which are the hrefs)
attrs <- unlist(hydro %>% html_nodes("#list a") %>% html_attrs())
attributes <- paste0("http://www.globalenergyobservatory.org/",
                     attrs[seq_along(attrs) %% 2 > 0])

# YOUR METHOD
time <- proc.time()
abstracts <- 0
for (i in 1:100) {
  page <- html_session(attributes[i])
  abstracts[i] <- html_nodes(read_html(page), css = "#Abstract_Block td") %>% html_text()
}
print(proc.time() - time)


# PROPOSED METHOD
time <- proc.time()
library(doSNOW)
library(foreach)
cluster <- makeCluster(2, type = "SOCK")
registerDoSNOW(cluster)

# Worker function: scrape one page; rvest is loaded inside the
# function so it is available on every worker.
abstracts <- function(attributes) {
  library(rvest)
  page <- html_session(attributes)
  html_nodes(read_html(page), css = "#Abstract_Block td") %>% html_text()
}

big_list <- unlist(foreach(i = 1:100) %dopar% abstracts(attributes[i]))
print(proc.time() - time)
stopCluster(cluster)
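(For completeness, roughly the same approach works with base R's parallel package, avoiding the doSNOW dependency. A minimal sketch under the same assumptions; the helper name get_abstract is illustrative:)

library(parallel)
cl <- makeCluster(2)
clusterEvalQ(cl, library(rvest))  # load rvest on every worker

# Illustrative helper: scrape the abstract of one page.
get_abstract <- function(url) {
  read_html(url) %>%
    html_nodes("#Abstract_Block td") %>%
    html_text()
}

big_list <- unlist(parLapply(cl, attributes[1:100], get_abstract))
stopCluster(cl)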

For your method, the output is as follows:

user  system elapsed 
6.01    0.31   61.48

For my method:

user  system elapsed 
0.26    0.08   16.33 

That reduces the elapsed time from 61.48 s to 16.33 s, a reduction of roughly 73%.
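(Since the job is network-bound, as the small user time relative to the elapsed time suggests, more workers should help further. A hedged sketch reusing the abstracts() function above; the worker count, and the assumption that the server tolerates the extra concurrency, are mine:)

# Scale out: one worker per core, minus one for the main process.
# Assumption: the remote server does not throttle this level of concurrency.
n_workers <- max(1, parallel::detectCores() - 1)
cluster <- makeCluster(n_workers, type = "SOCK")
registerDoSNOW(cluster)
big_list <- unlist(foreach(i = seq_along(attributes)) %dopar% abstracts(attributes[i]))
stopCluster(cluster)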

answered Oct 15 '22 by Bharath