 

R - How to make a click on a webpage using rvest or RCurl


I want to download data from this webpage.

The data can be easily scraped with rvest.

The code might look like this:

library(rvest)
library(pipeR)

url <- "http://www.tradingeconomics.com/"
css <- "#ctl00_ContentPlaceHolder1_defaultUC1_CurrencyMatrixAllCountries1_GridView1"

data <- url %>>%
  html() %>>%          # parse the page (read_html() in current rvest)
  html_nodes(css) %>>%
  html_table()

But there is a problem with webpages like this one.

There is a + button that shows data for all the countries, but by default only 50 countries are displayed.

So with the code above, I can only scrape data for those 50 countries.

The + button is implemented in JavaScript, so I want to know whether there is a way in R to click the button and then scrape the full table.

yan zhuang asked Mar 21 '15 17:03


People also ask

How do I web scrape a website in R?

In general, web scraping in R (or any other language) boils down to three steps: get the HTML for the page you want to scrape; decide which part of the page you want to read and work out what HTML/CSS selector you need to select it; then select that HTML and analyze it the way you need, as in the sketch below.
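A minimal sketch of those three steps, assuming rvest is installed; the URL is the one from the question, and the "table" selector is only illustrative:

library(rvest)

# Step 1: get the HTML for the page
page <- read_html("http://www.tradingeconomics.com/")

# Step 2: pick a selector found with the browser's dev tools ("table" is a placeholder)
nodes <- html_nodes(page, "table")

# Step 3: analyze the selection -- here, parse each table into a data frame
tables <- html_table(nodes)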

What is the purpose of Rvest package in R?

Overview. rvest helps you scrape (or harvest) data from web pages. It is designed to work with magrittr to make it easy to express common web-scraping tasks, inspired by libraries like Beautiful Soup and RoboBrowser; see the pipeline sketch below.
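The same three steps written as a magrittr pipeline, again with a placeholder selector:

library(rvest)
library(magrittr)

"http://www.tradingeconomics.com/" %>%
  read_html() %>%
  html_nodes("table") %>%   # placeholder selector; use one from dev tools
  html_table()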

Can you use R for web scraping?

There are several web-scraping tools available, and many languages have libraries that support scraping. Among them, R is a popular choice for web scraping because of its rich library ecosystem, ease of use, and dynamic typing.


1 Answer

Sometimes it's better to attack the problem at the AJAX web-request level. For this site, you can use Chrome's dev tools and watch the requests. To build the table (the whole table, too) the page makes a POST to the site with various ajax-y parameters. Just replicate that request, do a bit of data munging on the response, and you're good to go:

library(httr)
library(rvest)
library(dplyr)

# Replicate the POST the "+" button fires; the form fields below were copied
# from the request recorded in Chrome's dev tools when clicking the button
res <- POST("http://www.tradingeconomics.com/",
            encode="form",
            user_agent("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.50 Safari/537.36"),
            add_headers(`Referer`="http://www.tradingeconomics.com/",
                        `X-MicrosoftAjax`="Delta=true"),
            body=list(
              # the ASP.NET UpdatePanel and the LinkButton ("+") being "clicked"
              `ctl00$AjaxScriptManager1$ScriptManager1`="ctl00$ContentPlaceHolder1$defaultUC1$CurrencyMatrixAllCountries1$UpdatePanel1|ctl00$ContentPlaceHolder1$defaultUC1$CurrencyMatrixAllCountries1$LinkButton1",
              `__EVENTTARGET`="ctl00$ContentPlaceHolder1$defaultUC1$CurrencyMatrixAllCountries1$LinkButton1",
              `srch-term`="",
              `ctl00$ContentPlaceHolder1$defaultUC1$CurrencyMatrixAllCountries1$GridView1$ctl01$DropDownListCountry`="top",
              `ctl00$ContentPlaceHolder1$defaultUC1$CurrencyMatrixAllCountries1$ParameterContinent`="",
              # ask for a full (non-delta) response so the whole page comes back
              `__ASYNCPOST`="false"))


res_t <- content(res, as="text")
# drop the first line of the response (non-HTML ASP.NET framing),
# then stitch the remaining HTML back together
res_h <- paste0(unlist(strsplit(res_t, "\r\n"))[-1], sep="", collapse="\n")

css <- "#ctl00_ContentPlaceHolder1_defaultUC1_CurrencyMatrixAllCountries1_GridView1"

tab <- html(res_h) %>%   # html() parses the string; read_html() in current rvest
  html_nodes(css) %>%
  html_table()

# drop the site-navigation column that gets scraped along with the data
tab[[1]]$COUNTRIESWORLDAMERICAEUROPEASIAAUSTRALIAAFRICA <- NULL

glimpse(tab[[1]])

An alternative would be to use RSelenium to load the page, click the "+", and then scrape the resulting table, along the lines of the sketch below.
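A rough sketch of that route, assuming a Selenium server can be started locally and that the "+" button's element id matches the LinkButton1 name seen in the POST body above (verify the real id in dev tools):

library(RSelenium)
library(rvest)
library(magrittr)

# rsDriver() starts a local Selenium server plus a browser client
rD <- rsDriver(browser = "firefox")
remDr <- rD$client

remDr$navigate("http://www.tradingeconomics.com/")

# the id below is an assumption derived from the __EVENTTARGET field above
btn <- remDr$findElement(
  using = "css selector",
  "#ctl00_ContentPlaceHolder1_defaultUC1_CurrencyMatrixAllCountries1_LinkButton1")
btn$clickElement()

# hand the rendered page over to rvest
tab <- remDr$getPageSource()[[1]] %>%
  read_html() %>%
  html_nodes("#ctl00_ContentPlaceHolder1_defaultUC1_CurrencyMatrixAllCountries1_GridView1") %>%
  html_table()

remDr$close()
rD$server$stop()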

hrbrmstr answered Sep 24 '22 10:09