Web scraping an IIS-based website

Tags:

r

web-scraping

I am using R with the rvest library to scrape a table from this site:

#install.packages("rvest", dependencies = TRUE) 
library(rvest) 
OPMpage <- read_html("https://www.opm.gov/policy-data-oversight/data-analysis-documentation/federal-employment-reports/historical-tables/total-government-employment-since-1962/")

I receive this error:

Error in open.connection(x, "rb") : HTTP error 403.

What am I doing wrong?

asked Feb 29 '16 by Feyzi Bagirov

1 Answer

The server is forbidding access because the user-agent string in your request headers is NULL. (Normally it's a string identifying the browser making the request, though some browsers let users spoof other browsers.) Using the httr package, you can set a user-agent string:

library(httr)
library(rvest)

url <- "https://www.opm.gov/policy-data-oversight/data-analysis-documentation/federal-employment-reports/historical-tables/total-government-employment-since-1962/"

x <- GET(url, add_headers('user-agent' = 'Gov employment data scraper ([[your email]])'))

Passed to a GET request, add_headers lets you set whatever headers you like. You could also use the more specific user_agent function in place of add_headers, if the user-agent string is all you want to set.

In this case any user-agent string will work, but it's polite (see the link at the end) to say who you are and what you want.
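As a sketch, the user_agent shortcut mentioned above looks like this (the contact string is a placeholder for your own details):

library(httr)

url <- "https://www.opm.gov/policy-data-oversight/data-analysis-documentation/federal-employment-reports/historical-tables/total-government-employment-since-1962/"

# user_agent() sets only the User-Agent header, which is all this
# request needs; substitute your own name and contact address.
x <- GET(url, user_agent("Gov employment data scraper (you@example.com)"))
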

Now you can use rvest to parse the HTML and pull out the table. You'll need a way to select the relevant table; looking at the HTML, I saw it had class = "DataTable", but you can also use the SelectorGadget (see the rvest vignettes) to find a valid CSS or XPath selector. Thus

x %>% 
    read_html() %>% 
    html_node('.DataTable') %>% 
    html_table()

gives you a nice (if not totally clean) data.frame.
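If you want fully numeric columns, one possible cleanup step is to strip the thousands separators and coerce to numeric. This is a sketch that assumes the figures come through as comma-formatted character strings and that the first column is a year or label column; check names() and str() on your actual result before applying it:

tbl <- x %>% 
    read_html() %>% 
    html_node('.DataTable') %>% 
    html_table()

# Remove commas (e.g. "2,898,000" -> "2898000") and convert to numeric
# for every column after the first; adjust the index to fit the real table.
tbl[-1] <- lapply(tbl[-1], function(col) as.numeric(gsub(",", "", col)))
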

Note: Scrape responsibly and legally. Given that OPM is a government source, it's in the public domain, but that's not the case with a lot of the web. Always read any terms of service, plus this nice post on how to scrape responsibly.

answered Nov 04 '22 by alistaire