
Scraping Wunderground without the API, using Python

I'm not very experienced in the world of scraping data, so the problem here may be obvious to some.

What I want is to scrape historical daily weather data from wunderground.com without paying for the API. Maybe it's not possible at all.

My method is simply to use requests.get and save the whole text into a file (code below).

Instead of the tables that are visible in the web browser (see the screenshot below), the result is a file that has almost everything except those tables. Something like this:

Summary
No data recorded
Daily Observations
No Data Recorded

What is weird is that if I use 'Save Page As' in Firefox, the result depends on whether I choose 'Web Page, HTML only' or 'Web Page, complete': the latter includes the data I'm interested in, the former does not.

Is it possible that this is on purpose so nobody scrapes their data? I just wanted to make sure there isn't a workaround for this problem.

Thanks in advance, Juan

Note: I tried using the user-agent field to no avail.
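For reference, that attempt looked roughly like this (a sketch; the User-Agent string here is a placeholder, not the exact one I tried):

import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}  # placeholder UA
r = requests.get(url, headers=headers)  # same url as below; the response was unchanged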

# Note: I run > set PYTHONIOENCODING=utf-8 before executing python
import requests

# URL with wunderground weather information for a specific date:
date = '2019-03-12'
url = 'https://www.wunderground.com/history/daily/sd/khartoum/HSSS/date/' + date
r = requests.get(url)

# Write a file to check if the tables are being retrieved:
with open('test.html', 'wb') as testfile:
    testfile.write(r.text.encode('utf-8'))
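For reference, a quick way to confirm this in code rather than by saving files (a sketch, assuming BeautifulSoup and lxml are installed); the populated data cells are simply missing from the raw response:

from bs4 import BeautifulSoup

soup = BeautifulSoup(r.text, 'lxml')
tables = soup.find_all('table')
# With plain requests the tables come back empty (or missing entirely):
print(len(tables), 'tables;',
      sum(len(t.find_all('td')) for t in tables), 'data cells')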

Screenshot of the tables I want to scrape.


UPDATE: FOUND A SOLUTION

Thanks for pointing me to the selenium module; it is exactly the solution I needed. The code below extracts all the tables present at the URL for a given date (as seen when visiting the site normally). It still needs modifications to scrape over a list of dates and to organize the CSV files it creates (see the sketch after the code).

Note: geckodriver.exe is needed in the working directory.

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
import re

# URL with wunderground weather information
url = 'https://www.wunderground.com/history/daily/sd/khartoum/HSSS/date/2019-3-12'

# Point Selenium at the local Firefox binary and start a webdriver session:
bi = FirefoxBinary(r'C:\Program Files (x86)\Mozilla Firefox\firefox.exe')
br = webdriver.Firefox(firefox_binary=bi)

# This starts an instance of Firefox at the specified URL:
br.get(url)

# At this point the rendered page source is plain HTML and can be
# parsed with BeautifulSoup:
sopa = BeautifulSoup(br.page_source, 'lxml')

# Close the firefox instance started before:
br.quit()

# I'm only interested in the tables contained on the page:
tablas = sopa.find_all('table')

# Write all the tables into csv files:
for i in range(len(tablas)):
    out_file = open('wunderground' + str(i + 1) + '.csv', 'w', encoding='utf-8')
    tabla = tablas[i]

    # ---- Write the table header: ----
    table_head = tabla.find_all('th')
    output_head = []
    for head in table_head:
        output_head.append(head.text.strip())

    # Quote the fields and strip all whitespace (note: this also removes
    # spaces inside the values) before writing:
    encabezado = '"' + '";"'.join(output_head) + '"'
    encabezado = re.sub(r'\s', '', encabezado) + '\n'
    out_file.write(encabezado)

    # ---- Write the rows (skipping the header row): ----
    filas = tabla.find_all('tr')
    for j in range(1, len(filas)):
        table_row = filas[j]
        columns = table_row.find_all('td')
        output_row = []
        for column in columns:
            output_row.append(column.text.strip())

        # Same cleaning and formatting as for the header:
        fila = '"' + '";"'.join(output_row) + '"'
        fila = re.sub(r'\s', '', fila) + '\n'
        out_file.write(fila)

    out_file.close()
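To scrape over a list of dates, the extraction above can be wrapped in a function and driven by a simple loop (a minimal sketch under the same Firefox setup; the date range, the scrape_dates function name, and the file naming are examples, not part of the original code):

from datetime import date, timedelta
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary

def scrape_dates(start, end):
    # One Firefox instance for the whole run is cheaper than one per date:
    bi = FirefoxBinary(r'C:\Program Files (x86)\Mozilla Firefox\firefox.exe')
    br = webdriver.Firefox(firefox_binary=bi)
    base = 'https://www.wunderground.com/history/daily/sd/khartoum/HSSS/date/'
    try:
        d = start
        while d <= end:
            br.get(base + d.isoformat())
            sopa = BeautifulSoup(br.page_source, 'lxml')
            # Reuse the table-extraction loop above here, prefixing each
            # output file with the date, e.g. 'wunderground_2019-03-12_1.csv'.
            # A short wait (as in @QHarr's answer) may be needed so the
            # JavaScript has time to fill the tables.
            d += timedelta(days=1)
    finally:
        br.quit()

scrape_dates(date(2019, 3, 1), date(2019, 3, 12))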

Extra: @QHarr's answer works beautifully, but I needed a couple of modifications to use it, because I use Firefox on my PC. Note that for this to work I had to put the geckodriver.exe file in my working directory. Here's the code:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd

url = 'https://www.wunderground.com/history/daily/sd/khartoum/HSSS/date/2019-03-12'
bi = FirefoxBinary(r'C:\Program Files (x86)\Mozilla Firefox\firefox.exe')
driver = webdriver.Firefox(firefox_binary=bi)
# driver = webdriver.Chrome()
driver.get(url)
tables = WebDriverWait(driver,20).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "table")))
for table in tables:
    newTable = pd.read_html(table.get_attribute('outerHTML'))
    if newTable:
        print(newTable[0].fillna(''))
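To keep the results instead of just printing them, each table can go to its own CSV file, with the browser closed afterwards (a sketch; the file naming is just an example):

for i, table in enumerate(tables):
    newTable = pd.read_html(table.get_attribute('outerHTML'))
    if newTable:
        newTable[0].fillna('').to_csv('wunderground' + str(i + 1) + '.csv', index=False)
driver.quit()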


2 Answers

The tables are rendered client-side by JavaScript after the initial page load, which is why plain requests never sees them. You could use Selenium to wait for the page to render, then pandas read_html to get the tables:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd

url = 'https://www.wunderground.com/history/daily/sd/khartoum/HSSS/date/2019-03-12'
driver = webdriver.Chrome()
driver.get(url)
tables = WebDriverWait(driver,20).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "table")))
for table in tables:
    newTable = pd.read_html(table.get_attribute('outerHTML'))
    if newTable:
        print(newTable[0].fillna(''))
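The WebDriverWait call polls for up to 20 seconds until at least one table element is present in the DOM, which covers the delay while the JavaScript renders the data. If you'd rather not have a browser window pop up, headless mode should also work (a sketch, assuming a recent Chrome and matching chromedriver):

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument('--headless')  # render the page without opening a window
driver = webdriver.Chrome(options=options)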

QHarr


They have added some additional tables at the top, so just searching for table elements will not work now. I used a class-name locator with the table's class list to fetch the records, and it's working fine:

tables = WebDriverWait(driver,20).until(EC.presence_of_all_elements_located((By.CLASS_NAME, "mat-table.cdk-table.mat-sort.ng-star-inserted")))
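Using the same imports and driver as in @QHarr's answer, the same locator can also be written as an explicit CSS selector, which makes the compound class list more obvious (an equivalent sketch; the class names may change whenever the site is updated):

tables = WebDriverWait(driver, 20).until(
    EC.presence_of_all_elements_located(
        (By.CSS_SELECTOR, "table.mat-table.cdk-table.mat-sort.ng-star-inserted")))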

moulindra goswami