
Scraping Wikipedia tables with Python selectively

I'm having trouble parsing a wiki table and hope someone who has done this before can give me advice. From List_of_current_heads_of_state_and_government I need the countries (that part works with the code below) and then only the first-mentioned head of state plus their name. I am not sure how to isolate the first mention, since they all come in one cell, and my attempt to pull the names gives me this error: IndexError: list index out of range. I will appreciate your help!

import requests
from bs4 import BeautifulSoup

wiki = "https://en.wikipedia.org/wiki/List_of_current_heads_of_state_and_government"
website_url = requests.get(wiki).text
soup = BeautifulSoup(website_url,'lxml')

my_table = soup.find('table',{'class':'wikitable plainrowheaders'})
#print(my_table)

states = []
titles = []
names = []
# countries – this part works
for row in my_table.find_all('tr')[1:]:
    state_cell = row.find_all('a')[0]
    states.append(state_cell.text)
print(states)

# titles of the heads of state
for row in my_table.find_all('td'):
    title_cell = row.find_all('a')[0]
    titles.append(title_cell.text)
print(titles)

# names – this is where the IndexError is raised
for row in my_table.find_all('td'):
    name_cell = row.find_all('a')[1]
    names.append(name_cell.text)
print(names)

The desired output would be a pandas DataFrame:

State | Title | Name |
asked May 15 '18 by aviss

People also ask

Is scraping Wikipedia legal?

The scraping itself is legal, sure. All Wikipedia text is available under the Creative Commons Attribution-ShareAlike License (CC-BY-SA). So long as any reuse follows the terms of that license, that reuse is also legal.


2 Answers

I found a super easy and short way to do this: import the wikipedia Python module and then use pandas' read_html to put the table into a DataFrame.

From there you can apply any amount of analysis you wish.

import pandas as pd
import wikipedia as wp
html = wp.page("List_of_video_games_considered_the_best").html().encode("UTF-8")
try: 
    df = pd.read_html(html)[1]  # Try 2nd table first as most pages contain contents table first
except IndexError:
    df = pd.read_html(html)[0]
print(df.to_string())
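For the page in the question, the same approach works; here is a minimal sketch, assuming the target table can be picked out via read_html's match argument using a header it contains (the phrase "Head of state" is my assumption about that table):

import pandas as pd
import wikipedia as wp

# Fetch the article HTML and let pandas keep only tables whose text
# matches the given string (assumption: the heads-of-state table
# contains the header "Head of state").
html = wp.page("List_of_current_heads_of_state_and_government").html().encode("UTF-8")
df = pd.read_html(html, match="Head of state")[0]
print(df.head())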

Or, if you would like to call it from the command line, simply run:

python yourfile.py -p Wikipedia_Page_Article_Here

import pandas as pd
import argparse
import wikipedia as wp
parser = argparse.ArgumentParser()
parser.add_argument("-p", "--wiki_page", help="Give a wiki page to get table", required=True)
args = parser.parse_args()
html = wp.page(args.wiki_page).html().encode("UTF-8")
try: 
    df = pd.read_html(html)[1]  # Try 2nd table first as most pages contain contents table first
except IndexError:
    df = pd.read_html(html)[0]
print(df.to_string())
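For the page in the question, that would be (assuming the script is saved as yourfile.py, as above):

python yourfile.py -p List_of_current_heads_of_state_and_government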

Hope this helps someone out there!

answered Sep 18 '22 by rup

If I understood your question correctly, the following should get you there:

import requests
from bs4 import BeautifulSoup

URL = "https://en.wikipedia.org/wiki/List_of_current_heads_of_state_and_government"

res = requests.get(URL).text
soup = BeautifulSoup(res,'lxml')
for items in soup.find('table', class_='wikitable').find_all('tr')[1:]:
    data = items.find_all(['th', 'td'])
    try:
        country = data[0].a.text
        title = data[1].a.text
        # the name link directly follows the title link in the same cell
        name = data[1].a.find_next_sibling().text
    except IndexError:
        continue  # skip rows that don't have the expected cells
    print("{}|{}|{}".format(country, title, name))

Output:

Afghanistan|President|Ashraf Ghani
Albania|President|Ilir Meta
Algeria|President|Abdelaziz Bouteflika
Andorra|Episcopal Co-Prince|Joan Enric Vives Sicília
Angola|President|João Lourenço
Antigua and Barbuda|Queen|Elizabeth II
Argentina|President|Mauricio Macri

And so on ----
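Since the desired output is a pandas DataFrame rather than printed rows, here is a minimal sketch that collects the same fields into a DataFrame (the column names State/Title/Name follow the question; the AttributeError guard is my addition for cells without links):

import requests
import pandas as pd
from bs4 import BeautifulSoup

URL = "https://en.wikipedia.org/wiki/List_of_current_heads_of_state_and_government"

soup = BeautifulSoup(requests.get(URL).text, 'lxml')
rows = []
for items in soup.find('table', class_='wikitable').find_all('tr')[1:]:
    data = items.find_all(['th', 'td'])
    try:
        # First cell: state; in the second cell the title link is
        # followed by the name link.
        rows.append({
            'State': data[0].a.text,
            'Title': data[1].a.text,
            'Name': data[1].a.find_next_sibling().text,
        })
    except (IndexError, AttributeError):
        continue  # skip rows that do not follow the expected layout

df = pd.DataFrame(rows, columns=['State', 'Title', 'Name'])
print(df.head())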

answered Sep 18 '22 by SIM