a (presumably basic) web scraping of http://www.ssa.gov/cgi-bin/popularnames.cgi in urllib

I am very new to Python (and web scraping). Let me ask you a question.

Many websites do not expose a specific URL for each page in Firefox or other browsers. For example, the Social Security Administration shows popular baby names with ranks (since 1880), but the URL does not change when I change the year from 1880 to 1881. It stays:

http://www.ssa.gov/cgi-bin/popularnames.cgi

Because I don't know a year-specific URL, I could not download the page for a given year using urllib.

The page source includes:

<input type="text" name="year" id="yob" size="4" value="1880">

So presumably, if I can control this "year" value (e.g., "1881" or "1991"), I can solve this problem. Am I right? I just don't know how to do it.

Can anybody tell me how to solve this?

If you know of any websites that might help me learn, please let me know.

THANKS!

asked Jun 20 '13 by Hyun


2 Answers

You can still use urllib. The button performs a POST to the current URL. Using Firefox's Firebug, I took a look at the network traffic and found that it sends three parameters: member, top, and year. You can send the same arguments:

import urllib
url = 'http://www.ssa.gov/cgi-bin/popularnames.cgi'

post_params = { # member was blank, so I'm excluding it.
    'top'  : '25',
    'year' : year
    }
post_args = urllib.urlencode(post_params)

Now, just send the url-encoded arguments:

urllib.urlopen(url, post_args)

If you need to send headers as well:

headers = {
    'Accept' : 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language' : 'en-US,en;q=0.5',
    'Connection' : 'keep-alive',
    'Host' : 'www.ssa.gov',
    'Referer' : 'http://www.ssa.gov/cgi-bin/popularnames.cgi',
    'User-Agent' : 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0'
    }

# urllib.urlopen() cannot set custom headers; use urllib2 for that:
import urllib2

request = urllib2.Request(url, post_args, headers)
response = urllib2.urlopen(request)

Execute the code in a loop:

for year in xrange(1880, 2014):
    # The above code...
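
Putting the pieces together, a minimal sketch might look like this (Python 2, matching the code above; the output file name is just an example, and in Python 3 the equivalents live in urllib.parse and urllib.request):

import urllib

url = 'http://www.ssa.gov/cgi-bin/popularnames.cgi'

for year in xrange(1880, 2014):
    # POST the form fields seen in the page source ('member' was blank).
    post_args = urllib.urlencode({'top': '25', 'year': str(year)})
    html = urllib.urlopen(url, post_args).read()

    # Save each year's page for later parsing (the file name is illustrative).
    with open('names_%d.html' % year, 'w') as f:
        f.write(html)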
answered Nov 15 '22 by That1Guy


I recommend using Scrapy. It's a very powerful and easy-to-use tool for web scraping. Here's why it's worth trying:

  1. Speed/performance/efficiency

    Scrapy is written with Twisted, a popular event-driven networking framework for Python. Thus, it's implemented using non-blocking (aka asynchronous) code for concurrency.

  2. Database pipelining

    Scrapy has Item Pipelines feature:

    After an item has been scraped by a spider, it is sent to the Item Pipeline, which processes it through several components that are executed sequentially.

    So, each page can be written to the database immediately after it has been downloaded.

  3. Code organization

    Scrapy gives you a nice, clear project structure, where settings, spiders, items, pipelines, etc. are separated logically. That alone makes your code clearer and easier to maintain and understand.

  4. Time to code

    Scrapy does a lot of work for you behind the scenes. This lets you focus on your actual code and logic, rather than on the "metal" part: creating processes, threads, etc.

Yeah, you got it - I love it.
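
If you go this route, here is a minimal sketch of what a spider for this page might look like (the form fields top and year come from the page source; the spider name, item fields, and CSS selectors are assumptions to illustrate the idea):

import scrapy


class BabyNamesSpider(scrapy.Spider):
    name = 'babynames'  # illustrative name

    def start_requests(self):
        for year in range(1880, 2014):
            # POST the same form fields the page's submit button sends.
            yield scrapy.FormRequest(
                'http://www.ssa.gov/cgi-bin/popularnames.cgi',
                formdata={'top': '25', 'year': str(year)},
                callback=self.parse,
                meta={'year': year},
            )

    def parse(self, response):
        # The selector below is a guess at the results table; adjust it
        # to the actual markup.
        for row in response.css('table tr'):
            cells = row.css('td::text').extract()
            if len(cells) >= 2:
                yield {
                    'year': response.meta['year'],
                    'rank': cells[0],
                    'name': cells[1],
                }

Each yielded dict is an item that would flow through any item pipelines you enable, e.g. one that writes rows to a database.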

In order to get started:

  • official tutorial
  • newcoder.io tutorial

Hope that helps.

answered Nov 15 '22 by alecxe