
Urllib2 & BeautifulSoup : Nice couple but too slow - urllib3 & threads?

I was looking for a way to optimize my code when I heard some good things about threads and urllib3. Apparently, people disagree about which solution is best.

The problem with my script below is the execution time: so slow!

Step 1: I fetch this page http://www.cambridgeesol.org/institutions/results.php?region=Afghanistan&type=&BULATS=on

Step 2: I parse the page with BeautifulSoup

Step 3: I put the data in an excel doc

Step 4: I do it again, and again, and again for all the countries in my list (it's a big list); I am just changing "Afghanistan" in the URL to another country

Here is my code:

import urllib2
import xlwt  # for writing the Excel doc
from BeautifulSoup import BeautifulSoup as soup

name_excel = "BULATS_IA"  # assumed filename base; defined before this snippet in my real script
wb = xlwt.Workbook()      # assumed workbook setup; defined before this snippet in my real script
ws = wb.add_sheet("BULATS_IA")  # We add a new tab in the Excel doc
x = 0  # We need x and y for pulling the data into the Excel doc
y = 0
Countries_List = ['Afghanistan','Albania','Andorra','Argentina','Armenia','Australia','Austria','Azerbaijan','Bahrain','Bangladesh','Belgium','Belize','Bolivia','Bosnia and Herzegovina','Brazil','Brunei Darussalam','Bulgaria','Cameroon','Canada','Central African Republic','Chile','China','Colombia','Costa Rica','Croatia','Cuba','Cyprus','Czech Republic','Denmark','Dominican Republic','Ecuador','Egypt','Eritrea','Estonia','Ethiopia','Faroe Islands','Fiji','Finland','France','French Polynesia','Georgia','Germany','Gibraltar','Greece','Grenada','Hong Kong','Hungary','Iceland','India','Indonesia','Iran','Iraq','Ireland','Israel','Italy','Jamaica','Japan','Jordan','Kazakhstan','Kenya','Kuwait','Latvia','Lebanon','Libya','Liechtenstein','Lithuania','Luxembourg','Macau','Macedonia','Malaysia','Maldives','Malta','Mexico','Monaco','Montenegro','Morocco','Mozambique','Myanmar (Burma)','Nepal','Netherlands','New Caledonia','New Zealand','Nigeria','Norway','Oman','Pakistan','Palestine','Papua New Guinea','Paraguay','Peru','Philippines','Poland','Portugal','Qatar','Romania','Russia','Saudi Arabia','Serbia','Singapore','Slovakia','Slovenia','South Africa','South Korea','Spain','Sri Lanka','Sweden','Switzerland','Syria','Taiwan','Thailand','Trinadad and Tobago','Tunisia','Turkey','Ukraine','United Arab Emirates','United Kingdom','United States','Uruguay','Uzbekistan','Venezuela','Vietnam']
Longueur = len(Countries_List)

for Countries in Countries_List:
    y = 0

    # I am opening the page with the name of the corresponding country in the URL
    htmlSource = urllib2.urlopen("http://www.cambridgeesol.org/institutions/results.php?region=%s&type=&BULATS=on" % (Countries)).read()
    s = soup(htmlSource)
    tableGood = s.findAll('table')
    try:
        rows = tableGood[3].findAll('tr')
        for tr in rows:
            cols = tr.findAll('td')
            y = 0
            x = x + 1
            for td in cols:
                hum = td.text
                ws.write(x, y, hum)
                y = y + 1
        wb.save("%s.xls" % name_excel)  # save once per country instead of after every cell

    except IndexError:
        pass

So I know that not everything is perfect, but I am looking forward to learning new things in Python! The script is very slow because urllib2 is not that fast, and neither is BeautifulSoup. For the soup part, I guess I can't really make it better, but for urllib2, I think I can.

EDIT 1: "Multiprocessing useless with urllib2?" seems interesting in my case. What do you guys think about this potential solution?

# Make sure that the queue is thread-safe!!

def producer(self):
    # Only need one producer, although you could have multiple
    with open('urllist.txt', 'r') as fh:
        for line in fh:
            self.queue.put(line.strip())

def consumer(self):
    # Fire up N of these babies for some speed
    while True:
        url = self.queue.get()
        dh = urllib2.urlopen(url)
        with open('/dev/null', 'w') as fh:  # gotta put it somewhere
            fh.write(dh.read())
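
To make sure I understand how these two pieces would actually run together, here is a minimal self-contained sketch of the same idea using the standard library's Queue and a few threads (the urllist.txt filename and the thread count are just placeholders I chose):

import threading
import urllib2
from Queue import Queue  # Queue.Queue is already thread-safe

NUM_CONSUMERS = 4  # placeholder: number of consumer threads

queue = Queue()

def producer():
    # One producer reads the URL list and fills the queue.
    with open('urllist.txt', 'r') as fh:
        for line in fh:
            queue.put(line.strip())
    # One sentinel per consumer so each thread knows when to stop.
    for _ in range(NUM_CONSUMERS):
        queue.put(None)

def consumer():
    # Each consumer pulls URLs until it reaches a sentinel.
    while True:
        url = queue.get()
        if url is None:
            break
        html = urllib2.urlopen(url).read()
        # ... parse html with BeautifulSoup here ...

producer()
threads = [threading.Thread(target=consumer) for _ in range(NUM_CONSUMERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()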

EDIT 2: urllib3. Can anyone tell me more about it?

Re-use the same socket connection for multiple requests (HTTPConnectionPool and HTTPSConnectionPool) (with optional client-side certificate verification). https://github.com/shazow/urllib3

Since I am requesting the same website 122 times for different pages, I guess reusing the same socket connection could be interesting, am I wrong? Can't it be faster?

import urllib3

http = urllib3.PoolManager()
r = http.request('GET', 'http://www.bulats.org')
for Pages in Pages_List:
    r = http.request('GET', 'http://www.bulats.org/agents/find-an-agent?field_continent_tid=All&field_country_tid=All&page=%s' % (Pages))
    s = soup(r.data)  # the pooled keep-alive connection is reused for each request to this host
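
I suppose the honest way to answer my own "can't it be faster?" is to time it. Here is a rough sketch I could run, assuming a small sample of countries from my list is enough to see whether connection reuse makes a difference:

import time
import urllib2
import urllib3

url = "http://www.cambridgeesol.org/institutions/results.php?region=%s&type=&BULATS=on"
sample = ['Afghanistan', 'Albania', 'Andorra', 'Argentina', 'Armenia']  # small sample for timing

# A fresh connection for every request (what urllib2.urlopen does)
start = time.time()
for country in sample:
    urllib2.urlopen(url % country).read()
print "urllib2, new connection each time: %.2fs" % (time.time() - start)

# One pooled keep-alive connection reused for every request
http = urllib3.PoolManager()
start = time.time()
for country in sample:
    http.request('GET', url % country)
print "urllib3, pooled connection: %.2fs" % (time.time() - start)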
asked by Carto_ on Apr 22 '12


2 Answers

I don't think urllib or BeautifulSoup is slow. I ran your code on my local machine with a modified version (removed the Excel stuff). It took around 100ms to open the connection, download the content, parse it, and print it to the console for a country.

Of that, about 10ms per country is the total time BeautifulSoup spent parsing the content and printing it to the console. That is fast enough.
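
If you want to verify that split yourself, a rough way to time the download separately from the parse, using the same URL pattern from your question, is something like:

import time
import urllib2
from BeautifulSoup import BeautifulSoup

url = ("http://www.cambridgeesol.org/institutions/results.php"
       "?region=Afghanistan&type=&BULATS=on")

start = time.time()
html = urllib2.urlopen(url).read()             # network: DNS + connect + download
fetched = time.time()
tables = BeautifulSoup(html).findAll('table')  # pure parsing, no network
parsed = time.time()

print "download: %dms, parse: %dms" % ((fetched - start) * 1000, (parsed - fetched) * 1000)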

Nor do I believe that using Scrapy or threading is going to solve the problem, because the real problem is the expectation that it is going to be fast.

Welcome to the world of HTTP. Sometimes it is going to be slow, sometimes it will be very fast. A few reasons a connection can be slow:

  • the server handling your request (it may even return a 404 sometimes),
  • DNS resolution,
  • the HTTP handshake,
  • your ISP's connection stability,
  • your bandwidth rate,
  • packet loss rate,

etc.

Don't forget, you are trying to make 121 HTTP requests to a server consecutively, and you don't know what kind of servers they have. They might also ban your IP address because of the consecutive calls.

Take a look at the Requests lib and read its documentation. If you're doing this to learn more Python, don't jump into Scrapy directly.
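
As a rough sketch of what that could look like for your country loop (the Session reuse and the half-second pause are my suggestions, not something your current script does):

import time
import requests
from BeautifulSoup import BeautifulSoup

url = "http://www.cambridgeesol.org/institutions/results.php?region=%s&type=&BULATS=on"

session = requests.Session()  # keep-alive: reuses the TCP connection between requests

for country in Countries_List:
    response = session.get(url % country)
    tables = BeautifulSoup(response.text).findAll('table')
    # ... write the rows of tables[3] into the Excel sheet as before ...
    time.sleep(0.5)  # be polite to the server and reduce the risk of an IP ban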

answered by Bahadir Cambel


Consider using something like workerpool. Referring to the Mass Downloader example, combining it with urllib3 would look something like:

import workerpool
import urllib3

URL_LIST = [] # Fill this from somewhere

NUM_SOCKETS = 3
NUM_WORKERS = 5

# We want a few more workers than sockets so that they have extra
# time to parse things and such.

http = urllib3.PoolManager(maxsize=NUM_SOCKETS)
workers = workerpool.WorkerPool(size=NUM_WORKERS)

class MyJob(workerpool.Job):
    def __init__(self, url):
        self.url = url

    def run(self):
        r = http.request('GET', self.url)
        # ... do parsing stuff here


for url in URL_LIST:
    workers.put(MyJob(url))

# Send shutdown jobs to all threads, and wait until all the jobs have been completed
# (If you don't do this, the script might hang due to a rogue undead thread.)
workers.shutdown()
workers.wait()

You may note from the Mass Downloader examples that there are multiple ways of doing this. I chose this particular example just because it's less magical, but any of the other strategies are valid also.

Disclaimer: I am the author of both urllib3 and workerpool.

answered by shazow