
Multiprocessing useless with urllib2?

I recently tried to speed up a little tool (which uses urllib2 to send requests to the (unofficial) twitter-button-count URL (> 2000 urls) and parses its results) with the multiprocessing module (and its worker pools). I read several discussions here about multithreading (which slowed the whole thing down compared to a standard, non-threaded version) and multiprocessing, but I couldn't find an answer to a (probably very simple) question:

Can you speed up URL calls with multiprocessing, or is the bottleneck something like the network adapter? I don't see which part of, for example, the urllib2 open method could be parallelized and how that should work...

EDIT: This is the request I want to speed up and the current multiprocessing setup:

 urls=["www.foo.bar", "www.bar.foo",...]
 tw_url='http://urls.api.twitter.com/1/urls/count.json?url=%s'

 def getTweets(self,urls):
    for i in urls:
        try:
            self.tw_que=urllib2.urlopen(tw_url %(i))
            self.jsons=json.loads(self.tw_que.read())
            self.tweets.append({'url':i,'date':today,'tweets':self.jsons['count']})
        except ValueError:
            print ....
            continue
    return self.tweets 

 if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=4)            
    result = [pool.apply_async(getTweets(i,)) for i in urls]
    [i.get() for i in result]
dorvak asked Aug 01 '11



3 Answers

Ah, here comes yet another discussion about the GIL. Well, here's the thing: fetching content with urllib2 is going to be mostly IO-bound. Native threading and multiprocessing will both have the same performance when the task is IO-bound (threading only becomes a problem when it's CPU-bound). Yes, you can speed it up; I've done it myself using Python threads and something like 10 downloader threads.

Basically you use a producer-consumer model with one thread (or process) producing urls to download, and N threads (or processes) consuming from that queue and making requests to the server.

Here's some pseudo-code:

# Make sure that the queue is thread-safe!!
# (Queue.Queue from the standard library already is.)

def producer(self):
    # Only need one producer, although you could have multiple
    with open('urllist.txt', 'r') as fh:
        for line in fh:
            self.queue.put(line.strip())

def consumer(self):
    # Fire up N of these babies for some speed
    while True:
        url = self.queue.get()
        dh = urllib2.urlopen(url)
        with open('/dev/null', 'w') as fh:  # gotta put it somewhere
            fh.write(dh.read())

Now if you're downloading very large chunks of data (hundreds of MB) and a single request completely saturates the bandwidth, then yes, running multiple downloads is pointless. The reason you run multiple downloads (generally) is that requests are small and have relatively high latency / overhead.
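For what it's worth, here is a minimal self-contained sketch of that producer-consumer model using only the standard library. The Downloader class, the urllist.txt file and the 10-thread count are my own assumptions for illustration, not something prescribed above:

import threading
import urllib2
from Queue import Queue  # Queue.Queue is thread-safe out of the box

class Downloader(object):
    def __init__(self):
        self.queue = Queue()

    def producer(self):
        # one producer: read urls from a file into the queue
        with open('urllist.txt', 'r') as fh:
            for line in fh:
                self.queue.put(line.strip())

    def consumer(self):
        # each consumer thread pulls a url off the queue and fetches it
        while True:
            url = self.queue.get()
            try:
                urllib2.urlopen(url).read()
            finally:
                self.queue.task_done()

if __name__ == '__main__':
    dl = Downloader()
    dl.producer()
    for _ in range(10):  # fire up N consumer threads
        t = threading.Thread(target=dl.consumer)
        t.daemon = True
        t.start()
    dl.queue.join()  # block until every queued url has been handled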

Chris Eberle answered Sep 28 '22


Take a look at gevent and specifically at this example: concurrent_download.py. It will be reasonably faster than multiprocessing and multithreading, plus it can handle thousands of connections easily.
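A rough sketch of that approach (it assumes gevent is installed and follows the spirit of concurrent_download.py rather than copying it; the urls are just placeholders):

# monkey-patching makes urllib2's sockets cooperative, so the
# greenlets can overlap their network waits
from gevent import monkey; monkey.patch_all()
import gevent
import urllib2

def fetch(url):
    return urllib2.urlopen(url).read()

urls = ['http://www.google.com', 'http://www.python.org', 'http://www.yahoo.com']
jobs = [gevent.spawn(fetch, url) for url in urls]
gevent.joinall(jobs, timeout=10)
print [len(job.value) for job in jobs if job.value is not None]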

Denis answered Sep 28 '22


It depends! Are you contacting different servers? Are the transferred files small or big? Do you lose most of the time waiting for the server to reply or transferring data?

Generally, multiprocessing involves some overhead and as such you want to be sure that the speedup gained by parallelizing the work is larger than the overhead itself.

Another point: network and thus I/O-bound applications work – and scale – better with asynchronous I/O and an event-driven architecture instead of threading or multiprocessing, as in such applications much of the time is spent waiting on I/O and not doing any computation.

For your specific problem, I would try to implement a solution by using Twisted, gevent, Tornado or any other networking framework which does not use threads to parallelize connections.
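As an illustration of that non-threaded style, here is a hedged sketch only; it assumes an older Twisted release where twisted.web.client.getPage is available, and the urls are placeholders:

from twisted.internet import reactor, defer
from twisted.web.client import getPage

urls = ['http://www.google.com', 'http://www.python.org']

def report(results):
    # DeferredList hands back (success, value) pairs
    for success, page in results:
        if success:
            print len(page)
    reactor.stop()

# all requests are in flight at once on a single thread
deferreds = [getPage(url) for url in urls]
defer.DeferredList(deferreds, consumeErrors=True).addCallback(report)
reactor.run()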

GaretJax answered Sep 28 '22