 

Does a multithreaded crawler in Python really speed things up?

I was looking to write a little web crawler in Python, and I started investigating writing it as a multithreaded script: one pool of threads downloading and one pool processing results. Given the GIL, would it actually do any simultaneous downloading? How does the GIL affect a web crawler? Would each thread just pick some data off the socket, then move on to the next thread to let it pick some data off the socket, and so on?

Basically, I'm asking: is a multithreaded crawler in Python really going to buy me much performance over a single-threaded one?

thanks!

James asked May 13 '10


4 Answers

The GIL is not held by the Python interpreter when doing network operations. If you are doing work that is network-bound (like a crawler), you can safely ignore the effects of the GIL.

On the other hand, you may want to measure your performance if you create lots of threads doing processing (after downloading). Limiting the number of threads there will reduce the effects of the GIL on your performance.
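For example, a minimal sketch of a small downloader pool using only the standard library (the URLs below are placeholders) might look like this:

```python
import concurrent.futures
import urllib.request

# Placeholder URLs; substitute the pages you actually want to crawl.
URLS = [
    "https://example.com/",
    "https://example.org/",
    "https://example.net/",
]

def fetch(url, timeout=10):
    # The interpreter releases the GIL while this thread blocks on the
    # socket, so the other threads can issue their own requests meanwhile.
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return url, resp.read()

# Keep the pool modest: more threads than in-flight requests just adds
# GIL contention once the CPU-bound processing starts.
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
    futures = [pool.submit(fetch, url) for url in URLS]
    for future in concurrent.futures.as_completed(futures):
        url, body = future.result()
        print(url, len(body))
```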

Greg Hewgill


Look at how scrapy works. It can help you a lot. It doesn't use threads, but it can do multiple "simultaneous" downloads, all in the same thread.

If you think about it, you have only a single network card, so parallel processing can't really add bandwidth by definition; the time goes to waiting for responses, not to computation.

What scrapy does is simply not wait around for the response to one request before sending another, all in a single thread.
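Scrapy gets this behaviour from Twisted's event loop. As a rough single-threaded sketch of the same idea, here is one way to do it with asyncio and the third-party aiohttp package (placeholder URLs again):

```python
import asyncio
import aiohttp

# Placeholder URLs to fetch concurrently from a single thread.
URLS = [
    "https://example.com/",
    "https://example.org/",
    "https://example.net/",
]

async def fetch(session, url):
    # The coroutine suspends at each await while the response is in
    # flight, so the other requests keep making progress in this thread.
    async with session.get(url) as resp:
        return url, len(await resp.read())

async def main():
    async with aiohttp.ClientSession() as session:
        for url, size in await asyncio.gather(*(fetch(session, u) for u in URLS)):
            print(url, size)

asyncio.run(main())
```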

nosklo


When it comes to crawling, you might be better off using something event-based such as Twisted, which uses non-blocking asynchronous socket operations to fetch and return data as it comes, rather than blocking on each request.

Asynchronous network operations can easily be, and usually are, single-threaded. Network I/O almost always has much higher latency than the CPU, because you really have no idea how long a page will take to return, and this is where async shines: an async operation is much lighter weight than a thread.

Edit: Here is a simple example of how to use Twisted's getPage to create a simple web crawler.
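Roughly, a getPage-based fetcher might look like the sketch below; note that getPage is deprecated in current Twisted releases in favor of twisted.web.client.Agent (or treq), and the URLs are placeholders:

```python
from twisted.internet import reactor
from twisted.internet.defer import DeferredList
from twisted.web.client import getPage

# Placeholder URLs; getPage expects byte strings on Python 3.
URLS = [b"http://example.com/", b"http://example.org/"]

def on_page(body, url):
    print(url, len(body))

def on_error(failure, url):
    print(url, failure.getErrorMessage())

deferreds = []
for url in URLS:
    d = getPage(url)           # returns a Deferred immediately; no blocking
    d.addCallback(on_page, url)
    d.addErrback(on_error, url)
    deferreds.append(d)

# Stop the reactor once every request has either succeeded or failed.
DeferredList(deferreds).addBoth(lambda _: reactor.stop())
reactor.run()
```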

jathanism


Another consideration: if you're scraping a single website and the server places limits on the frequency of requests you can send from your IP address, adding multiple threads may make no difference.
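In that situation a single thread with a polite delay between requests gets you the same throughput. A rough sketch, with an arbitrary one-request-per-second limit as a placeholder:

```python
import time
import urllib.request

# Arbitrary placeholder: at most one request per second; tune this to
# whatever the site actually allows.
MIN_INTERVAL = 1.0

def polite_fetch(urls):
    last_request = 0.0
    for url in urls:
        # Sleep off whatever remains of the minimum interval.
        wait = MIN_INTERVAL - (time.monotonic() - last_request)
        if wait > 0:
            time.sleep(wait)
        last_request = time.monotonic()
        with urllib.request.urlopen(url) as resp:
            yield url, resp.read()
```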

twneale