 

Is there a way to reduce Scrapy's memory consumption?

I'm trying to scrape a rather large website (with around 1 million pages) with Scrapy. The spider works fine and it is able to scrape a few thousand pages before inevitably crashing due to low memory.

Things I've tried:

  • Using the -s JOBDIR=<DIRECTORY> option: this gave me an initial improvement and I was able to crawl about twice as many URLs as before. However, even with this option Scrapy's memory consumption slowly increases until the process is killed by the out-of-memory killer.
  • Disabling unnecessary work, such as cutting down excessive output by raising the log level from DEBUG to INFO.
  • Using yield statements instead of returning lists (a simplified sketch of the current setup follows this list).
  • Keeping the returned data to an absolute minimum.
  • Running the spider on a beefier machine: This helps me crawl a bit more, but inevitably it crashes again at a later point (and I'm nowhere near the 1 million mark).
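For reference, this is roughly how the spider is set up at the moment (the spider name, start URL, and selectors are simplified placeholders):

```python
# settings.py -- keep logging at INFO to avoid excessive output
LOG_LEVEL = 'INFO'

# spider -- run with persistent on-disk state, e.g.:
#   scrapy crawl mysite -s JOBDIR=crawls/mysite-1
import scrapy

class MySiteSpider(scrapy.Spider):
    name = 'mysite'
    start_urls = ['https://example.com/']

    def parse(self, response):
        # follow links and yield items one at a time
        # instead of building up lists in memory
        for href in response.css('a::attr(href)').getall():
            yield response.follow(href, callback=self.parse)
        yield {'url': response.url,
               'title': response.css('title::text').get()}
```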

Is there something I'm missing that could help me complete the scrape?

asked Aug 19 '17 by user2064000

1 Answer

Don't store any intermediate data in memory, and check whether your code is going through any infinite loops.

For storing URLs, use a queuing broker such as RabbitMQ or Redis.
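For example, a minimal Redis-backed URL queue could look like this (the queue name `url_queue` is just an example):

```python
# minimal sketch of a Redis-backed URL queue (pip install redis)
import redis

r = redis.Redis(host='localhost', port=6379, db=0)

def push_url(url):
    r.lpush('url_queue', url)      # producer: add a URL to crawl

def pop_url():
    raw = r.rpop('url_queue')      # consumer: take the next URL (FIFO)
    return raw.decode('utf-8') if raw else None
```

Because the queue lives outside the crawler process, the list of pending URLs no longer sits in the spider's memory.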

For the final data, store it in a database using a Python DB connection library (SQLAlchemy, mysql-connector, pyodbc, etc., depending on the database you choose).

This lets your code run distributed and efficiently (remember to use NullPool or SingletonThreadPool to avoid opening too many DB connections).

An easy and efficient approach is to use an SQLite database: insert the 1 million URLs into a table with a status column of "done" or "notyet". After crawling a URL and storing its data in another table, update that URL's status from "notyet" to "done". This keeps track of the URLs scraped so far, so you can restart the script if anything goes wrong and scrape only the URLs that are not done yet.
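A minimal sketch of that bookkeeping with the standard sqlite3 module (table and status values as described above):

```python
# sketch: track crawl progress in SQLite so the job can be resumed safely
import sqlite3

conn = sqlite3.connect('crawl_state.db')
conn.execute('''CREATE TABLE IF NOT EXISTS urls (
                    url    TEXT PRIMARY KEY,
                    status TEXT NOT NULL DEFAULT 'notyet'
                )''')

def seed(urls):
    # load the ~1 million URLs once; duplicates are ignored on reruns
    conn.executemany('INSERT OR IGNORE INTO urls (url) VALUES (?)',
                     ((u,) for u in urls))
    conn.commit()

def next_batch(limit=100):
    # only hand out URLs that have not been scraped yet
    rows = conn.execute("SELECT url FROM urls WHERE status = 'notyet' LIMIT ?",
                        (limit,)).fetchall()
    return [r[0] for r in rows]

def mark_done(url):
    conn.execute("UPDATE urls SET status = 'done' WHERE url = ?", (url,))
    conn.commit()
```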

answered Oct 05 '22 by teja chintham