I'm trying to scrape a rather large website (around 1 million pages) with Scrapy. The spider works fine and is able to scrape a few thousand pages before it inevitably crashes due to running out of memory.
Things I've tried:
-s JOBDIR=<DIRECTORY>: This gave me an initial improvement and I was able to crawl about twice as many URLs as with the previous approach. However, even with this option, Scrapy's memory consumption slowly increases until the process is killed by the out-of-memory killer.
Is there something I'm missing that could help me complete the scrape?
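For context, a stripped-down sketch of the kind of setup described above; the spider name, start URL, and JOBDIR path are placeholders, not the real project:

```python
# Minimal sketch of a broad-crawl spider with persistent state via JOBDIR.
# Spider name, start URL, and the JOBDIR path are placeholders.
import scrapy


class SiteSpider(scrapy.Spider):
    name = "site"
    start_urls = ["https://example.com/"]

    # Equivalent to passing -s JOBDIR=... on the command line: the scheduler
    # queue and dupefilter state are persisted to disk so an interrupted
    # crawl can be resumed.
    custom_settings = {"JOBDIR": "crawls/site-run1"}

    def parse(self, response):
        for href in response.css("a::attr(href)").getall():
            yield response.follow(href, callback=self.parse)
```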
Don't store any intermediate data in memory, and check whether the code is going into any infinite loops.
For storing URLs, use a queuing broker such as RabbitMQ or Redis, along the lines of the sketch below.
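A rough sketch of the Redis option; the key name "url_queue" and the connection settings are assumptions, and the scrapy-redis project implements this pattern more completely:

```python
# Rough sketch: feed URLs to the spider from a Redis list instead of holding
# them all in Scrapy's in-memory queue. The key name "url_queue" and the
# connection settings are assumptions.
import redis
import scrapy


class RedisFedSpider(scrapy.Spider):
    name = "redis_fed"

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.r = redis.Redis(host="localhost", port=6379, db=0)

    def start_requests(self):
        # Scrapy consumes this generator lazily, so URLs are popped from
        # Redis only as the scheduler has capacity for them.
        while True:
            url = self.r.lpop("url_queue")
            if url is None:
                break
            yield scrapy.Request(url.decode(), callback=self.parse)

    def parse(self, response):
        yield {"url": response.url, "title": response.css("title::text").get()}
```

URLs would be seeded separately, e.g. with r.rpush("url_queue", url) from a small loader script.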
For the final data, store it in a database using a Python DB connection library (SQLAlchemy, mysql-connector, pyodbc, etc., depending on the database you choose).
Together with a broker-backed URL queue, this lets your code run distributed and efficiently (remember to use NullPool or a single-connection pool to avoid opening too many DB connections); see the pipeline sketch below.
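A sketch of such a Scrapy item pipeline with SQLAlchemy and NullPool; the SQLite URL, table name, and columns are example assumptions:

```python
# Sketch of an item pipeline that writes scraped items straight to a database
# via SQLAlchemy, using NullPool so no idle connections are kept open.
# The DB URL, table name, and columns are assumptions.
from sqlalchemy import create_engine, text
from sqlalchemy.pool import NullPool


class DatabaseWriterPipeline:
    def open_spider(self, spider):
        self.engine = create_engine("sqlite:///scraped.db", poolclass=NullPool)
        with self.engine.begin() as conn:
            conn.execute(text(
                "CREATE TABLE IF NOT EXISTS pages (url TEXT PRIMARY KEY, title TEXT)"
            ))

    def process_item(self, item, spider):
        with self.engine.begin() as conn:
            conn.execute(
                text("INSERT OR REPLACE INTO pages (url, title) VALUES (:url, :title)"),
                {"url": item.get("url"), "title": item.get("title")},
            )
        return item

    def close_spider(self, spider):
        self.engine.dispose()
```

The pipeline still has to be enabled through the ITEM_PIPELINES setting of the project.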
An easy and efficient approach is to use a SQLite DB: insert all 1 million URLs into a table with a status column set to "done" or "notyet". After crawling a URL and storing its data in another table, update that URL's status from "notyet" to "done". This keeps track of the URLs scraped so far, so you can restart the script after any issue and scrape only the URLs that are not yet done. A minimal sketch follows below.
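A minimal sketch of that tracking table using the standard-library sqlite3 module; the table and column names are assumptions:

```python
# Sketch of the "done"/"notyet" tracking table with the standard-library
# sqlite3 module. Table and column names are assumptions.
import sqlite3

conn = sqlite3.connect("urls.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS urls (url TEXT PRIMARY KEY, status TEXT DEFAULT 'notyet')"
)


def seed(urls):
    # Load the full URL list once; duplicates are ignored.
    conn.executemany("INSERT OR IGNORE INTO urls (url) VALUES (?)", ((u,) for u in urls))
    conn.commit()


def next_batch(limit=1000):
    # Fetch the next chunk of URLs that have not been scraped yet.
    rows = conn.execute(
        "SELECT url FROM urls WHERE status = 'notyet' LIMIT ?", (limit,)
    ).fetchall()
    return [r[0] for r in rows]


def mark_done(url):
    # Call this after the page data has been stored in the data table.
    conn.execute("UPDATE urls SET status = 'done' WHERE url = ?", (url,))
    conn.commit()
```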