
How do I stop all spiders and the engine immediately after a condition in a pipeline is met?

We have a system written with Scrapy that crawls several websites. There are multiple spiders and a few cascaded pipelines through which the items from all crawlers pass. One of the pipeline components queries Google's servers to geocode addresses. Google imposes a limit of 2,500 requests per day per IP address and threatens to ban an IP address that keeps querying even after Google has responded with the warning message 'OVER_QUERY_LIMIT'.

Hence I want to know of a mechanism I can invoke from within the pipeline that will completely and immediately stop all further crawling/processing by all spiders, as well as the main engine.

I have checked other similar questions and their answers have not worked:

  • Force my scrapy spider to stop crawling
from scrapy.project import crawler
crawler._signal_shutdown(9, 0)  # run this if the connection fails

This does not work: it takes time for the spider to stop, so many more requests are made to Google in the meantime (which could get my IP address banned).

import sys
sys.exit("SHUT DOWN EVERYTHING!")

This one doesn't work at all: items keep getting generated and passed to the pipeline, although the log prints sys.exit() -> exceptions.SystemExit raised (to no effect).

  • How can I make scrapy crawl break and exit when encountering the first exception?
crawler.engine.close_spider(self, 'log message')

This one has the same problem as the first case mentioned above.

I also tried:

scrapy.project.crawler.engine.stop()

to no avail.

EDIT: If, in the pipeline, I do:

from scrapy.contrib.closespider import CloseSpider

what should I pass as the 'crawler' argument to CloseSpider's __init__() from the scope of my pipeline?

asked Mar 14 '12 by aniketd

1 Answer

You can raise a CloseSpider exception to close down a spider. However, I don't think this will work from a pipeline.

EDIT: avaleske notes in the comments to this answer that he was able to raise a CloseSpider exception from a pipeline. The wisest course would be to use that approach.
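A minimal sketch of what that can look like, assuming the geocoding step records its status on the item (the 'geocode_status' field is hypothetical, and the import path is the one current Scrapy uses):

from scrapy.exceptions import CloseSpider

class GeocodingGuardPipeline(object):
    def process_item(self, item, spider):
        # Stop this spider as soon as Google reports the quota warning.
        if item.get('geocode_status') == 'OVER_QUERY_LIMIT':
            raise CloseSpider('Google geocoding quota exceeded')
        return item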

A similar situation has been described on the Scrapy Users group, in this thread.

I quote:

To close a spider from any part of your code you should use the engine.close_spider method. See this extension for a usage example: https://github.com/scrapy/scrapy/blob/master/scrapy/contrib/closespider.py#L61
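As for the 'crawler' question in the edit above: in Scrapy versions that provide the from_crawler hook, a pipeline can receive the running crawler that way and call the engine directly. A sketch under that assumption (the quota check is again hypothetical):

class EngineClosePipeline(object):
    def __init__(self, crawler):
        self.crawler = crawler

    @classmethod
    def from_crawler(cls, crawler):
        # Scrapy instantiates the pipeline through this hook and
        # hands over the running Crawler.
        return cls(crawler)

    def process_item(self, item, spider):
        if item.get('geocode_status') == 'OVER_QUERY_LIMIT':
            # Ask the engine to close the spider; requests already
            # in flight may still complete before it fully stops.
            self.crawler.engine.close_spider(spider, 'quota exceeded')
        return item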

You could write your own extension, using closespider.py as an example, that shuts down a spider when a certain condition is met.
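A sketch of such an extension, modeled loosely on closespider.py; it watches scraped items for the same hypothetical 'geocode_status' field and asks the engine to close the spider:

from scrapy import signals

class CloseOnQuotaError(object):
    def __init__(self, crawler):
        self.crawler = crawler
        # Get notified about every item the spiders produce.
        crawler.signals.connect(self.item_scraped, signal=signals.item_scraped)

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def item_scraped(self, item, spider):
        if item.get('geocode_status') == 'OVER_QUERY_LIMIT':
            self.crawler.engine.close_spider(spider, 'geocoding quota exceeded')

Enable it by adding the class path to the EXTENSIONS setting in settings.py.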

Another "hack" would be to set a flag on the spider in the pipeline. For example:

pipeline:

def process_item(self, item, spider):
    if some_flag:  # e.g. the geocoder reported OVER_QUERY_LIMIT
        spider.close_down = True
    return item

spider:

from scrapy.exceptions import CloseSpider

def parse(self, response):
    # getattr guards against the flag never having been set by the pipeline
    if getattr(self, 'close_down', False):
        raise CloseSpider(reason='API usage exceeded')
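For completeness, a hypothetical sketch of the pipeline side that sets the flag. geocode() and item['address'] are illustrative stand-ins, and only the 'OVER_QUERY_LIMIT' status string comes from Google's documented responses:

import json
from urllib.parse import urlencode
from urllib.request import urlopen

def geocode(address):
    # Illustrative, era-appropriate endpoint; current Google APIs
    # require an API key and HTTPS.
    qs = urlencode({'address': address, 'sensor': 'false'})
    url = 'http://maps.googleapis.com/maps/api/geocode/json?' + qs
    return json.load(urlopen(url))

class GeocodingPipeline(object):
    def process_item(self, item, spider):
        result = geocode(item['address'])
        if result.get('status') == 'OVER_QUERY_LIMIT':
            spider.close_down = True  # the spider raises CloseSpider on its next parse
        elif result.get('results'):
            item['location'] = result['results'][0]['geometry']['location']
        return item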
answered Oct 05 '22 by Sjaak Trekhaak