 

How to force Scrapy to crawl duplicate URLs?

I am learning Scrapy, a web crawling framework.
By default it does not crawl duplicate URLs, i.e. URLs that Scrapy has already crawled.

How can I make Scrapy crawl duplicate URLs, or URLs it has already crawled?
I tried to find out on the internet but could not find relevant help.

I found DUPEFILTER_CLASS = RFPDupeFilter and SgmlLinkExtractor mentioned in Scrapy - Spider crawls duplicate urls, but that question is the opposite of what I am looking for.

asked Apr 17 '14 by Alok

2 Answers

You're probably looking for the dont_filter=True argument on Request(). See http://doc.scrapy.org/en/latest/topics/request-response.html#request-objects
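For example, here is a minimal sketch of a spider that revisits a page it has already crawled (the spider name, URL and callback names are just placeholders, not something from the Scrapy docs):

import scrapy

class ExampleSpider(scrapy.Spider):
    name = 'example'
    start_urls = ['http://example.com']

    def parse(self, response):
        # Without dont_filter=True this request would be dropped by the
        # dupe filter, because the same URL was already crawled via start_urls.
        yield scrapy.Request(
            response.url,
            callback=self.parse_again,
            dont_filter=True,
        )

    def parse_again(self, response):
        self.logger.info('Re-crawled %s', response.url)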

answered Oct 05 '22 by paul trmbrth

A more elegant solution is to disable the duplicate filter altogether:

# settings.py
DUPEFILTER_CLASS = 'scrapy.dupefilters.BaseDupeFilter'

This way you don't have to clutter all your Request-creation code with dont_filter=True. A nice side effect: unlike dont_filter=True, this disables only duplicate filtering and not other filters such as offsite filtering.

If you want to apply this setting selectively to only one or a few of the spiders in your project, you can set it via custom_settings in the spider class:

class MySpider(scrapy.Spider):
    name = 'myspider'

    custom_settings = {
        'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter',
    }
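As a quick sanity check (a sketch only, assuming import scrapy at the top of the module; the URL is a placeholder), you could extend the spider above with two requests to the same page and both will be crawled:

    def start_requests(self):
        # With BaseDupeFilter both requests are scheduled, even though
        # they point to the same URL.
        yield scrapy.Request('http://example.com', callback=self.parse)
        yield scrapy.Request('http://example.com', callback=self.parse)

    def parse(self, response):
        self.logger.info('Crawled %s', response.url)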
answered Oct 05 '22 by Done Data Solutions