Scrapy set depth limit per allowed_domains

I am crawling 6 different allowed_domains and would like to limit the crawl depth for 1 of those domains. How would I go about limiting the depth of that single domain in Scrapy? Or would it be possible to crawl only 1 level deep into offsite domains?

asked Jan 06 '15 by E liquid Vape


1 Answer

Scrapy doesn't provide anything like this. You can set the DEPTH_LIMIT per-spider, but not per-domain.
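For reference, the per-spider knob is just a regular setting. A minimal sketch (the spider name is a placeholder):

import scrapy


class MySpider(scrapy.Spider):
    name = 'myspider'  # placeholder
    # one limit for every domain this spider touches, no per-domain control
    custom_settings = {'DEPTH_LIMIT': 2}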

What can we do? Read the code, drink coffee and solve it (order is important).

The idea is to disable Scrapy's built-in DepthMiddleware and provide our custom one instead.

First, let's define settings:

  • DOMAIN_DEPTHS will be a dictionary with depth limits per domain
  • the existing DEPTH_LIMIT setting stays as the fallback for any domain not listed there

Example settings:

DOMAIN_DEPTHS = {'amazon.com': 1, 'homedepot.com': 4}
DEPTH_LIMIT = 3
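With these settings, links on amazon.com are followed at most 1 level deep and links on homedepot.com 4 levels deep, while every other domain falls back to the default limit of 3.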

Okay, now the custom middleware (based on DepthMiddleware):

import logging

from scrapy.http import Request
import tldextract

logger = logging.getLogger(__name__)


class DomainDepthMiddleware(object):
    def __init__(self, domain_depths, default_depth):
        self.domain_depths = domain_depths
        self.default_depth = default_depth

    @classmethod
    def from_crawler(cls, crawler):
        settings = crawler.settings
        domain_depths = settings.getdict('DOMAIN_DEPTHS', default={})
        default_depth = settings.getint('DEPTH_LIMIT', 1)

        return cls(domain_depths, default_depth)

    def process_spider_output(self, response, result, spider):
        def _filter(request):
            if isinstance(request, Request):
                # get max depth per domain
                domain = tldextract.extract(request.url).registered_domain
                maxdepth = self.domain_depths.get(domain, self.default_depth)

                # the new request sits one hop deeper than the response
                # that produced it
                depth = response.meta.get('depth', 0) + 1
                request.meta['depth'] = depth

                if maxdepth and depth > maxdepth:
                    logger.debug(
                        "Ignoring link (depth > %(maxdepth)d): %(requrl)s",
                        {'maxdepth': maxdepth, 'requrl': request.url},
                        extra={'spider': spider})
                    return False
            return True

        return (r for r in result or () if _filter(r))
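Since process_spider_output sees everything the spider callbacks yield, the filter catches outgoing requests no matter which parse method produced them; scraped items pass through untouched, because _filter only rejects Request instances that are too deep.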

Note that this requires the tldextract module to be installed (pip install tldextract); it's used to extract the registered domain name from a URL:

>>> import tldextract
>>> url = 'http://stackoverflow.com/questions/27805952/scrapy-set-depth-limit-per-allowed-domains'
>>> tldextract.extract(url).registered_domain
'stackoverflow.com'
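registered_domain also folds subdomains back into the parent domain, which is what we want when matching against the DOMAIN_DEPTHS keys. A quick illustration (the URL is made up):

>>> tldextract.extract('http://www.amazon.com/some-page').registered_domain
'amazon.com'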

Now we need to turn off the default DepthMiddleware (by setting it to None) and plug in the one we implemented:

SPIDER_MIDDLEWARES = {
    'myproject.middlewares.DomainDepthMiddleware': 900,
    'scrapy.spidermiddlewares.depth.DepthMiddleware': None
}
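To put it all together, here's a minimal sketch of a spider wired up to the middleware (the spider name and start URLs are placeholders; myproject/middlewares.py is assumed to hold the class above):

import scrapy


class DepthDemoSpider(scrapy.Spider):
    # hypothetical spider; name, domains and URLs are placeholders
    name = 'depth_demo'
    allowed_domains = ['amazon.com', 'homedepot.com']
    start_urls = ['http://www.amazon.com/', 'http://www.homedepot.com/']

    custom_settings = {
        'DOMAIN_DEPTHS': {'amazon.com': 1, 'homedepot.com': 4},
        'DEPTH_LIMIT': 3,
        'SPIDER_MIDDLEWARES': {
            'myproject.middlewares.DomainDepthMiddleware': 900,
            'scrapy.spidermiddlewares.depth.DepthMiddleware': None,
        },
    }

    def parse(self, response):
        # follow every link; the middleware silently drops requests
        # that exceed the depth limit for their domain
        for href in response.css('a::attr(href)').getall():
            yield response.follow(href, callback=self.parse)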
answered Nov 15 '22 by alecxe