 

Following links, Scrapy web crawler framework

After several readings of the Scrapy docs, I'm still not catching the difference between using CrawlSpider rules and implementing my own link-extraction mechanism in the callback method.

I'm about to write a new web crawler using the latter approach, but only because I had a bad experience in a past project using rules. I'd really like to know exactly what I'm doing and why.

Anyone familiar with this tool?

Thanks for your help!

romeroqj


2 Answers

CrawlSpider inherits from BaseSpider. It just adds rules to extract and follow links. If these rules are not flexible enough for you, use BaseSpider directly.
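For a sense of what the rules approach looks like, here is a minimal sketch (the spider name, domain, and URL pattern are invented for illustration):

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class MyCrawlSpider(CrawlSpider):
    """Hypothetical rules-based spider."""
    name = 'mycrawlspider'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com/']

    # Follow links matching the pattern and pass the responses to parse_item.
    # Note: a CrawlSpider callback must not be named 'parse'.
    rules = (
        Rule(SgmlLinkExtractor(allow=(r'/category/',)), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # Extraction logic would go here.
        pass

The BaseSpider approach below skips the rules machinery and builds every Request by hand: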

import urlparse

from scrapy import log
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider

# UItem is assumed to be the project's Item subclass; adjust the import path.
from myproject.items import UItem


class USpider(BaseSpider):
    """My spider."""

    name = 'uspider'

    start_urls = ['http://www.amazon.com/s/?url=search-alias%3Dapparel&sort=relevance-fs-browse-rank']
    allowed_domains = ['amazon.com']

    def parse(self, response):
        '''Parse main category search page and extract subcategory search link.'''
        self.log('Downloaded category search page.', log.DEBUG)
        if response.meta['depth'] > 5:
            self.log('Categories depth limit reached (recursive links?). Stopping further following.', log.WARNING)
            return

        hxs = HtmlXPathSelector(response)
        subcategories = hxs.select("//div[@id='refinements']/*[starts-with(.,'Department')]/following-sibling::ul[1]/li/a[span[@class='refinementLink']]/@href").extract()
        for subcategory in subcategories:
            subcategorySearchLink = urlparse.urljoin(response.url, subcategory)
            yield Request(subcategorySearchLink, callback=self.parseSubcategory)

    def parseSubcategory(self, response):
        '''Parse subcategory search page and extract item links.'''
        hxs = HtmlXPathSelector(response)

        for itemLink in hxs.select('//a[@class="title"]/@href').extract():
            itemLink = urlparse.urljoin(response.url, itemLink)
            self.log('Requesting item page: ' + itemLink, log.DEBUG)
            yield Request(itemLink, callback=self.parseItem)

        try:
            nextPageLink = hxs.select("//a[@id='pagnNextLink']/@href").extract()[0]
            nextPageLink = urlparse.urljoin(response.url, nextPageLink)
            self.log('\nGoing to next search page: ' + nextPageLink + '\n', log.DEBUG)
            yield Request(nextPageLink, callback=self.parseSubcategory)
        except IndexError:
            # No next page link found.
            self.log('Whole category parsed: ' + response.url, log.DEBUG)

    def parseItem(self, response):
        '''Parse item page and extract product info.'''

        hxs = HtmlXPathSelector(response)
        item = UItem()

        item['brand'] = self.extractText("//div[@class='buying']/span[1]/a[1]", hxs)
        item['title'] = self.extractText("//span[@id='btAsinTitle']", hxs)
        ...

If even BaseSpider's start_urls is not flexible enough for you, override the start_requests method.
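For example, a start_requests override along these lines (the URLs and page range are placeholders) builds the initial requests dynamically instead of listing them statically:

from scrapy.spider import BaseSpider
from scrapy.http import Request

class DynamicStartSpider(BaseSpider):
    """Hypothetical spider that generates its start requests at runtime."""
    name = 'dynamicstart'
    allowed_domains = ['example.com']

    def start_requests(self):
        # Yield the initial requests instead of relying on start_urls.
        for page in range(1, 4):
            yield Request('http://www.example.com/search?page=%d' % page,
                          callback=self.parse)

    def parse(self, response):
        pass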

warvariuc


If you want selective crawling, like fetching "Next" links for pagination, it's better to write your own spider. But for general crawling, you should use CrawlSpider and filter out the links you don't need to follow using rules and the process_links function.
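A rough sketch of that filtering approach (the allow pattern and the substring being filtered out are invented); process_links receives the extracted links before they are followed:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class FilteringSpider(CrawlSpider):
    """Hypothetical CrawlSpider that filters extracted links."""
    name = 'filteringspider'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com/']

    rules = (
        Rule(SgmlLinkExtractor(allow=(r'/products/',)),
             callback='parse_product',
             follow=True,
             process_links='drop_sorted_links'),
    )

    def drop_sorted_links(self, links):
        # Discard duplicate sort-order variants of the same listing pages.
        return [link for link in links if 'sort=' not in link.url]

    def parse_product(self, response):
        pass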

Take a look at the CrawlSpider code in scrapy/contrib/spiders/crawl.py; it isn't too complicated.

user