How to crawl an entire website with Scrapy?

I'm unable to crawl a whole website; Scrapy only crawls at the surface, and I want it to crawl deeper. I've been googling for the last 5-6 hours with no help. My code is below:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class ExampleSpider(CrawlSpider):
    name = "example.com"
    allowed_domains = ["example.com"]
    start_urls = ["http://www.example.com/"]
    # First rule: follow every extracted link.
    # Second rule: call parse_item on every extracted link.
    rules = [Rule(SgmlLinkExtractor(allow=()), follow=True),
             Rule(SgmlLinkExtractor(allow=()), callback='parse_item')]

    def parse_item(self, response):
        self.log('A response from %s just arrived!' % response.url)
asked Mar 19 '13 by Abhi

1 Answer

Rules short-circuit: the first rule a link satisfies is the one that gets applied, so your second Rule (the one with the callback) will never be reached.

Change your rules to this:

rules = [Rule(SgmlLinkExtractor(), callback='parse_item', follow=True)]
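Note that SgmlLinkExtractor was deprecated in Scrapy 1.0 and has since been removed. On current Scrapy versions the same fix looks like the sketch below, using scrapy.linkextractors.LinkExtractor instead (still assuming the placeholder example.com site from the question):

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class ExampleSpider(CrawlSpider):
    name = "example.com"
    allowed_domains = ["example.com"]
    start_urls = ["http://www.example.com/"]

    # One rule that both follows every extracted link and parses each response.
    rules = [Rule(LinkExtractor(), callback='parse_item', follow=True)]

    def parse_item(self, response):
        self.logger.info('A response from %s just arrived!', response.url)

Either way, running scrapy crawl example.com from the project directory starts the crawl, and the spider will keep following links within allowed_domains instead of stopping at the start page.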
answered Oct 30 '22 by Steven Almeroth