dynamic start_urls in scrapy

I'm using scrapy to crawl multiple pages on a site. The start_urls variable defines the pages to be crawled. I initially start with the 1st page, defining start_urls = [1st page] in the file example_spider.py.

Upon getting more info from the 1st page, I determine the next pages to be crawled and assign start_urls accordingly. Hence, I have to overwrite example_spider.py above with start_urls = [1st page, 2nd page, ..., Kth page], then run scrapy crawl again.
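
For illustration, a minimal sketch of what example_spider.py currently looks like (the spider name and URL are placeholders):

from scrapy.spider import BaseSpider


class ExampleSpider(BaseSpider):
    name = 'example'
    # Hard-coded first page; this list is rewritten by hand before
    # every run to add the 2nd, ..., Kth pages.
    start_urls = ['http://www.example.com/page1']

    def parse(self, response):
        # figure out the next pages to crawl from this response
        pass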

Is that the best approach, or is there a better way to dynamically assign start_urls using the scrapy API without having to overwrite example_spider.py? Thanks.

Harry asked Jan 10 '12

1 Answer

The start_urls class attribute contains the start URLs - nothing more. If you have extracted URLs of other pages you want to scrape, yield the corresponding requests from the parse callback, each with another callback:

import urlparse

from scrapy import log
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider


class Spider(BaseSpider):
    name = 'my_spider'
    start_urls = ['http://www.domain.com/']
    allowed_domains = ['domain.com']

    def parse(self, response):
        '''Parse main page and extract categories links.'''
        hxs = HtmlXPathSelector(response)
        urls = hxs.select("//*[@id='tSubmenuContent']/a[position()>1]/@href").extract()
        for url in urls:
            url = urlparse.urljoin(response.url, url)
            self.log('Found category url: %s' % url)
            yield Request(url, callback=self.parseCategory)

    def parseCategory(self, response):
        '''Parse category page and extract links of the items.'''
        hxs = HtmlXPathSelector(response)
        links = hxs.select("//*[@id='_list']//td[@class='tListDesc']/a/@href").extract()
        for link in links:
            itemLink = urlparse.urljoin(response.url, link)
            self.log('Found item link: %s' % itemLink, level=log.DEBUG)
            yield Request(itemLink, callback=self.parseItem)

    def parseItem(self, response):
        ...

If you still want to customize how the start requests are created, override the BaseSpider.start_requests() method.
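
For example, a minimal sketch of such an override inside the spider above (the URL list is a placeholder - in practice it could come from a file, a database, or spider arguments):

    def start_requests(self):
        # Build the initial requests yourself instead of relying on
        # the start_urls class attribute.
        initial_urls = ['http://www.domain.com/']  # placeholder
        for url in initial_urls:
            yield Request(url, callback=self.parse)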

warvariuc answered Oct 17 '22