Scrapy start_urls

Tags: python, scrapy

The script below, taken from this tutorial, contains two start_urls.

from scrapy.spider import Spider
from scrapy.selector import Selector

from dirbot.items import Website

class DmozSpider(Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
    ]

    def parse(self, response):
        """
        The lines below are a spider contract. For more info see:
        http://doc.scrapy.org/en/latest/topics/contracts.html
        @url http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/
        @scrapes name
        """
        sel = Selector(response)
        sites = sel.xpath('//ul[@class="directory-url"]/li')
        items = []

        for site in sites:
            item = Website()
            item['name'] = site.xpath('a/text()').extract()
            item['url'] = site.xpath('a/@href').extract()
            item['description'] = site.xpath('text()').re(r'-\s[^\n]*\r')
            items.append(item)

        return items

But why does it scrape only these 2 web pages? I see allowed_domains = ["dmoz.org"], but these two pages also contain links to other pages which are within the dmoz.org domain! Why doesn't it scrape them too?

Asked by DrStrangeLove on Jan 18 '12


4 Answers

The start_urls class attribute contains the start URLs, nothing more. If you have extracted URLs of other pages you want to scrape, yield the corresponding requests from the parse callback with [another] callback:

import urlparse

from scrapy import log
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider


class Spider(BaseSpider):
    name = 'my_spider'
    start_urls = ['http://www.domain.com/']
    allowed_domains = ['domain.com']

    def parse(self, response):
        '''Parse main page and extract categories links.'''
        hxs = HtmlXPathSelector(response)
        urls = hxs.select("//*[@id='tSubmenuContent']/a[position()>1]/@href").extract()
        for url in urls:
            url = urlparse.urljoin(response.url, url)
            self.log('Found category url: %s' % url)
            yield Request(url, callback=self.parseCategory)

    def parseCategory(self, response):
        '''Parse category page and extract links of the items.'''
        hxs = HtmlXPathSelector(response)
        links = hxs.select("//*[@id='_list']//td[@class='tListDesc']/a/@href").extract()
        for link in links:
            itemLink = urlparse.urljoin(response.url, link)
            self.log('Found item link: %s' % itemLink, log.DEBUG)
            yield Request(itemLink, callback=self.parseItem)

    def parseItem(self, response):
        ...

If you still want to customize how the start requests are created, override the BaseSpider.start_requests() method.
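
For example, a minimal sketch of such an override, using the same old BaseSpider API as above; the paging URL and spider name are invented placeholders:

from scrapy.http import Request
from scrapy.spider import BaseSpider


class MySpider(BaseSpider):
    name = 'my_spider'
    allowed_domains = ['domain.com']

    def start_requests(self):
        # Build the initial requests yourself instead of listing them
        # in start_urls.
        for page in range(1, 4):
            yield Request('http://www.domain.com/?page=%d' % page,
                          callback=self.parse)

    def parse(self, response):
        # Handle each downloaded page here.
        pass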

Answered by warvariuc


start_urls contains the links from which the spider starts crawling. If you want to crawl recursively, you should use CrawlSpider and define rules for it. See http://doc.scrapy.org/en/latest/topics/spiders.html for an example.
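
A rough sketch of such a CrawlSpider, using the scrapy.contrib import paths from the Scrapy versions these answers target; the allow pattern is an assumption, and the item fields reuse the question's Website item:

from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.selector import Selector

from dirbot.items import Website


class DmozCrawlSpider(CrawlSpider):
    name = "dmoz_crawl"
    allowed_domains = ["dmoz.org"]
    start_urls = ["http://www.dmoz.org/Computers/Programming/Languages/Python/"]

    rules = (
        # Follow in-domain links matching the pattern and parse each page.
        # Note: the callback must not be named "parse", because CrawlSpider
        # uses parse() internally.
        Rule(SgmlLinkExtractor(allow=r'/Computers/Programming/'),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        sel = Selector(response)
        for site in sel.xpath('//ul[@class="directory-url"]/li'):
            item = Website()
            item['name'] = site.xpath('a/text()').extract()
            item['url'] = site.xpath('a/@href').extract()
            yield item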

Answered by Mohit Gupta


The class does not define a rules attribute. Have a look at http://readthedocs.org/docs/scrapy/en/latest/intro/overview.html and search for "rules" to find an example.

Answered by Glenn


If you use BaseSpider, you have to extract the URLs you want yourself inside the callback and return Request objects for them.

If you use CrawlSpider, link extraction is taken care of by the rules and the SgmlLinkExtractor associated with them.

Answered by goh