
recursive crawling with Python and Scrapy

I'm using Scrapy to crawl a site. The site has 15 listings per page and then a next button. I am running into an issue where my Request for the next link is being called before I have finished parsing all of my listings in the pipeline. Here is the code for my spider:

import urlparse

from scrapy.contrib.spiders import CrawlSpider
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
# plus the import for the project's MySiteLoader item loader


class MySpider(CrawlSpider):
    name = 'mysite.com'
    allowed_domains = ['mysite.com']
    start_url = 'http://www.mysite.com/'

    def start_requests(self):
        return [Request(self.start_url, callback=self.parse_listings)]

    def parse_listings(self, response):
        hxs = HtmlXPathSelector(response)
        listings = hxs.select('...')

        for listing in listings:
            il = MySiteLoader(selector=listing)
            il.add_xpath('Title', '...')
            il.add_xpath('Link', '...')

            item = il.load_item()
            listing_url = listing.select('...').extract()

            if listing_url:
                yield Request(urlparse.urljoin(response.url, listing_url[0]),
                              meta={'item': item},
                              callback=self.parse_listing_details)

        next_page_url = hxs.select('descendant::div[@id="pagination"]/'
                                   'div[@class="next-link"]/a/@href').extract()
        if next_page_url:
            yield Request(urlparse.urljoin(response.url, next_page_url[0]),
                          callback=self.parse_listings)


    def parse_listing_details(self, response):
        hxs = HtmlXPathSelector(response)
        item = response.request.meta['item']
        details = hxs.select('...')
        il = MySiteLoader(selector=details, item=item)

        il.add_xpath('Posted_on_Date', '...')
        il.add_xpath('Description', '...')
        return il.load_item()

These lines are the problem. Like I said before, they are being executed before the spider has finished crawling the current page. On every page of the site, this causes only 3 out of 15 of my listings to be sent to the pipeline.

        if next_page_url:
            yield Request(urlparse.urljoin(response.url, next_page_url[0]),
                          callback=self.parse_listings)

This is my first spider, and this might be a design flaw on my part. Is there a better way to do this?

asked Mar 08 '11 by imns


1 Answer

Scrape instead of spider?

Because your problem involves stepping through a consecutive, uniformly structured set of pages rather than a tree of content of unknown size, you could instead use mechanize (http://wwwsearch.sourceforge.net/mechanize/) and BeautifulSoup (http://www.crummy.com/software/BeautifulSoup/).

Here's an example of instantiating a browser with mechanize. Using br.follow_link(text="foo") also means that, unlike the XPath in your example, the link will still be followed regardless of the structure of its ancestor elements. With an XPath that is coupled to the page layout, your script breaks as soon as they update their HTML; a looser coupling will save you some maintenance:

import cookielib
import mechanize

br = mechanize.Browser()
br.set_handle_equiv(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)
# Set all extra headers in one list; reassigning br.addheaders would
# overwrite the previous value each time.
br.addheaders = [
    ('User-agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:9.0.1) Gecko/20100101 Firefox/9.0.1'),
    ('Accept-Language', 'en-US'),
    ('Accept-Encoding', 'gzip, deflate'),
]
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)
br.open("http://amazon.com")
br.follow_link(text="Today's Deals")
print br.response().read()

Also, the "next 15" href probably contains something indicating the pagination offset, e.g. &index=15. If the total number of items across all pages is available on the first page, then:

from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3

soup = BeautifulSoup(br.response().read())
# Example id; use whatever element the site exposes the total count under.
totalItems = soup.findAll(id="results-count-total")[0].text
# Offsets of the first item on each page: 0, 15, 30, ...
startVar = [x for x in range(int(totalItems)) if x % 15 == 0]

Then just iterate over startVar, build each URL by appending the startVar value, br.open() it, and scrape the data. That way you don't have to programmatically "find" the "next" link on each page and click it to advance; you already know all the valid URLs up front. Minimizing code-driven manipulation of the page to only the data you need will speed up your extraction.
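As a rough sketch of that loop (reusing the br and startVar from the snippets above; the ?index= URL pattern and the "listing" class are placeholders you would swap for whatever the real site uses):

# Assumes `br` and `startVar` from the snippets above.
# The URL pattern and the "listing" class are hypothetical placeholders.
base_url = "http://www.mysite.com/listings?index=%d"

for start in startVar:
    br.open(base_url % start)                      # fetch one page of 15 listings
    soup = BeautifulSoup(br.response().read())
    for listing in soup.findAll("div", {"class": "listing"}):
        title = listing.find("a").text             # pull whatever fields you need
        link = listing.find("a")["href"]
        print title, link                          # or hand the fields to your pipeline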

answered Oct 03 '22 by Tony