
Scrapy CrawlSpider for AJAX content

I am attempting to crawl a site for news articles. My start_url contains:

(1) links to each article: http://example.com/symbol/TSLA

and

(2) a "More" button that makes an AJAX call that dynamically loads more articles within the same start_url: http://example.com/account/ajax_headlines_content?type=in_focus_articles&page=0&slugs=tsla&is_symbol_page=true

One parameter to the AJAX call is "page", which is incremented each time the "More" button is clicked. For example, clicking "More" once will load an additional n articles and update the page parameter in the "More" button's onClick event, so that the next time "More" is clicked, "page" 2 of articles will be loaded (assuming "page" 0 was loaded initially and "page" 1 was loaded on the first click).
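
For reference, the successive AJAX requests differ only in the value of the page query parameter. A quick sketch using the URL template above (just the same endpoint with page incremented):

# Sketch: each "More" click re-requests the same endpoint with an incremented page value.
AJAX_TEMPLATE = ('http://example.com/account/ajax_headlines_content'
                 '?type=in_focus_articles&page={page}&slugs=tsla&is_symbol_page=true')

print(AJAX_TEMPLATE.format(page=0))  # initial load
print(AJAX_TEMPLATE.format(page=1))  # after the first "More" click
print(AJAX_TEMPLATE.format(page=2))  # after the second "More" click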

For each "page" I would like to scrape the contents of each article using Rules, but I do not know how many "pages" there are and I do not want to choose some arbitrary m (e.g., 10k). I can't seem to figure out how to set this up.

Following this question, Scrapy Crawl URLs in Order, I have tried building a list of candidate URLs, but I can't determine how and where, in a CrawlSpider, to send the next URL from the pool after the previous URL has been parsed and confirmed to contain news links. My Rules send responses to a parse_item callback, where the article contents are parsed.

Is there a way to inspect the contents of the links page (similar to the BaseSpider example) before the Rules are applied and parse_item is called, so that I know when to stop crawling?

Simplified code (I removed several of the fields I'm parsing for clarity):

# Imports for the Scrapy version in use at the time (pre-1.0, SgmlLinkExtractor API):
from scrapy import log
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
from scrapy.http import Request

from myproject.items import NewsItem  # NewsItem lives in the project's items module (path assumed)


class ExampleSite(CrawlSpider):

    name = "so"
    download_delay = 2

    more_pages = True
    current_page = 0

    allowed_domains = ['example.com']

    start_urls = ['http://example.com/account/ajax_headlines_content?type=in_focus_articles&page=0'+
                      '&slugs=tsla&is_symbol_page=true']

    ##could also use
    ##start_urls = ['http://example.com/symbol/tsla']

    ajax_urls = []
    for i in range(1, 1000):
        ajax_urls.append('http://example.com/account/ajax_headlines_content?type=in_focus_articles&page='+str(i)+
                      '&slugs=tsla&is_symbol_page=true')

    rules = (
             Rule(SgmlLinkExtractor(allow=('/symbol/tsla', ))),
             Rule(SgmlLinkExtractor(allow=('/news-article.*tesla.*', '/article.*tesla.*', )), callback='parse_item')
            )

    ##need something like this??
    ##override parse?
    ## if response.body == 'no results':
        ## self.more_pages = False
        ## ##stop crawler??
    ## else:
        ## self.current_page = self.current_page + 1
        ## yield Request(self.ajax_urls[self.current_page], callback=self.parse_start_url)


    def parse_item(self, response):

        self.log("Scraping: %s" % response.url, level=log.INFO)

        hxs = Selector(response)

        item = NewsItem()

        item['url'] = response.url
        item['source'] = 'example'
        item['title'] = hxs.xpath('//title/text()')
        item['date'] = hxs.xpath('//div[@class="article_info_pos"]/span/text()')

        yield item
asked May 16 '14 by BadgerBadgerBadger


1 Answer

CrawlSpider may be too limited for your purposes here. If you need a lot of custom logic, you are usually better off inheriting from Spider.

Scrapy provides a CloseSpider exception that can be raised when you need to stop parsing under certain conditions. The page you are crawling returns the message "There are no Focus articles on your stocks" once you go past the last page, so you can check for that message and stop the iteration when it appears.
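
On its own, that stopping condition is just a check-and-raise in the parse callback. A minimal sketch of only the stop logic used in the full spider below (the class name here is just for illustration):

from scrapy.spider import Spider
from scrapy.exceptions import CloseSpider

class StopOnEmptySpider(Spider):
    """Minimal sketch: close the spider as soon as the 'empty results' marker appears."""
    name = "stop_on_empty"
    start_urls = ['http://example.com/account/ajax_headlines_content?type=in_focus_articles&page=0&slugs=tsla&is_symbol_page=true']

    def parse(self, response):
        # The site returns this message once there are no more article pages.
        if "There are no Focus articles on your stocks." in response.body:
            raise CloseSpider(reason="no more pages to parse")
        # otherwise: extract the article links and yield a Request for the next page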

In your case you can go with something like this:

from urlparse import urljoin  # Python 2, as in the era of this code; urllib.parse on Python 3

from scrapy import log
from scrapy.spider import Spider
from scrapy.selector import Selector
from scrapy.http import Request
from scrapy.exceptions import CloseSpider

from myproject.items import NewsItem  # NewsItem lives in the project's items module (path assumed)

class ExampleSite(Spider):
    name = "so"
    download_delay = 0.1

    more_pages = True
    next_page = 1

    start_urls = ['http://example.com/account/ajax_headlines_content?type=in_focus_articles&page=0'+
                      '&slugs=tsla&is_symbol_page=true']

    allowed_domains = ['example.com']

    def create_ajax_request(self, page_number):
        """
        Helper function to create ajax request for next page.
        """
        ajax_template = 'http://example.com/account/ajax_headlines_content?type=in_focus_articles&page={pagenum}&slugs=tsla&is_symbol_page=true'

        url = ajax_template.format(pagenum=page_number)
        return Request(url, callback=self.parse)

    def parse(self, response):
        """
        Parsing of each page.
        """
        if "There are no Focus articles on your stocks." in response.body:
            self.log("About to close spider", log.WARNING)
            raise CloseSpider(reason="no more pages to parse")


        # there is some content; extract links to the articles
        sel = Selector(response)
        links_xpath = "//div[@class='symbol_article']/a/@href"
        links = sel.xpath(links_xpath).extract()
        for link in links:
            url = urljoin(response.url, link)
            # follow link to article
            # commented out to see how pagination works
            #yield Request(url, callback=self.parse_item)

        # generate request for the next page: yield first, then advance the counter
        # (incrementing before yielding would skip page 1, since page 0 is the start URL)
        yield self.create_ajax_request(self.next_page)
        self.next_page += 1

    def parse_item(self, response):
        """
        Parsing of each article page.
        """
        self.log("Scraping: %s" % response.url, level=log.INFO)

        hxs = Selector(response)

        item = NewsItem()

        item['url'] = response.url
        item['source'] = 'example'
        item['title'] = hxs.xpath('//title/text()')
        item['date'] = hxs.xpath('//div[@class="article_info_pos"]/span/text()')

        yield item
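
As a usage note: assuming this spider sits inside a regular Scrapy project, it can be run from the command line (the output file name is just an example):

scrapy crawl so -o articles.json

Once you have confirmed that the pagination stops at the "no Focus articles" page, re-enable the commented-out yield Request(url, callback=self.parse_item) line so that each article page is actually scraped into NewsItem objects.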

answered by Pawel Miech