Scrapy: how can a spider return a value to another spider?

The website that I am crawling contains many players, and when I click on any player, I can go to his page.

The website structure is like this:

<main page>
<link to player 1>
<link to player 2>
<link to player 3>
..
..
..
<link to player n>
</main page>

And when I click on any link, I go to the player's page, which is like this:

<player name>
<player team>
<player age>
<player salary>
<player date>

I want to scrape all the players whose age is between 20 and 25 years.

What I am doing

  1. Scrape the main page using the first spider.

  2. Get the player links using the first spider.

  3. Crawl each link using the second spider.

  4. Get the player information using the second spider.

  5. Save this information in a JSON file using a pipeline (see the sketch after this list).
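
For reference, a minimal sketch of step 5, modeled on the JSON item pipeline example from the Scrapy documentation (the players.jl file name and the pipeline class name are assumptions):

import json

class JsonWriterPipeline(object):
    # open the output file when the spider starts
    def open_spider(self, spider):
        self.file = open('players.jl', 'w')

    # close it when the spider finishes
    def close_spider(self, spider):
        self.file.close()

    # write every scraped player item as one JSON line
    def process_item(self, item, spider):
        self.file.write(json.dumps(dict(item)) + '\n')
        return item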

My question

How can I return the date value from the second spider to the first spider?

What I have tried

I built my own middleware and overrode process_spider_output. It allows me to print the requests, but I don't know what else I should do in order to return that date value to my first spider.
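
For context, such a spider middleware looks roughly like this (the class name is an assumption). process_spider_output sees everything the spider's callbacks yield, which is why it can print the requests, but it can only pass them downstream; it has no channel back into another callback:

class PrintingSpiderMiddleware(object):
    # called with the requests/items that a spider callback yielded
    def process_spider_output(self, response, result, spider):
        for request_or_item in result:
            print(request_or_item)   # we can inspect things here...
            yield request_or_item    # ...but only pass them on, not back to parse()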

any help is appreciated

Edit

Here is some of the code:

def parse(self, response):
    sel = Selector(response)
    container = sel.css('div[MyDiv]')
    for player in container:
        # extract LINK and TITLE from the player element (selectors omitted)
        yield Request(LINK, meta={'Title': TITLE}, callback=self.parsePlayer)

def parsePlayer(self, response):
    player = PlayerItem()
    # extract DATE into the item (selectors omitted)
    return player

I gave you the general code, not the very specific details, in order to keep it easy to follow.

asked Feb 07 '14 by Marco Dinatsoli


1 Answer

You want to discard players outside a range of dates

All you need to do is check the date in parsePlayer and yield only the relevant players.

def parsePlayer(self, response):
    player = PlayerItem()
    # extract DATE into the item (selectors omitted)
    if DATE == some_criteria:
        yield player
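
Concretely, for the 20-25 age range from the question, the criteria check could look like this (the 'age' item field is an assumption):

def parsePlayer(self, response):
    player = PlayerItem()
    # extract AGE and the other fields (selectors omitted)
    player['age'] = AGE
    # keep only players aged 20 to 25
    if 20 <= player['age'] <= 25:
        yield player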

You want to scrape every link in order and stop when some date is reached

For example, if you have performance issues (you are scraping far too many links and you don't need the ones past some limit).

Given that Scrapy works with asynchronous requests, there is no really good way to do that. The only option you have is to force linear behavior instead of the default parallel requests.

Let me explain. With two callbacks like that, the default behavior is that Scrapy first parses the first page (the main page) and puts all the requests for the player pages in its queue. Without waiting for that first page to finish being scraped, it starts processing these requests for the player pages (not necessarily in the order it found them).

Therefore, by the time you get the information that player page p is out of date, Scrapy has already sent internal requests for p+1, p+2, ..., p+m (m is basically a random number) AND has probably started processing some of those requests. Possibly even p+1 before p (no fixed order, remember).

So there is no way to stop at exactly the right page if you keep this pattern, and no way to interact with parse from parsePlayer.

What you can do is force it to follow the links in order, so that you have full control. The drawback is a big toll on performance: if Scrapy follows each link one after the other, it can't process them simultaneously as it usually does, and everything slows down.

The code could be something like:

def parse(self, response):
    sel = Selector(response)
    self.container = sel.css('div[MyDiv]')
    yield self.increment(0)

# Builds the request for player n°index
def increment(self, index):
    player = self.container[index]  # select current player
    # extract LINK and TITLE from the player element (selectors omitted)
    return Request(LINK, meta={'Title': TITLE, 'index': index},
                   callback=self.parsePlayer)

def parsePlayer(self, response):
    player = PlayerItem()
    # extract DATE into the item (selectors omitted)
    yield player

    if DATE == some_criteria:
        index = response.meta['index'] + 1
        if index < len(self.container):  # stop once every player is handled
            yield self.increment(index)

That way Scrapy will get the main page, then the first player, then the second player, and so on, one at a time, until it finds a date that doesn't fit the criteria. At that point parsePlayer yields no follow-up request and the spider stops.

This gets a little more complex if you also have to increment the index of the main page (if there are n main pages, for example), but the idea stays the same, as the sketch below shows.
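
A minimal sketch of that extension, assuming a hypothetical NEXT_MAIN_PAGE_URL extracted from the current main page:

def parsePlayer(self, response):
    player = PlayerItem()
    # extract DATE into the item (selectors omitted)
    yield player

    if DATE == some_criteria:
        index = response.meta['index'] + 1
        if index < len(self.container):
            # more players left on the current main page
            yield self.increment(index)
        else:
            # hypothetical: restart the whole cycle on the next main page
            yield Request(NEXT_MAIN_PAGE_URL, callback=self.parse)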

answered Oct 05 '22 by Robin