 

Using Scrapy with authenticated (logged in) user session

Tags:

python

scrapy

In the Scrapy docs, there is the following example to illustrate how to use an authenticated session in Scrapy:

from scrapy.spider import BaseSpider
from scrapy.http import FormRequest
from scrapy import log

class LoginSpider(BaseSpider):
    name = 'example.com'
    start_urls = ['http://www.example.com/users/login.php']

    def parse(self, response):
        return [FormRequest.from_response(response,
                    formdata={'username': 'john', 'password': 'secret'},
                    callback=self.after_login)]

    def after_login(self, response):
        # check that login succeeded before going on
        if "authentication failed" in response.body:
            self.log("Login failed", level=log.ERROR)
            return

        # continue scraping with authenticated session...

I've got that working, and it's fine. But my question is: what do you have to do to continue scraping with an authenticated session, as the comment on the last line says?

Herman Schaaf asked May 01 '11 19:05

1 Answer

In the code above, the FormRequest that is being used to authenticate has the after_login function set as its callback. This means that the after_login function will be called and passed the page that the login attempt got as a response.

It then checks that you logged in successfully by searching the page for a specific string, in this case "authentication failed". If that string is found, the spider ends.
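That check can be wrapped in a small helper to keep after_login readable; a minimal sketch, where the marker string is just the site-specific example from the docs snippet (the helper name is made up):

```python
def login_succeeded(body):
    # Site-specific failure marker; "authentication failed" is just
    # the example string used in the snippet above.
    return "authentication failed" not in body
```

In practice, matching on a string that only appears when you *are* logged in (e.g. a "Log out" link) tends to be more robust than matching on a failure message.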

Now, once the spider has got this far, it knows that it has successfully authenticated, and you can start spawning new requests and/or scrape data. So, in this case:

from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from scrapy import log

# ...

def after_login(self, response):
    # check that login succeeded before going on
    if "authentication failed" in response.body:
        self.log("Login failed", level=log.ERROR)
        return
    else:
        # We've successfully authenticated, let's have some fun!
        return Request(url="http://www.example.com/tastypage/",
                       callback=self.parse_tastypage)

def parse_tastypage(self, response):
    hxs = HtmlXPathSelector(response)
    yum = hxs.select('//img')

    # etc.

If you look here, there's an example of a spider that authenticates before scraping.

In this case, it handles things in the parse function (the default callback of any request).

def parse(self, response):
    hxs = HtmlXPathSelector(response)
    if hxs.select("//form[@id='UsernameLoginForm_LoginForm']"):
        return self.login(response)
    else:
        return self.get_section_links(response)

So, whenever a request is made, the response is checked for the presence of the login form. If the form is there, we know we need to log in, so we call the relevant function; if it is not, we call the function responsible for scraping the data from the response.
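The same dispatch can be pictured without Scrapy at all; a toy sketch where a plain substring test stands in for the XPath query (the form id comes from the snippet above, everything else is hypothetical):

```python
def choose_callback(body):
    # Mirrors the parse() dispatch: a login form in the response
    # means we still need to authenticate; otherwise scrape.
    if "UsernameLoginForm_LoginForm" in body:
        return "login"
    return "get_section_links"
```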

I hope this is clear, feel free to ask if you have any other questions!


Edit:

Okay, so you want to do more than just spawn a single request and scrape it. You want to follow links.

To do that, all you need to do is scrape the relevant links from the page, and spawn requests using those URLs. For example:

def parse_page(self, response):
    """ Scrape useful stuff from page, and spawn new requests """
    hxs = HtmlXPathSelector(response)
    images = hxs.select('//img')
    # .. do something with them
    links = hxs.select('//a/@href').extract()  # .extract() gives strings, not selectors

    # Yield a new request for each link we found
    for link in links:
        yield Request(url=link, callback=self.parse_page)

As you can see, it spawns a new request for every URL on the page, and each one of those requests will call this same function with their response, so we have some recursive scraping going on.
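One practical wrinkle: //a/@href frequently yields relative paths, while Request wants absolute URLs. In the Python 2 era of this code that meant urlparse.urljoin; a minimal sketch in modern Python (the helper name is made up):

```python
from urllib.parse import urljoin

def absolutize(base_url, hrefs):
    # Resolve each scraped href against the page it was found on.
    return [urljoin(base_url, href) for href in hrefs]
```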

What I've written above is just an example. If you want to "crawl" pages, you should look into CrawlSpider rather than doing things manually.
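The bookkeeping CrawlSpider takes off your hands, following extracted links and not re-requesting pages you have already seen, can be pictured as a toy breadth-first crawl over a hypothetical in-memory link graph:

```python
def crawl(start_url, link_graph):
    # link_graph maps url -> list of urls found on that page
    # (a stand-in for actually fetching and scraping the page).
    visited, queue = set(), [start_url]
    while queue:
        url = queue.pop(0)
        if url in visited:
            continue  # Scrapy's duplicate filter does this for real requests
        visited.add(url)
        queue.extend(link_graph.get(url, []))
    return visited
```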

Acorn answered Oct 01 '22 02:10