Access session cookie in scrapy spiders

I am trying to access the session cookies within a spider. I first log in to a social network in a spider:

    def parse(self, response):

        return [FormRequest.from_response(response,
                formname='login_form',
                formdata={'email': '...', 'pass':'...'},
                callback=self.after_login)]

In after_login, I would like to access the session cookies in order to pass them to another module (selenium here) to further process the page with an authenticated session.

I would like something like this:

    def after_login(self, response):

        # process response
        .....

        # access the cookies of that session to access another URL in the
        # same domain with the authenticated session.
        # Something like:
        session_cookies = XXX.get_session_cookies()
        data = another_function(url, session_cookies)

Unfortunately, response.cookies does not return the session cookies.

How can I get the session cookies? I was looking at the cookies middleware (scrapy.contrib.downloadermiddleware.cookies) and at scrapy.http.cookies, but there doesn't seem to be any straightforward way to access the session cookies.
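
For reference, once I have such a dictionary of cookies, I plan to hand it to selenium roughly like this (just a sketch; the driver choice and the flow are illustrative, not part of my actual code):

    from selenium import webdriver

    def another_function(url, cookies):
        driver = webdriver.Firefox()
        # selenium only accepts cookies for the domain that is currently
        # loaded, so visit the site once before injecting them
        driver.get(url)
        for name, value in cookies.items():
            driver.add_cookie({'name': name, 'value': value})
        driver.get(url)  # reload, now with the authenticated session
        return driver.page_source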

Some more details about my original question:

Unfortunately, I used your idea but I didn't see the cookies, although I know for sure that they exist, since the scrapy.contrib.downloadermiddleware.cookies middleware does print out the cookies! These are exactly the cookies that I want to grab.

So here is what I am doing:

The after_login(self, response) method receives the response after proper authentication, and then I access a URL with the session data:

    # module-level imports this snippet relies on
    from scrapy.http import Request
    from scrapy.http.cookies import CookieJar

    def after_login(self, response):

        # testing to see if I can get the session cookies
        cookieJar = response.meta.setdefault('cookie_jar', CookieJar())
        cookieJar.extract_cookies(response, response.request)
        cookies_test = cookieJar._cookies
        print "cookies - test:", cookies_test

        # URL access with authenticated session
        url = "http://site.org/?id=XXXX"
        request = Request(url=url, callback=self.get_pict)
        return [request]

As the output below shows, there are indeed cookies, but I fail to capture them with cookieJar:

    cookies - test: {}
    2012-01-02 22:44:39-0800 [myspider] DEBUG: Sending cookies to: <GET http://www.facebook.com/profile.php?id=529907453>
        Cookie: xxx=3..........; yyy=34.............; zzz=.................; uuu=44..........

So I would like to get a dictionary containing the keys xxx, yyy, etc., with their corresponding values.

Thanks :)

asked Jan 03 '12 by mikolune

People also ask

Does Scrapy manage cookies automatically?

When your spider crawls a website, Scrapy automatically handles the cookie for you, storing and sending it in subsequent requests to the same site.
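
If you want to watch that happening, the middleware's behaviour is controlled by two project settings (a minimal settings.py excerpt; the "Sending cookies to:" debug lines quoted in the question come from the second one):

    # settings.py
    COOKIES_ENABLED = True   # default: store cookies and resend them per site
    COOKIES_DEBUG = True     # log every Cookie / Set-Cookie header handled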

What is a spider in Scrapy?

Spiders are classes which define how a certain site (or a group of sites) will be scraped, including how to perform the crawl (i.e. follow links) and how to extract structured data from their pages (i.e. scraping items).
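
A minimal spider, for illustration (the name and URL are placeholders):

    from scrapy.spider import BaseSpider

    class ExampleSpider(BaseSpider):
        name = 'example'                      # placeholder
        start_urls = ['http://example.com/']  # placeholder

        def parse(self, response):
            # called with each downloaded page; extract items here
            # or yield further Requests to follow links
            pass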


2 Answers

A classic example is a login server that provides a new session id after a successful login; this new session id should then be used with subsequent requests.

Here is the code, picked up from source, which seems to work for me. The key line pulls the session id value out of the first Set-Cookie header:

    print 'cookie from login', response.headers.getlist('Set-Cookie')[0].split(";")[0].split("=")[1]

Code:

    from scrapy.http import Request

    def check_logged(self, response):
        # take the first Set-Cookie header ("name=value; ...") and keep the value
        tmpCookie = response.headers.getlist('Set-Cookie')[0].split(";")[0].split("=")[1]
        print 'cookie from login', tmpCookie
        cookieHolder = dict(SESSION_ID=tmpCookie)

        #print response.body
        if "my name" in response.body:
            yield Request(url="<<new url for another server>>",
                          cookies=cookieHolder,
                          callback=self.next_callback)  # <<another function here>>
        else:
            print "login failed"
            return
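
Note that getlist('Set-Cookie')[0] only looks at the first cookie; when the server sets several at once, each one arrives in its own Set-Cookie header. A sketch (with a hypothetical helper name) that folds them all into a dict:

    def set_cookies_to_dict(response):
        # each Set-Cookie header looks like "name=value; Path=/; ...";
        # keep only the leading name=value pair of each header
        cookies = {}
        for header in response.headers.getlist('Set-Cookie'):
            name, _, value = header.split(';')[0].partition('=')
            cookies[name.strip()] = value
        return cookies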
answered Sep 27 '22 by Ravi Ramadoss


Maybe this is overkill, but I don't know how you are going to use those cookies, so it might be useful (an excerpt from real code; adapt it to your case):

    from scrapy.http import Request
    from scrapy.http.cookies import CookieJar
    from scrapy.spider import BaseSpider

    class MySpider(BaseSpider):

        def parse(self, response):

            # collect the cookies set by this response into a jar kept in meta
            cookieJar = response.meta.setdefault('cookie_jar', CookieJar())
            cookieJar.extract_cookies(response, response.request)

            # nextPageLink: the next URL to visit with the same session
            request = Request(nextPageLink, callback=self.parse2,
                              meta={'dont_merge_cookies': True, 'cookie_jar': cookieJar})
            cookieJar.add_cookie_header(request)  # apply the Cookie header ourselves
            return request

CookieJar has some useful methods (e.g. extract_cookies and add_cookie_header, used above).
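
For instance, to flatten the jar into the plain name/value dictionary the question asks for, one option is this sketch; it reaches into the same internal _cookies attribute the question's test code already touches, which is an implementation detail and may change between versions:

    def jar_to_dict(cookie_jar):
        # _cookies is nested as {domain: {path: {name: Cookie}}}
        flat = {}
        for paths in cookie_jar._cookies.values():
            for names in paths.values():
                for name, cookie in names.items():
                    flat[name] = cookie.value
        return flat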

If you still don't see the cookies, maybe they are not there?


UPDATE:

Looking at CookiesMiddleware code:

class CookiesMiddleware(object):
    def _debug_cookie(self, request, spider):
        if self.debug:
            cl = request.headers.getlist('Cookie')
            if cl:
                msg = "Sending cookies to: %s" % request + os.linesep
                msg += os.linesep.join("Cookie: %s" % c for c in cl)
                log.msg(msg, spider=spider, level=log.DEBUG)

So, try request.headers.getlist('Cookie')
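
Following that pointer: in the next callback, the Cookie header that the middleware attached to the outgoing request can be read back and split into the dictionary the question asks for. A sketch, reusing the get_pict callback name from the question and assuming the "name=value; name2=value2" layout shown in the debug output above:

    def get_pict(self, response):
        # the middleware already merged the session cookies into the
        # request that produced this response
        cookies = {}
        for header in response.request.headers.getlist('Cookie'):
            for pair in header.split(';'):
                name, _, value = pair.strip().partition('=')
                if name:
                    cookies[name] = value
        print "session cookies:", cookies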

answered Sep 27 '22 by warvariuc