
How to send cookie with scrapy CrawlSpider requests?

I am trying to create this Reddit scraper using Python's Scrapy framework.

I have used CrawlSpider to crawl through Reddit and its subreddits. But when I come across pages that have adult content, the site asks for a cookie over18=1.

So I have been trying to send a cookie with every request that the spider makes, but it's not working out.

Here is my spider code. As you can see, I tried to add a cookie to every spider request using the start_requests() method.

Could anyone here tell me how to do this? Or what I have been doing wrong?

from scrapy import Spider
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from reddit.items import RedditItem
from scrapy.http import Request, FormRequest

class MySpider(CrawlSpider):
    name = 'redditscraper'
    allowed_domains = ['reddit.com', 'imgur.com']
    start_urls = ['https://www.reddit.com/r/nsfw']

    rules = (
        Rule(LinkExtractor(
            allow=[r'/r/nsfw/\?count=\d*&after=\w*']),
            callback='parse_item',
            follow=True),
    )

    def start_requests(self):
        for i, url in enumerate(self.start_urls):
            print(url)
            yield Request(url, cookies={'over18': '1'}, callback=self.parse_item)

    def parse_item(self, response):
        titleList = response.css('a.title')

        for title in titleList:
            item = RedditItem()
            item['url'] = title.xpath('@href').extract()
            item['title'] = title.xpath('text()').extract()
            yield item
asked Sep 17 '15 by Parthapratim Neog

People also ask

How do you use Scrapy request?

Scrapy crawls websites using Request and Response objects. Request objects are generated in the spider, pass through the system to the downloader, which executes them, and the resulting Response objects travel back to the spider's callback.

How do I make a Scrapy request?

Making a request is a straightforward process in Scrapy. To generate a request, you need the URL of the webpage from which you want to extract useful data. You also need a callback function. The callback function is invoked when there is a response to the request.
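
For example, a minimal sketch of that pattern (the spider name, start URL, and the a.title selector are placeholders for illustration, not taken from the question):

import scrapy

class ExampleSpider(scrapy.Spider):
    name = 'request_example'
    start_urls = ['https://www.reddit.com/r/pics']

    def parse(self, response):
        # pick a link to follow; parse_detail is the callback Scrapy
        # invokes once the response for that URL comes back
        href = response.css('a.title::attr(href)').extract_first()
        if href:
            yield scrapy.Request(response.urljoin(href),
                                 callback=self.parse_detail)

    def parse_detail(self, response):
        yield {'title': response.css('title::text').extract_first()}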

How do you pass meta in Scrapy?

Essentially, I had to connect to the database, get the url and product_id, then scrape the URL while passing along its product id. All of this had to be done in start_requests, because that is the function Scrapy invokes to request URLs, and it has to yield (or return) Request objects. The data itself rides along in the request's meta dictionary and comes back on response.meta in the callback.
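
A minimal sketch of that pattern (the URLs, product ids, and spider name are made up for illustration):

import scrapy

class ProductSpider(scrapy.Spider):
    name = 'meta_example'

    def start_requests(self):
        # stand-in for rows fetched from a database: (url, product_id) pairs
        rows = [('https://example.com/item/1', 1),
                ('https://example.com/item/2', 2)]
        for url, product_id in rows:
            # anything placed in meta travels with the request and is
            # available again on the response inside the callback
            yield scrapy.Request(url, callback=self.parse_product,
                                 meta={'product_id': product_id})

    def parse_product(self, response):
        yield {'product_id': response.meta['product_id'],
               'url': response.url}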

What does Scrapy request return?

Scrapy uses Request and Response objects for crawling web sites. Typically, Request objects are generated in the spiders and pass across the system until they reach the Downloader, which executes the request and returns a Response object which travels back to the spider that issued the request.


2 Answers

Okay. Try doing something like this.

def start_requests(self):
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.85 Safari/537.36'}
    for i, url in enumerate(self.start_urls):
        yield Request(url, cookies={'over18': '1'}, callback=self.parse_item, headers=headers)

It's the User-Agent that's blocking you.
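
If you'd rather not repeat the header on every request, the same User-Agent can also be set once for the whole project in settings.py (a sketch; the UA string is just an example browser string):

# settings.py -- applied by default to every request the project sends
USER_AGENT = ('Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 '
              '(KHTML, like Gecko) Chrome/45.0.2454.85 Safari/537.36')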

Edit:

I don't know what's wrong with CrawlSpider, but a plain Spider works anyway.

#!/usr/bin/env python
# encoding: utf-8
import scrapy


class MySpider(scrapy.Spider):
    name = 'redditscraper'
    allowed_domains = ['reddit.com', 'imgur.com']
    start_urls = ['https://www.reddit.com/r/nsfw']

    def request(self, url, callback):
        """
        Wrapper around scrapy.Request that attaches the over18 cookie
        and a browser User-Agent to every request.
        """
        request = scrapy.Request(url=url, callback=callback)
        request.cookies['over18'] = '1'
        request.headers['User-Agent'] = (
            'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, '
            'like Gecko) Chrome/45.0.2454.85 Safari/537.36')
        return request

    def start_requests(self):
        for i, url in enumerate(self.start_urls):
            yield self.request(url, self.parse_item)

    def parse_item(self, response):
        titleList = response.css('a.title')

        for title in titleList:
            item = {}
            item['url'] = title.xpath('@href').extract()
            item['title'] = title.xpath('text()').extract()
            yield item
        # follow the "next" pagination link, reusing the cookie/UA wrapper
        url = response.xpath('//a[@rel="nofollow next"]/@href').extract_first()
        if url:
            yield self.request(url, self.parse_item)
        # you may consider scrapy.pipelines.images.ImagesPipeline :D
answered Oct 17 '22 by esfy

From the Scrapy docs, cookies can be passed to a Request in two ways:

1. Using a dict:

request_with_cookies = Request(url="http://www.example.com",
                               cookies={'currency': 'USD', 'country': 'UY'})

2. Using a list of dicts:

request_with_cookies = Request(url="http://www.example.com",
                               cookies=[{'name': 'currency',
                                        'value': 'USD',
                                        'domain': 'example.com',
                                        'path': '/currency'}])
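
Applied to the question, the dict form maps straight onto the over18 cookie; a rough sketch of what that would look like inside the spider's start_requests:

def start_requests(self):
    # dict form from the docs, applied to the question's over18 cookie
    yield Request(url='https://www.reddit.com/r/nsfw',
                  cookies={'over18': '1'},
                  callback=self.parse_item)

Setting COOKIES_DEBUG = True in settings.py makes Scrapy log the Cookie headers it sends and receives, which is a quick way to confirm the cookie is actually attached.
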
answered Oct 17 '22 by CTD