Scrapy: USER_AGENT and ROBOTSTXT_OBEY are properly set, but I still get error 403

Hello, and thanks in advance for any help or direction you can offer. This is my scraper:

import scrapy    
class RakutenSpider(scrapy.Spider):
    name = "rak"
    allowed_domains = ["rakuten.com"]
    start_urls = ['https://www.rakuten.com/deals?omadtrack=hp_deals_viewmore']
    def parse(self, response):
        for sel in response.xpath('//div[@class="page-bottom"]/div'):
            yield {
                'titles': sel.xpath("//div[@class='slider-prod-title']").extract_first(),
                'prices': sel.xpath("//span[@class='price-bold']").extract_first(),
                'images': sel.xpath("//div[@class='deal-img']/img").extract_first()
            }

And this is part of my settings.py

USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'
CONCURRENT_REQUESTS = 1
DOWNLOAD_DELAY = 5
# Obey robots.txt rules
ROBOTSTXT_OBEY = 'False'

And this is part of the log:

DEBUG: Crawled (403) <GET https://www.rakuten.com/deals?omadtrack=hp_deals_viewmore> (referer: None)

I have tried almost all of the solutions I found on Stack Overflow.


Log file: this is a new log, after installing the Firefox driver. Now I get ERROR: Error downloading <GET https://www.rakuten.com/deals?omadtrack=hp_deals_viewmore>

2017-11-17 00:38:45 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: scrapybot)
2017-11-17 00:38:45 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'deals.spiders', 'CONCURRENT_REQUESTS': 1, 'SPIDER_MODULES': ['deals.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36', 'TELNETCONSOLE_ENABLED': False, 'DOWNLOAD_DELAY': 5}
2017-11-17 00:38:45 [py.warnings] WARNING: :0: UserWarning: You do not have a working installation of the service_identity module: 'No module named cryptography.x509'.  Please install it from <https://pypi.python.org/pypi/service_identity> and make sure all of its dependencies are satisfied.  Without the service_identity module and a recent enough pyOpenSSL to support it, Twisted can perform only rudimentary TLS client hostname verification.  Many valid certificate/hostname mappings may be rejected.

2017-11-17 00:38:45 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.corestats.CoreStats']
2017-11-17 00:38:45 [scrapy.middleware] INFO: Enabled downloader middlewares:
['deals.middlewares.JSMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-11-17 00:38:45 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-11-17 00:38:45 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-11-17 00:38:45 [scrapy.core.engine] INFO: Spider opened
2017-11-17 00:38:45 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-11-17 00:38:45 [scrapy.core.scraper] ERROR: Error downloading <GET https://www.rakuten.com/deals?omadtrack=hp_deals_viewmore>
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1128, in _inlineCallbacks
    result = g.send(result)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/middleware.py", line 37, in process_request
    response = yield method(request=request, spider=spider)
  File "/home/seealldeals/tmp/scrapy/deals/deals/middlewares.py", line 63, in process_request
    driver = webdriver.Firefox()
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/firefox/webdriver.py", line 144, in __init__
    self.service.start()
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/common/service.py", line 74, in start
    stdout=self.log_file, stderr=self.log_file)
  File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
    raise child_exception
OSError: [Errno 8] Exec format error
2017-11-17 00:38:45 [scrapy.core.engine] INFO: Closing spider (finished)
2017-11-17 00:38:45 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 1,
 'downloader/exception_type_count/exceptions.OSError': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 11, 17, 5, 38, 45, 328366),
 'log_count/ERROR': 1,
 'log_count/INFO': 7,
 'log_count/WARNING': 1,
 'memusage/max': 33509376,
 'memusage/startup': 33509376,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2017, 11, 17, 5, 38, 45, 112667)}
2017-11-17 00:38:45 [scrapy.core.engine] INFO: Spider closed (finished)
1 Answer

What's wrong

  • rakuten.com integrates Google Analytics, which includes anti-bot detection.
  • If your client cannot execute rakuten.com's analytics.js properly, the site blocks you and responds with a 403 status code.
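  • A quick way to confirm this outside Scrapy (an illustrative sketch, not part of the original answer): fetch the page with plain urllib2 and the same browser User-Agent; the 403 persists, so headers alone are not the issue.

      import urllib2

      # Same browser User-Agent as in settings.py; the request is still
      # rejected, which points at JavaScript-based detection rather than
      # a header problem.
      headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                               'AppleWebKit/537.36 (KHTML, like Gecko) '
                               'Chrome/61.0.3163.100 Safari/537.36'}
      req = urllib2.Request('https://www.rakuten.com/deals?omadtrack=hp_deals_viewmore',
                            headers=headers)
      try:
          urllib2.urlopen(req)
      except urllib2.HTTPError as e:
          print e.code  # prints 403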

How to fix it

Use a JavaScript rendering technique

  • Solution 1: integrate Scrapy with scrapy-splash

    • Here is the scrapy-splash GitHub repository
    • Install scrapy-splash from PyPI:

      pip install scrapy-splash
      
    • Install Docker on your machine
    • Run a Splash container:

      docker run -p 8050:8050 scrapinghub/splash
      
    • Add the following line to your settings.py, pointing at the address where your Splash instance is listening. With the Docker command above running locally, that is usually http://localhost:8050 (the often-quoted 192.168.59.103 is the old boot2docker default):

      SPLASH_URL = 'http://localhost:8050'
      
    • Append the Splash downloader middlewares to your settings.py:

      DOWNLOADER_MIDDLEWARES = {
          'scrapy_splash.SplashCookiesMiddleware': 723,
          'scrapy_splash.SplashMiddleware': 725,
          'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
      }   
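
    • The scrapy-splash README also recommends a deduplication spider middleware and a Splash-aware dupe filter; per that README, add:

      SPIDER_MIDDLEWARES = {
          'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
      }
      DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'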
      
    • Change your spider's code to:

      import scrapy
      from scrapy_splash import SplashRequest


      class RakutenSpider(scrapy.Spider):
          name = "rak"
          allowed_domains = ["rakuten.com"]
          start_urls = ['https://www.rakuten.com/deals?omadtrack=hp_deals_viewmore']

          def start_requests(self):
              for url in self.start_urls:
                  # Let Splash render the page's JavaScript before responding
                  yield SplashRequest(url, self.parse, args={'wait': 0.5})

          def parse(self, response):
              for sel in response.xpath('//div[@class="page-bottom"]/div'):
                  # The leading ".//" keeps each XPath relative to `sel`;
                  # a bare "//" re-searches the whole document on every pass
                  yield {
                      'titles': sel.xpath(".//div[@class='slider-prod-title']//text()").extract_first(),
                      'prices': sel.xpath(".//span[@class='price-bold']/text()").extract_first(),
                      'images': sel.xpath(".//div[@class='deal-img']/img/@src").extract_first()
                  }
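
    • If the fixed 0.5 s wait turns out to be too short for the deals grid to render, Splash's execute endpoint can run a small Lua script instead; a sketch (the script and its 2-second wait are illustrative, not part of the original answer):

      # Inside start_requests, replacing the simple wait-based request:
      script = """
      function main(splash, args)
        splash:go(args.url)
        splash:wait(2.0)
        return splash:html()
      end
      """

      yield SplashRequest(url, self.parse, endpoint='execute',
                          args={'lua_source': script})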
      
  • Solution 2: integrate Scrapy with Selenium WebDriver as a downloader middleware

    • Selenium WebDriver Python bindings documentation
    • Install Selenium from PyPI:

      pip install selenium
      
    • If you want to use Firefox, install Mozilla's geckodriver somewhere on your PATH.
      • Download Mozilla geckodriver here
      • Note: the OSError: [Errno 8] Exec format error in your log is the classic symptom of a geckodriver binary that does not match your platform's architecture, or of putting the downloaded archive (rather than the extracted executable) on PATH.
    • If you want to use Chrome, install ChromeDriver somewhere on your PATH.
      • Download ChromeDriver here
    • If you want to use PhantomJS (note: it is no longer maintained), install it from Homebrew on macOS:

         brew install phantomjs
      
    • Add a JSMiddleware class to your middlewares.py:

          from scrapy.http import HtmlResponse
          from selenium import webdriver


          class JSMiddleware(object):
              def process_request(self, request, spider):
                  driver = webdriver.Firefox()
                  try:
                      # Let the real browser execute the page's JavaScript
                      driver.get(request.url)
                      body = driver.page_source
                      url = driver.current_url
                  finally:
                      # Always quit, or every request leaks a Firefox process
                      driver.quit()
                  return HtmlResponse(url, body=body, encoding='utf-8',
                                      request=request)
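
    • Launching a fresh Firefox for every request is slow. A common variant (a sketch, not part of the original answer) keeps a single driver for the whole crawl and quits it when the spider closes:

          from scrapy import signals
          from scrapy.http import HtmlResponse
          from selenium import webdriver


          class JSMiddleware(object):
              def __init__(self):
                  # One shared browser for the whole crawl
                  self.driver = webdriver.Firefox()

              @classmethod
              def from_crawler(cls, crawler):
                  middleware = cls()
                  # Quit the shared browser when the spider finishes
                  crawler.signals.connect(middleware.spider_closed,
                                          signal=signals.spider_closed)
                  return middleware

              def spider_closed(self, spider):
                  self.driver.quit()

              def process_request(self, request, spider):
                  self.driver.get(request.url)
                  return HtmlResponse(self.driver.current_url,
                                      body=self.driver.page_source,
                                      encoding='utf-8', request=request)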
      
    • Append the Selenium downloader middleware to your settings.py (replace yourproject with your project's package name; from your log it is deals, i.e. 'deals.middlewares.JSMiddleware'):

          DOWNLOADER_MIDDLEWARES = {
              'yourproject.middlewares.JSMiddleware': 200
          }
      
    • Use your original spider's code (shown here with the same relative ".//" XPath fix as in Solution 1):

          import scrapy


          class RakutenSpider(scrapy.Spider):
              name = "rak"
              allowed_domains = ["rakuten.com"]
              start_urls = ['https://www.rakuten.com/deals?omadtrack=hp_deals_viewmore']

              def parse(self, response):
                  for sel in response.xpath('//div[@class="page-bottom"]/div'):
                      yield {
                          'titles': sel.xpath(".//div[@class='slider-prod-title']//text()").extract_first(),
                          'prices': sel.xpath(".//span[@class='price-bold']/text()").extract_first(),
                          'images': sel.xpath(".//div[@class='deal-img']/img/@src").extract_first()
                      }
      

More

  • If you want to use Chrome in headless mode, check this tutorial; a minimal sketch follows.
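  • A minimal sketch of dropping headless Chrome into the JSMiddleware above (assumes chromedriver is on your PATH; Selenium 3 era syntax, matching the versions in the log):

      from selenium import webdriver
      from selenium.webdriver.chrome.options import Options

      options = Options()
      options.add_argument('--headless')
      # Use this in place of webdriver.Firefox() in JSMiddleware
      driver = webdriver.Chrome(chrome_options=options)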
