 

Easiest way to get a http.response object in Scrapy

Tags:

python

scrapy

I'm new to Scrapy and I'm trying to "get the content of a web page" into a response object (if I have understood correctly).

I'm following http://doc.scrapy.org/en/latest/topics/selectors.html, but that works in the scrapy shell. I would like to make it work in Python code directly.

I wrote this code to scrape http://doc.scrapy.org/en/latest/_static/selectors-sample1.html:

import scrapy
from scrapy.http import HtmlResponse
URL = 'http://doc.scrapy.org/en/latest/_static/selectors-sample1.html'
response = HtmlResponse(url=URL)    
print response.selector.xpath('//title/text()')

and the output is

>> []

Why can't I get the proper value for the title? It seems that HtmlResponse() is not downloading any data from the web... why? How can I fix this?

Thank you very much!

Cap

Mike asked Dec 24 '22

1 Answer

Your statement

response = HtmlResponse(url=URL)

only builds a "local", in-memory HtmlResponse object with an empty body. It does not download anything, and in particular not the resource at http://doc.scrapy.org/en/latest/_static/selectors-sample1.html.
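You can check that directly: the object you constructed has an empty body, so there is simply nothing for the selector to match. A quick sketch (using the .xpath() shortcut on the response, as later in this answer):

from scrapy.http import HtmlResponse

URL = 'http://doc.scrapy.org/en/latest/_static/selectors-sample1.html'
response = HtmlResponse(url=URL)

print(response.body)                               # empty -- nothing was downloaded
print(response.xpath('//title/text()').extract())  # [] -- no HTML to select from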

In Scrapy, you don't usually build HtmlResponse objects yourself; you let the Scrapy framework construct them for you once it has finished processing a Request instance you gave it, e.g. Request(url='http://doc.scrapy.org/en/latest/_static/selectors-sample1.html').
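To make that division of labour concrete, here is a minimal illustration (handle is just a throw-away name for this sketch, not Scrapy API):

import scrapy

def handle(response):
    # Scrapy calls this with the downloaded HtmlResponse once the request has been processed
    return {'title': response.xpath('//title/text()').extract_first()}

request = scrapy.Request(
    url='http://doc.scrapy.org/en/latest/_static/selectors-sample1.html',
    callback=handle,
)
# Building the Request does not download anything either: it only describes what to fetch.
# The download happens when the Scrapy engine processes it (a spider yields it, or you
# fetch() it in the shell), and the resulting response is then passed to the callback.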

If you are trying out Scrapy, I suggest you play with scrapy shell: inside the interactive shell, you can trigger downloads (and get "real" Response objects to work with) using fetch('http://someurl'):

$ scrapy shell
2016-06-14 10:59:31 [scrapy] INFO: Scrapy 1.1.0 started (bot: scrapybot)
(...)
[s] Available Scrapy objects:
[s]   crawler    <scrapy.crawler.Crawler object at 0x7f1a6591d588>
[s]   item       {}
[s]   settings   <scrapy.settings.Settings object at 0x7f1a6ce290f0>
[s] Useful shortcuts:
[s]   shelp()           Shell help (print this help)
[s]   fetch(req_or_url) Fetch request (or URL) and update local objects
[s]   view(response)    View response in a browser
>>> fetch('http://doc.scrapy.org/en/latest/_static/selectors-sample1.html')
2016-06-14 10:59:51 [scrapy] INFO: Spider opened
2016-06-14 10:59:51 [scrapy] DEBUG: Crawled (200) <GET http://doc.scrapy.org/en/latest/_static/selectors-sample1.html> (referer: None)
>>> response.xpath('//title/text()').extract()
['Example website']

Outside the shell, to actually download data, you need to:

  • subclass scrapy.Spider,
  • define URLs where to begin downloading from,
  • and write callback methods to work on downloaded data, wrapped inside Response objects that get passed to them

A very simple example (in a file called, say, test.py):

import scrapy


class TestSpider(scrapy.Spider):

    name = 'testspider'

    # start_urls is special and internally it builds Request objects for each of the URLs listed
    start_urls = ['http://doc.scrapy.org/en/latest/_static/selectors-sample1.html']

    def parse(self, response):
        yield {
            'title': response.xpath('//h1/text()').extract_first()
        }
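
If you want to see what start_urls does for you, the explicit equivalent uses the standard start_requests() hook; this sketch behaves the same as the spider above:

import scrapy


class TestSpider(scrapy.Spider):

    name = 'testspider'

    def start_requests(self):
        # build the initial Request yourself instead of listing the URL in start_urls
        yield scrapy.Request(
            'http://doc.scrapy.org/en/latest/_static/selectors-sample1.html',
            callback=self.parse,
        )

    def parse(self, response):
        yield {
            'title': response.xpath('//h1/text()').extract_first()
        }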

Then you need to run the spider. Scrapy has a command for running single-file spiders:

$ scrapy runspider test.py 

And you get this in your console:

2016-06-14 10:48:05 [scrapy] INFO: Scrapy 1.1.0 started (bot: scrapybot)
2016-06-14 10:48:05 [scrapy] INFO: Overridden settings: {}
2016-06-14 10:48:06 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats', 'scrapy.extensions.corestats.CoreStats']
2016-06-14 10:48:06 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-06-14 10:48:06 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-06-14 10:48:06 [scrapy] INFO: Enabled item pipelines:
[]
2016-06-14 10:48:06 [scrapy] INFO: Spider opened
2016-06-14 10:48:06 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-06-14 10:48:06 [scrapy] DEBUG: Crawled (200) <GET http://doc.scrapy.org/en/latest/_static/selectors-sample1.html> (referer: None)
2016-06-14 10:48:06 [scrapy] DEBUG: Scraped from <200 http://doc.scrapy.org/en/latest/_static/selectors-sample1.html>
{'title': 'Example website'}
2016-06-14 10:48:06 [scrapy] INFO: Closing spider (finished)
2016-06-14 10:48:06 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 252,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 501,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 6, 14, 8, 48, 6, 564591),
 'item_scraped_count': 1,
 'log_count/DEBUG': 2,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2016, 6, 14, 8, 48, 6, 85693)}
2016-06-14 10:48:06 [scrapy] INFO: Spider closed (finished)
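
If you also want the scraped items written to a file instead of just logged, runspider accepts the usual feed-export option (items.json is just an example filename):

$ scrapy runspider test.py -o items.json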

If you really want to play with selectors without actually downloading any web data, assuming you already have the data locally (for example by copying it from view-source: in your browser), you can do that, but you need to supply the body:

>>> response = HtmlResponse(url=URL, body='''
... <!DOCTYPE html>
... <html>
...   <head>
...   </head>
...   <body>
...       <h1>Herman Melville - Moby-Dick</h1>
... 
...       <div>
...         <p>
...           Availing himself of the mild, summer-cool weather that now reigned in these latitudes, ... them a care-killing competency.
...         </p>
...       </div>
...   </body>
... </html>''', encoding='utf8')
>>> response.xpath('//h1')
[<Selector xpath='//h1' data='<h1>Herman Melville - Moby-Dick</h1>'>]
>>> response.xpath('//h1').extract()
['<h1>Herman Melville - Moby-Dick</h1>']
>>> 
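And if you only want to experiment with XPath or CSS expressions on a local snippet, you can skip the response object entirely and build a Selector directly (html_snippet below is just a placeholder string):

from scrapy import Selector

html_snippet = '<html><body><h1>Herman Melville - Moby-Dick</h1></body></html>'
sel = Selector(text=html_snippet)
print(sel.xpath('//h1/text()').extract())   # ['Herman Melville - Moby-Dick']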
paul trmbrth answered Dec 26 '22