
scrapy crawler caught exception reading instance data

I am new to Python and want to use Scrapy to build a web crawler. I went through the tutorial at http://blog.siliconstraits.vn/building-web-crawler-scrapy/. The spider code looks like this:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from nettuts.items import NettutsItem
from scrapy.http import Request

class MySpider(BaseSpider):
    name = "nettuts"
    allowed_domains = ["net.tutsplus.com"]
    start_urls = ["http://net.tutsplus.com/"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        titles = hxs.select('//h1[@class="post_title"]/a/text()').extract()
        for title in titles:
            item = NettutsItem()
            item["title"] = title
            yield item

When I launch the spider with the command line scrapy crawl nettuts, it fails with the following error:

[boto] DEBUG: Retrieving credentials from metadata server.
2015-07-05 18:27:17 [boto] ERROR: Caught exception reading instance data

Traceback (most recent call last):
  File "/anaconda/lib/python2.7/site-packages/boto/utils.py", line 210, in retry_url
    r = opener.open(req, timeout=timeout)
  File "/anaconda/lib/python2.7/urllib2.py", line 431, in open
    response = self._open(req, data)
  File "/anaconda/lib/python2.7/urllib2.py", line 449, in _open
    '_open', req)
  File "/anaconda/lib/python2.7/urllib2.py", line 409, in _call_chain
    result = func(*args)
  File "/anaconda/lib/python2.7/urllib2.py", line 1227, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/anaconda/lib/python2.7/urllib2.py", line 1197, in do_open
    raise URLError(err)
URLError: <urlopen error [Errno 65] No route to host>
2015-07-05 18:27:17 [boto] ERROR: Unable to read instance data, giving up

I really do not know what's wrong. I hope somebody can help.

asked Jul 05 '15 by printemp


1 Answer

In the settings.py file, add the following setting. It disables Scrapy's S3 download handler, so Scrapy never loads boto and boto never tries to contact the EC2 instance-metadata server (which is what fails with "No route to host" when you are not running on EC2):

DOWNLOAD_HANDLERS = {'s3': None,}
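For context, a minimal settings.py with this workaround might look like the sketch below. The project and module names are taken from the question's tutorial project and are assumptions; only the DOWNLOAD_HANDLERS entry is the actual fix.

```python
# settings.py -- sketch of the Scrapy project settings (names assumed from the tutorial)

BOT_NAME = "nettuts"

SPIDER_MODULES = ["nettuts.spiders"]
NEWSPIDER_MODULE = "nettuts.spiders"

# The fix: map the s3 scheme to None so Scrapy skips the S3 download
# handler entirely, and boto is never asked for AWS credentials.
DOWNLOAD_HANDLERS = {
    "s3": None,
}
```

Mapping a scheme to None in DOWNLOAD_HANDLERS is Scrapy's documented way to disable a handler for that scheme; since this spider only fetches http URLs, losing s3 support costs nothing here.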

answered Sep 28 '22 by printemp