There are 10 links I want to scrape.
When I run the spider, I can get the links into a JSON file, but I still get errors like the one below. It seems like Selenium runs twice. What is the problem? Please guide me, thank you!
2014-08-06 10:30:26+0800 [spider2] DEBUG: Scraped from <200 http://www.test/a/1>
{'link': u'http://www.test/a/1'}
2014-08-06 10:30:26+0800 [spider2] ERROR: Spider error processing <GET
http://www.test/a/1>
Traceback (most recent call last):
........
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 571, in create_connection
raise err
socket.error: [Errno 61] Connection refused
Here is my code:
from selenium import webdriver
from scrapy.spider import Spider
from ta.items import TaItem
from selenium.webdriver.support.wait import WebDriverWait
from scrapy.http.request import Request

class ProductSpider(Spider):
    name = "spider2"
    start_urls = ['http://www.test.com/']

    def __init__(self):
        self.driver = webdriver.Firefox()

    def parse(self, response):
        self.driver.get(response.url)
        self.driver.implicitly_wait(20)
        next = self.driver.find_elements_by_css_selector("div.body .heading a")
        for a in next:
            item = TaItem()
            item['link'] = a.get_attribute("href")
            yield Request(url=item['link'], meta={'item': item}, callback=self.parse_detail)

    def parse_detail(self, response):
        item = response.meta['item']
        yield item
        self.driver.close()
The problem is that you are closing the driver too early: parse_detail() runs once per link, so the driver is shut down after the first item, and every later request that touches it gets "Connection refused".
You should close the driver only when the spider finishes its work; listen for the spider_closed signal:
from scrapy import signals
from scrapy.xlib.pydispatch import dispatcher
from selenium import webdriver
from scrapy.spider import Spider
from ta.items import TaItem
from scrapy.http.request import Request

class ProductSpider(Spider):
    name = "spider2"
    start_urls = ['http://www.test.com/']

    def __init__(self):
        self.driver = webdriver.Firefox()
        # close the driver only once, when the spider is closed
        dispatcher.connect(self.spider_closed, signals.spider_closed)

    def parse(self, response):
        self.driver.get(response.url)
        self.driver.implicitly_wait(20)
        next = self.driver.find_elements_by_css_selector("div.body .heading a")
        for a in next:
            item = TaItem()
            item['link'] = a.get_attribute("href")
            yield Request(url=item['link'], meta={'item': item}, callback=self.parse_detail)

    def parse_detail(self, response):
        item = response.meta['item']
        yield item

    def spider_closed(self, spider):
        self.driver.close()
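As a side note, scrapy.xlib.pydispatch has since been deprecated; in newer Scrapy versions the supported way to listen for signals is to connect them in from_crawler through the crawler's signal manager. A minimal sketch of the same fix under that assumption (the parse methods are unchanged from above, and the Spider import path is scrapy.spiders in recent releases):

from scrapy import signals
from scrapy.spiders import Spider  # scrapy.spider in older versions
from selenium import webdriver

class ProductSpider(Spider):
    name = "spider2"
    start_urls = ['http://www.test.com/']

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(ProductSpider, cls).from_crawler(crawler, *args, **kwargs)
        # connect the handler via the crawler's signal manager
        crawler.signals.connect(spider.spider_closed, signal=signals.spider_closed)
        return spider

    def __init__(self, *args, **kwargs):
        super(ProductSpider, self).__init__(*args, **kwargs)
        self.driver = webdriver.Firefox()

    # parse() and parse_detail() as above ...

    def spider_closed(self, spider):
        # quit() shuts down the browser process entirely;
        # close() only closes the current window
        self.driver.quit()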
See also: scrapy: Call a function when a spider quits.