The crawler works fine from the command line but gives this error when run from a script:
2016-03-30 03:47:59 [scrapy] INFO: Scrapy 1.0.5 started (bot: scrapybot)
2016-03-30 03:47:59 [scrapy] INFO: Optional features available: ssl, http11
2016-03-30 03:47:59 [scrapy] INFO: Overridden settings: {'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'}
Traceback (most recent call last):
  File "/home/ahmeds/scrapProject/crawler/startcrawls.py", line 11, in <module>
    process.crawl(onioncrawl)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 150, in crawl
    crawler = self._create_crawler(crawler_or_spidercls)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 166, in _create_crawler
    return Crawler(spidercls, self.settings)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 32, in __init__
    self.spidercls.update_settings(self.settings)
AttributeError: 'module' object has no attribute 'update_settings'
This is my code for running the crawler from a script, as per the latest documentation. My Scrapy version is 1.0.5.
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from crawler.spiders import onioncrawl
setting = get_project_settings()
process = CrawlerProcess(setting)
process.crawl(onioncrawl)
process.start()
The problem was that I was passing the spider's module (the filename) instead of the spider class.

You can try:

process.crawl(onioncrawl.<ClassName>)

Replace <ClassName> with the actual class name of the spider defined in your onioncrawl module.
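For reference, a minimal corrected script might look like this (assuming the spider class in onioncrawl.py is named OnionCrawlSpider; substitute your real class name):

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# Import the spider *class*, not the module that contains it.
# OnionCrawlSpider is a hypothetical name; use whatever class is
# defined in crawler/spiders/onioncrawl.py.
from crawler.spiders.onioncrawl import OnionCrawlSpider

settings = get_project_settings()
process = CrawlerProcess(settings)
process.crawl(OnionCrawlSpider)  # pass the spider class, not the module
process.start()                  # blocks here until crawling finishes

Alternatively, since the project settings are loaded, Scrapy should also be able to resolve the spider by its name attribute, e.g. process.crawl('onioncrawl'), assuming the spider's name is set to 'onioncrawl'.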