 

Calling Scrapy from another file without threading

I have to call the crawler from another Python file, for which I use the following code.

from twisted.internet import reactor
from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.utils.project import get_project_settings

def crawl_koovs():
    spider = SomeSpider()  # the project's spider class
    settings = get_project_settings()
    crawler = Crawler(settings)
    crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
    crawler.configure()
    crawler.crawl(spider)
    crawler.start()
    log.start()
    reactor.run()

On running this, I get the following error:

exceptions.ValueError: signal only works in main thread
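The error comes from CPython's signal module, which only allows signal handlers to be installed from the main interpreter thread; Twisted's reactor installs handlers when it starts, so starting it from a worker thread fails. A minimal stdlib reproduction of the same restriction:

```python
import signal
import threading

def install_handler():
    # signal.signal() raises ValueError when called from any
    # thread other than the main thread.
    try:
        signal.signal(signal.SIGINT, signal.SIG_IGN)
        return "installed"
    except ValueError as exc:
        return str(exc)

t_result = []
t = threading.Thread(target=lambda: t_result.append(install_handler()))
t.start()
t.join()
# Prints the ValueError message, e.g. "signal only works in main thread"
print(t_result[0])
```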

The only workaround I could find is to use

reactor.run(installSignalHandlers=False)

which I don't want to use, since I want to call this method multiple times and need the reactor to be stopped before the next call. What can I do to make this work (maybe force the crawler to start in the same 'main' thread)?
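One common pattern for calling a crawl repeatedly despite the un-restartable reactor is to run each crawl in its own child process, so every call gets a fresh main thread (where signal handlers install cleanly) and a reactor that has never run. A sketch of the pattern, with crawl_koovs stubbed out for illustration:

```python
import multiprocessing

def crawl_koovs():
    # Stand-in for the real crawl; the real version would start
    # the Twisted reactor, which cannot be restarted in-process.
    return "done"

def _child(queue):
    # Runs in the child process, where this IS the main thread.
    queue.put(crawl_koovs())

def crawl_once():
    # Each call spawns a fresh process, so the reactor (and its
    # signal handlers) start from a clean slate every time.
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_child, args=(queue,))
    proc.start()
    result = queue.get()
    proc.join()
    return result

if __name__ == "__main__":
    print([crawl_once() for _ in range(3)])
```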

Asked May 13 '15 by Pravesh Jain

1 Answer

The first thing to note is that when you execute Scrapy from an external file, the log level is set to INFO. You should change it to DEBUG to see what's happening if your code doesn't work.

Change the line:

 log.start()

to:

log.start(loglevel=log.DEBUG)

To store everything in the log and generate a text file (for debugging purposes) you can do:

log.start(logfile="file.log", loglevel=log.DEBUG, crawler=crawler, logstdout=False)

As for the signals issue: with the log level changed to DEBUG, you may see some output that helps you fix it. You can also try putting your script inside the Scrapy project folder to see if it still crashes.

If you change the line:

crawler.signals.connect(reactor.stop, signal=signals.spider_closed)

to:

from scrapy.xlib.pydispatch import dispatcher
dispatcher.connect(reactor.stop, signals.spider_closed)

what does it say?

Depending on your Scrapy version, dispatcher may be deprecated.

Answered Nov 12 '22 by AlvaroAV