Scrapy CrawlerProcess Not Saving Data with CrawlSpider

The following code executes and creates the output file without errors, but it never saves any data to the JSON file.

I turned off AutoThrottle, which has interfered with downloading data in the past, but that didn't fix the issue.
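By "turned off" I mean a single setting; my snippet below doesn't show it, and AutoThrottle is already disabled by default in Scrapy 1.4, so this is just to make explicit what I tried:

SETTINGS = {
    # AutoThrottle is off by default in Scrapy 1.4;
    # disabling it explicitly rules it out as the cause
    'AUTOTHROTTLE_ENABLED': False,
}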

Scrapy==1.4.0

from scrapy.crawler import CrawlerProcess
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class MySpider(CrawlSpider):
    name = "spidy"
    allowed_domains = ["cnn.com"]
    start_urls = ["http://www.cnn.com"]

    rules = [Rule(LinkExtractor(allow=['cnn.com/.+']), callback='parse_item', follow=True)]

    def parse_item(self, response):

        print('went to: {}'.format(response.url))

        yield {'url': response.url}         

FILE_NAME = 'my_data.json'
SETTINGS = {
            'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
            'FEED_FORMAT': 'json',
            'FEED_URI': FILE_NAME,          
            } 

process = CrawlerProcess(SETTINGS)
process.crawl(MySpider)
process.start() 

EDIT:

The scraper is getting the data as seen in the log:

2017-11-21 11:07:55 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: scrapybot)
2017-11-21 11:07:55 [scrapy.utils.log] INFO: Overridden settings: {'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)', 'FEED_URI': 'my_data.json', 'FEED_FORMAT': 'json'}
2017-11-21 11:07:55 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.feedexport.FeedExporter']
2017-11-21 11:07:55 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-11-21 11:07:55 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-11-21 11:07:55 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-11-21 11:07:55 [scrapy.core.engine] INFO: Spider opened
2017-11-21 11:07:55 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-11-21 11:07:55 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6041
2017-11-21 11:07:55 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com> (referer: None)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/us> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/congress-capitol-hill> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/president-donald-trump-45> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/us-security> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/trumpmerica> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/state-cnn-politics-magazine> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/opinion/opinion-social-issues> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/opinions/cnnireport> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/vr/vr-archives> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/middle-east> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET http://imagesource.cnn.com> from <GET http://www.cnn.com/collection>
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.cnn.com/specials/politics/supreme-court-nine> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET http://transcripts.cnn.com/TRANSCRIPTS/> from <GET http://www.cnn.com/transcripts>
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://money.cnn.com/pf/> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://money.cnn.com/luxury/> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://money.cnn.com/data/markets/> (referer: http://www.cnn.com)
2017-11-21 11:07:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://money.cnn.com/technology/> (referer: http://www.cnn.com)
went to: http://www.cnn.com/us
2017-11-21 11:07:56 [scrapy.core.scraper] DEBUG: Scraped from <200 http://www.cnn.com/us>
{'url': 'http://www.cnn.com/us'}
2017-11-21 11:07:56 [scrapy.dupefilters] DEBUG: Filtered duplicate request: <GET http://www.cnn.com/us> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2017-11-21 11:07:56 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET http://www.cnn.com/email/subscription> from <GET http://www.cnn.com/newsletters>
... 

We can see the scraper visiting the URLs, crawling additional URLs found on each page, printing the response URL (the "went to:" lines), and then yielding the item, e.g. {'url': 'http://www.cnn.com/us'}.



1 Answer

Your code as such works fine, but I assume you stop it twice or kill it, which ends the process before buffered output is flushed and the JSON is finalized, leaving the file blank. I would change two things.

First, use jsonlines instead of json. That way, even if you kill the spider you won't lose many items: each line is a complete, valid JSON object, so items are usable the moment they hit the disk, and you can append to the same file across runs. With the json format, breaking the program mid-crawl leaves you with an invalid JSON file.
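For illustration (my addition, assuming the my_data.jsonl file produced by the code below), reading a JSON Lines file back is a simple line-by-line loop:

import json

# Each line of a JSON Lines file is an independent JSON document,
# so even a file from a killed crawl parses up to the last full line.
with open('my_data.jsonl') as f:
    for line in f:
        item = json.loads(line)
        print(item['url'])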

Second, I would set CONCURRENT_ITEMS to a lower value so items are exported more often (the default value is 100).

from scrapy.crawler import CrawlerProcess
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class MySpider(CrawlSpider):
    name = "spidy"
    allowed_domains = ["cnn.com"]
    start_urls = ["http://www.cnn.com"]

    rules = [Rule(LinkExtractor(allow=['cnn.com/.+']), callback='parse_item', follow=True)]

    def parse_item(self, response):

        print('went to: {}'.format(response.url))

        yield {'url': response.url}

FILE_NAME = 'my_data.jsonl'
SETTINGS = {
            'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
            'FEED_FORMAT': 'jsonlines',  # one JSON object per line
            'FEED_URI': FILE_NAME,
            'CONCURRENT_ITEMS': 1  # process items one at a time so they reach the exporter sooner
            }

process = CrawlerProcess(SETTINGS)
process.crawl(MySpider)
process.start()

After that you will find that the data gets exported fine.
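If you only want a bounded test crawl, a cleaner option than killing the process is to let Scrapy close the spider itself, so the feed is finalized properly. A minimal sketch using the built-in CLOSESPIDER_ITEMCOUNT setting (my suggestion, not part of the original code):

SETTINGS = {
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
    'FEED_FORMAT': 'jsonlines',
    'FEED_URI': 'my_data.jsonl',
    'CONCURRENT_ITEMS': 1,
    # CloseSpider extension: shut down gracefully after 50 items,
    # so the feed exporter finishes writing before the process exits
    'CLOSESPIDER_ITEMCOUNT': 50,
}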



