I am new to Scrapy and I am working on a scraping exercise using the CrawlSpider. Although the framework works beautifully and follows the relevant links, I can't get the CrawlSpider to scrape the very first link (the home page / landing page). Instead it goes straight to scraping the links matched by the rule, but never scrapes the landing page the links are on. I don't know how to fix this, since overriding the parse method is not recommended for a CrawlSpider. Toggling follow=True/False doesn't yield any good results either. Here is the snippet of code:
class DownloadSpider(CrawlSpider):
    name = 'downloader'
    allowed_domains = ['bnt-chemicals.de']
    start_urls = [
        "http://www.bnt-chemicals.de",
    ]
    rules = (
        Rule(SgmlLinkExtractor(allow='prod'), callback='parse_item', follow=True),
    )
    fname = 1

    def parse_item(self, response):
        open(str(self.fname) + '.txt', 'a').write(response.url)
        open(str(self.fname) + '.txt', 'a').write(',' + str(response.meta['depth']))
        open(str(self.fname) + '.txt', 'a').write('\n')
        open(str(self.fname) + '.txt', 'a').write(response.body)
        open(str(self.fname) + '.txt', 'a').write('\n')
        self.fname = self.fname + 1
Just change your callback to parse_start_url and override it. CrawlSpider feeds the responses for start_urls through parse_start_url, so pointing the rule's callback at that same method means both the landing page and every link matched by the rule get processed:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class DownloadSpider(CrawlSpider):
    name = 'downloader'
    allowed_domains = ['bnt-chemicals.de']
    start_urls = [
        "http://www.bnt-chemicals.de",
    ]
    rules = (
        Rule(SgmlLinkExtractor(allow='prod'), callback='parse_start_url', follow=True),
    )
    fname = 0

    def parse_start_url(self, response):
        self.fname += 1
        fname = '%s.txt' % self.fname
        with open(fname, 'w') as f:
            f.write('%s, %s\n' % (response.url, response.meta.get('depth', 0)))
            f.write('%s\n' % response.body)
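Note that scrapy.contrib and SgmlLinkExtractor were deprecated long ago and are removed from current Scrapy releases. As a minimal sketch of the same trick on a modern Scrapy (assuming the same 'prod' URL pattern and the file-per-page scheme from the question), it would look roughly like this:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class DownloadSpider(CrawlSpider):
    name = 'downloader'
    allowed_domains = ['bnt-chemicals.de']
    start_urls = ['http://www.bnt-chemicals.de']

    rules = (
        # Same trick: the rule's matches are routed through parse_start_url,
        # the method CrawlSpider already calls for the start_urls responses.
        Rule(LinkExtractor(allow='prod'), callback='parse_start_url', follow=True),
    )

    fname = 0

    def parse_start_url(self, response):
        self.fname += 1
        with open('%s.txt' % self.fname, 'w') as f:
            f.write('%s, %s\n' % (response.url, response.meta.get('depth', 0)))
            # response.body is bytes in current Scrapy; response.text is the decoded page
            f.write('%s\n' % response.text)

If you prefer to keep a separate parse_item for the rule, you can instead override parse_start_url so it simply delegates, e.g. return self.parse_item(response).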