I want to fetch web pages under different domains, which means I have to use different spiders with the command "scrapy crawl myspider". However, I need different pipeline logic to put the data into the database, since the content of the pages differs. The problem is that every spider has to go through all of the pipelines defined in settings.py. Is there an elegant way to use separate pipelines for each spider?
The ITEM_PIPELINES setting is defined globally for all spiders in the project when the engine starts. It cannot be changed per spider on the fly.
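For context, this is the kind of global declaration in settings.py we are talking about (the project and pipeline names below are made up; in recent Scrapy versions the setting is a dict mapping pipeline paths to priorities, while very old versions used a plain list):

# settings.py -- applies to every spider in the project
ITEM_PIPELINES = {
    'myproject.pipelines.BlogPipeline': 300,
    'myproject.pipelines.ShopPipeline': 400,
}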
Here are some options to consider:
1. Change the code of your pipelines: skip or process items inside the process_item method of each pipeline, depending on which spider returned them, e.g.:
def process_item(self, item, spider):
    # Pass through items from spiders this pipeline does not handle
    if spider.name not in ['spider1', 'spider2']:
        return item
    # ... process the item ...
    return item
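To illustrate, here is a fuller sketch of this option with two hypothetical pipelines, each bound by name to the one spider it knows how to store:

class BlogPipeline(object):
    """Stores items scraped by the 'blog' spider; ignores everything else."""
    def process_item(self, item, spider):
        if spider.name != 'blog':
            return item  # pass the item through untouched
        # ... insert the item into the blog table here ...
        return item

class ShopPipeline(object):
    """Stores items scraped by the 'shop' spider; ignores everything else."""
    def process_item(self, item, spider):
        if spider.name != 'shop':
            return item
        # ... insert the item into the shop table here ...
        return item

Both pipelines stay registered in ITEM_PIPELINES; each one simply passes through items that are not meant for it, so every item still reaches the database exactly once.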
2. Change the way you start crawling: do it from a script, and based on the spider name passed as a parameter, override your ITEM_PIPELINES setting before calling crawler.configure().
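Note that crawler.configure() comes from the old 0.x Scrapy API; with a recent Scrapy the same idea can be sketched with CrawlerProcess instead (the spider modules and pipeline paths below are hypothetical):

import sys

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from myproject.spiders.blog import BlogSpider
from myproject.spiders.shop import ShopSpider

# Map each spider to the single pipeline it should run
SPIDERS = {
    'blog': (BlogSpider, 'myproject.pipelines.BlogPipeline'),
    'shop': (ShopSpider, 'myproject.pipelines.ShopPipeline'),
}

spider_cls, pipeline = SPIDERS[sys.argv[1]]

# Override ITEM_PIPELINES before the engine starts
settings = get_project_settings()
settings.set('ITEM_PIPELINES', {pipeline: 300})

process = CrawlerProcess(settings)
process.crawl(spider_cls)
process.start()  # blocks until the crawl finishes

Run it as "python crawl.py blog" and only BlogPipeline is enabled for that run.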
Hope that helps.