I have a Scrapy pipelines.py and I want to get the given arguments. In my spider.py it works perfectly:
from scrapy.spiders import CrawlSpider

class MySpider(CrawlSpider):
    def __init__(self, host='', domain_id='', *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        print(domain_id)
    ...
Now I need the "domain_id" in my pipelines.py to create the SQLite database, e.g. "domain-123.db". I searched the whole web for this problem, but I can't find any solution.
Can someone help me?
PS: Yes, I tried the super() function within my pipeline class like in spider.py; it doesn't work.
Set the arguments inside the spider's constructor:
class MySpider(CrawlSpider):
    def __init__(self, domain_id='', *args, **kwargs):
        self.domain_id = domain_id
        super(MySpider, self).__init__(*args, **kwargs)
And read it in the open_spider() method of your pipeline:
def open_spider(self, spider):
    print(spider.domain_id)
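Building on that, here is a minimal sketch of the database part from the question (the pipeline class name and table layout are assumptions for illustration; only spider.domain_id comes from the snippet above):

import sqlite3

class SQLitePipeline(object):
    def open_spider(self, spider):
        # Build the filename from the argument stored on the spider,
        # e.g. "domain-123.db" when started with -a domain_id=123
        self.conn = sqlite3.connect('domain-%s.db' % spider.domain_id)
        self.conn.execute('CREATE TABLE IF NOT EXISTS items (data TEXT)')

    def process_item(self, item, spider):
        # Store a string dump of each item; adapt the schema to your items
        self.conn.execute('INSERT INTO items VALUES (?)', (str(item),))
        return item

    def close_spider(self, spider):
        self.conn.commit()
        self.conn.close()

Then start the spider with the argument on the command line, e.g. scrapy crawl myspider -a domain_id=123.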
I may be too late to provide a useful answer to the OP, but for anybody reaching this question in the future (as I did): you should check the classmethods from_crawler and/or from_settings. This way you can pass your arguments the way you want.
Check: https://doc.scrapy.org/en/latest/topics/item-pipeline.html#from_crawler
from_crawler(cls, crawler)
If present, this classmethod is called to create a pipeline instance from a Crawler. It must return a new instance of the pipeline. Crawler object provides access to all Scrapy core components like settings and signals; it is a way for pipeline to access them and hook its functionality into Scrapy.
Parameters: crawler (Crawler object) – crawler that uses this pipeline
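A minimal sketch of that approach, assuming the value is passed as a custom Scrapy setting (the setting name DOMAIN_ID and the pipeline class name are assumptions, not part of the documented API):

class DomainPipeline(object):
    def __init__(self, domain_id=''):
        self.domain_id = domain_id

    @classmethod
    def from_crawler(cls, crawler):
        # crawler.settings exposes all Scrapy settings, including
        # anything passed on the command line with -s
        return cls(domain_id=crawler.settings.get('DOMAIN_ID', ''))

You can then supply the value at launch time with scrapy crawl myspider -s DOMAIN_ID=123, and the pipeline gets it without touching the spider at all.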