Using arguments in scrapy pipeline on __init__

I have a Scrapy pipelines.py and I want to access the arguments that were passed to the spider. In my spider.py it works perfectly:

class MySpider(CrawlSpider):
    def __init__(self, host='', domain_id='', user_id='', *args, **kwargs):

        super(MySpider, self).__init__(*args, **kwargs)
        print(user_id)
        ...

Now I need the "user_id" in my pipelines.py to create an SQLite database named like "domain-123.db". I have searched the whole web for this problem but can't find any solution.

Can someone help me?

PS: Yes, I tried calling super() inside my pipeline class, like in spider.py; it doesn't work.

asked Dec 16 '14 by user3507915

2 Answers

Set the arguments as attributes inside the spider's constructor:

class MySpider(CrawlSpider):
    def __init__(self, user_id='', *args, **kwargs):
        # store the argument on the spider so pipelines can read it later
        self.user_id = user_id

        super(MySpider, self).__init__(*args, **kwargs)

And read them in the open_spider() method of your pipeline:

def open_spider(self, spider):
    # the pipeline receives the spider instance, so its attributes are available
    print(spider.user_id)
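
For the OP's concrete goal, a minimal sketch of a pipeline that opens a per-run SQLite database from that attribute could look like this (the MyPipeline name and the database filename pattern are illustrative, not from the original question):

    import sqlite3

    class MyPipeline(object):
        def open_spider(self, spider):
            # spider.user_id was set in MySpider.__init__ from the spider argument,
            # e.g. passed on the command line: scrapy crawl myspider -a user_id=123
            self.conn = sqlite3.connect('domain-%s.db' % spider.user_id)

        def close_spider(self, spider):
            # close the connection when the crawl finishes
            self.conn.close()

        def process_item(self, item, spider):
            # write the item to the database here, e.g. via self.conn.execute(...)
            return item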
answered Oct 05 '22 by alecxe

I may be too late to provide a useful answer to the OP, but for anybody reaching this question in the future (as I did), you should check the classmethods from_crawler and/or from_settings.

This way you can pass your arguments the way you want.

Check: https://doc.scrapy.org/en/latest/topics/item-pipeline.html#from_crawler

from_crawler(cls, crawler)

If present, this classmethod is called to create a pipeline instance from a Crawler. It must return a new instance of the pipeline. The Crawler object provides access to all Scrapy core components, like settings and signals; it is a way for the pipeline to access them and hook its functionality into Scrapy.

Parameters: crawler (Crawler object) – crawler that uses this pipeline
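
A minimal sketch of a pipeline that configures itself from the project settings via from_crawler (the USER_ID setting name is illustrative, not part of Scrapy):

    class MyPipeline(object):
        def __init__(self, user_id):
            self.user_id = user_id

        @classmethod
        def from_crawler(cls, crawler):
            # read the value from the project settings instead of the spider;
            # crawler.settings is the merged Settings object
            return cls(user_id=crawler.settings.get('USER_ID'))

        def process_item(self, item, spider):
            return item

You could then set USER_ID in your project's settings.py, or override it on the command line with scrapy crawl myspider -s USER_ID=123.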

answered Oct 05 '22 by Vls