
Creating a generic scrapy spider

My question is really how to do the same thing as in a previous question, but in Scrapy 0.14:

Using one Scrapy spider for several websites

Basically, I have a GUI that takes parameters like domain, keywords, tag names, etc., and I want to create a generic spider to crawl those domains for those keywords in those tags. I've read conflicting advice, based on older versions of Scrapy, suggesting either overriding the spider manager class or dynamically creating a spider. Which method is preferred, and how do I implement and invoke the proper solution? Thanks in advance.

Here is the code that I want to make generic. It also uses BeautifulSoup. I pared it down, so hopefully I didn't remove anything crucial to understanding it.

import re

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3; with bs4 use: from bs4 import BeautifulSoup


class MySpider(CrawlSpider):
    name = 'MySpider'
    allowed_domains = ['somedomain.com', 'sub.somedomain.com']
    start_urls = ['http://www.somedomain.com']

    rules = (
        Rule(SgmlLinkExtractor(allow=('/pages/',), deny=('',))),
        Rule(SgmlLinkExtractor(allow=('/2012/03/',)), callback='parse_item'),
    )

    def parse_item(self, response):
        soup = BeautifulSoup(response.body)
        contentTags = soup.findAll('p', itemprop='myProp')

        for contentTag in contentTags:
            matchedResult = re.search('Keyword1|Keyword2', contentTag.text)
            if matchedResult:
                print('URL Found: ' + response.url)
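For what it's worth, one common way to make a spider like this generic is to feed the GUI's parameters into a small factory that builds the spider class at run time with `type()`. A rough, hypothetical sketch follows; the `CrawlSpider` base class is stubbed out here so the snippet stands alone, and in a real project you would use Scrapy's `CrawlSpider` instead, with the rules and BeautifulSoup parsing added back in:

```python
# Hypothetical sketch: build a spider class at run time from GUI parameters.
# CrawlSpider is a stand-in base class so the snippet is self-contained;
# a real project would subclass scrapy.contrib.spiders.CrawlSpider.
import re


class CrawlSpider(object):
    """Stand-in for Scrapy's CrawlSpider."""


def make_spider(name, domains, start_urls, keywords):
    """Return a new spider class configured from run-time parameters."""
    # Build one regex that matches any of the requested keywords.
    pattern = re.compile('|'.join(re.escape(k) for k in keywords))

    def parse_item(self, response):
        # response is assumed to expose .text (page body) and .url
        if pattern.search(response.text):
            print('URL Found: ' + response.url)

    # type() creates a new class object: name, bases, class attributes.
    return type(name, (CrawlSpider,), {
        'name': name,
        'allowed_domains': domains,
        'start_urls': start_urls,
        'parse_item': parse_item,
    })


SpiderCls = make_spider('MySpider', ['somedomain.com'],
                        ['http://www.somedomain.com'],
                        ['Keyword1', 'Keyword2'])
```

The GUI would call `make_spider()` with whatever the user entered, and the crawler can then be pointed at the returned class just like a hand-written spider.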
asked Mar 22 '12 by user1284717
1 Answer

You could create the spider at run time by having the interpreter evaluate its source. For example, if test.py contains the spider definition, it can be loaded like so:

>>> a = open("test.py")
>>> from compiler import compile
>>> d = compile(a.read(), 'spider.py', 'exec')
>>> eval(d)
>>> MySpider
<class '__main__.MySpider'>
>>> print MySpider.start_urls
['http://www.somedomain.com']
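Note that the standard-library compiler module is Python 2 only (it was removed in Python 3); the built-in compile() and exec() do the same job on both versions. A minimal sketch, assuming the spider source is held in a string rather than read from a file:

```python
# Load a class definition at run time with the built-in compile()/exec()
# (the compiler module used above exists only in Python 2).
source = """
class MySpider(object):
    name = 'MySpider'
    start_urls = ['http://www.somedomain.com']
"""

namespace = {}
code = compile(source, 'spider.py', 'exec')  # compile the source to a code object
exec(code, namespace)                        # run it, populating the namespace dict

MySpider = namespace['MySpider']
print(MySpider.start_urls)  # → ['http://www.somedomain.com']
```

Using an explicit namespace dict keeps the dynamically created class out of your module's globals, so you can load several spiders side by side.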
answered Oct 10 '22 by Supreet Sethi