
How to combine Scrapy and HtmlUnit to crawl URLs with JavaScript

I'm working with Scrapy to crawl pages, but I can't handle pages that rely on JavaScript. People have suggested I use HtmlUnit, so I installed it, but I don't know how to use it at all. Can anyone give me an example of using Scrapy together with HtmlUnit? Thanks very much.

asked Nov 08 '11 by HjySix

2 Answers

To handle pages with JavaScript you can use WebKit or Selenium.

Here are some snippets from snippets.scrapy.org:

Rendered/interactive javascript with gtk/webkit/jswebkit

Rendered Javascript Crawler With Scrapy and Selenium RC
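
As a rough illustration of the Selenium route (a minimal sketch, not the code from the linked snippets; the Firefox driver, the example URL, and the CSS selector are assumptions):

    import scrapy
    from scrapy.http import HtmlResponse
    from selenium import webdriver


    class JsPageSpider(scrapy.Spider):
        name = 'js_page_example'
        start_urls = ['http://example.com/js-page']  # hypothetical URL

        def parse(self, response):
            # Re-fetch the page in a real browser so JavaScript runs,
            # then wrap the rendered DOM in a response Scrapy can parse.
            driver = webdriver.Firefox()
            try:
                driver.get(response.url)
                rendered = HtmlResponse(response.url, encoding='utf-8',
                                        body=driver.page_source.encode('utf-8'))
            finally:
                driver.quit()
            for title in rendered.css('h1::text').extract():
                yield {'title': title}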

answered Oct 11 '22 by reclosedev

Here is a working example using Selenium with the PhantomJS headless webdriver in a downloader middleware.

from scrapy.http import HtmlResponse
from selenium import webdriver


class JsDownload(object):

    @check_spider_middleware  # decorator defined below
    def process_request(self, request, spider):
        # Render the page in headless PhantomJS so the response body
        # contains the JavaScript-generated DOM.
        driver = webdriver.PhantomJS(executable_path=r'D:\phantomjs.exe')
        driver.get(request.url)
        body = driver.page_source.encode('utf-8')
        driver.quit()
        return HtmlResponse(request.url, encoding='utf-8', body=body)

I wanted the ability to tell different spiders which middleware to use, so I implemented this wrapper:

import functools

from scrapy import log


def check_spider_middleware(method):
    """Run the wrapped middleware step only for spiders that opt in."""
    @functools.wraps(method)
    def wrapper(self, request, spider):
        msg = '%%s %s middleware step' % (self.__class__.__name__,)
        if self.__class__ in spider.middleware:
            spider.log(msg % 'executing', level=log.DEBUG)
            return method(self, request, spider)
        else:
            spider.log(msg % 'skipping', level=log.DEBUG)
            return None

    return wrapper

settings.py:

DOWNLOADER_MIDDLEWARES = {'MyProj.middleware.MiddleWareModule.MiddleWareClass': 500}
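
Translated to this example, the entry would presumably point at the JsDownload class; the package and module names below (MyProj, middleware.py) are assumptions about the project layout:

    # settings.py -- hypothetical project/module names, adjust to your layout
    DOWNLOADER_MIDDLEWARES = {
        'MyProj.middleware.JsDownload': 500,
    }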

For the wrapper to work, all spiders must have at minimum:

middleware = set([])

To include a middleware:

middleware = set([MyProj.middleware.ModuleName.ClassName])
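
For instance, a spider opting in to the JsDownload middleware above might look something like this (a sketch; the import path follows the hypothetical layout from the settings example):

    import scrapy

    from MyProj.middleware import JsDownload  # hypothetical import path


    class MySpider(scrapy.Spider):
        name = 'my_spider'
        start_urls = ['http://example.com']

        # Opt in: the check_spider_middleware wrapper only executes the
        # middleware step for spiders whose set contains its class.
        middleware = set([JsDownload])

        def parse(self, response):
            self.log('got %d bytes' % len(response.body))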

The main advantage of implementing it this way rather than in the spider is that you only end up making one request. In the solution at reclosedev's second link, for example, the download handler processes the request and then hands the response off to the spider; the spider then makes a brand-new request in its parse_page function, which means two requests for the same content.

Another example: https://github.com/scrapinghub/scrapyjs

Cheers!

answered Oct 11 '22 by rocktheartsm4l