
How to manage a 'pool' of PhantomJS instances

I'm planning a webservice for my own use internally that takes one argument, a URL, and returns HTML representing the resolved DOM from that URL. By resolved I mean that the webservice will first get the page at that URL, then use PhantomJS to 'render' the page, and then return the resulting source after all DHTML, AJAX calls, etc. are executed. However, launching PhantomJS on a per-request basis (which I'm doing now) is way too sluggish. I would rather have a pool of PhantomJS instances with one always available to serve the latest call to my webservice.
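For reference, the per-request approach I'm using now boils down to a one-shot PhantomJS script along these lines (a simplified sketch, so the script and file names are only illustrative):

// render.js -- invoked once per request as: phantomjs render.js <url>
// The full process start-up on every call is what makes this so slow.
var system = require('system');
var page = require('webpage').create();
var url = system.args[1];

page.open(url, function (status) {
    if (status !== 'success') {
        system.stderr.writeLine('Failed to load ' + url);
        phantom.exit(1);
    } else {
        // page.content is the serialized DOM after scripts have run
        console.log(page.content);
        phantom.exit(0);
    }
});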

Has any work been done on this kind of thing before? I'd rather base this webservice on the work of others than write a pool manager / http proxy server for myself from scratch.

More Context: I've listed the 2 similar projects that I've seen so far below and why I've avoided each one, resulting in this question about managing a pool of PhantomJS instances instead.

jsdom - from what I've seen it has great functionality for executing scripts on a page, but it doesn't attempt to replicate browser behaviour, so if I were to use it as a general purpose "DOM resolver" there'd end up being a lot of extra coding to handle all kinds of edge cases, event calling, etc. The first example I saw was having to manually call the onload() function of the body tag for a test app I set up using node. It seemed like the beginning of a deep rabbit hole.

Selenium - It just has so many more moving parts, so setting up a pool to manage long-lived browser instances will just be more complicated than using PhantomJS. I don't need any of its macro recording / scripting benefits. I just want a webservice that is as performant at getting a webpage and resolving its DOM as if I were browsing to that URL with a browser (or even faster if I can make it ignore images etc.)

asked Apr 01 '12 by Trindaz




4 Answers

I set up a PhantomJs Cloud Service, and it pretty much does what you are asking. It took me about 5 weeks of work to implement.

The biggest problem you'll run into is the known issue of memory leaks in PhantomJS. The way I worked around this is to cycle my instances every 50 calls.
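A minimal sketch of that cycling idea on the Node side (not the actual service's code; the worker.js script and the threshold of 50 are just placeholders):

// Restart the PhantomJS child process after every N jobs to sidestep the
// memory leak. 'worker.js' is a hypothetical PhantomJS script of your own.
var spawn = require('child_process').spawn;

var MAX_CALLS_PER_INSTANCE = 50;
var callCount = 0;
var phantomProcess = null;

function getInstance() {
    if (!phantomProcess || callCount >= MAX_CALLS_PER_INSTANCE) {
        if (phantomProcess) {
            phantomProcess.kill(); // retire the leaky instance
        }
        phantomProcess = spawn('phantomjs', [__dirname + '/worker.js']);
        callCount = 0;
    }
    callCount++;
    return phantomProcess;
}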

The second biggest problem you'll run into is that per-page processing is very CPU- and memory-intensive, so you'll only be able to run 4 or so instances per CPU.
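If you size your pool in Node, that rule of thumb translates to something like this (the multiplier of 4 is just the estimate above, not a hard limit):

var os = require('os');

// roughly 4 PhantomJS instances per CPU core
var poolSize = os.cpus().length * 4;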

The third biggest problem you'll run into is that PhantomJS is pretty wacky with page-finish events and redirects. You'll be informed that your page is finished rendering before it actually is. There are a number of ways to deal with this, but nothing 'standard' unfortunately.
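One common (but by no means standard) workaround is to track in-flight resource requests and wait for a short quiet period before trusting the DOM. A rough sketch on the PhantomJS side, with the 500ms quiet window picked arbitrarily:

// Wait until no resources have been in flight for ~500ms before
// treating the page as fully rendered.
var page = require('webpage').create();
var pendingRequests = 0;
var lastActivity = Date.now();

page.onResourceRequested = function () {
    pendingRequests++;
    lastActivity = Date.now();
};
page.onResourceReceived = function (response) {
    if (response.stage === 'end') {
        pendingRequests--;
        lastActivity = Date.now();
    }
};

page.open('http://example.com/', function () {
    var interval = setInterval(function () {
        if (pendingRequests <= 0 && Date.now() - lastActivity > 500) {
            clearInterval(interval);
            console.log(page.content);
            phantom.exit();
        }
    }, 100);
});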

The fourth biggest problem you'll have to deal with is interop between Node.js and PhantomJS; thankfully there are a lot of npm packages that deal with this issue to choose from.
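As one example, the phantom package on npm exposes a promise-based bridge from Node to a PhantomJS process, roughly like this (exact method names vary between versions, so treat it as illustrative):

// Sketch using the 'phantom' npm package (promise-based API in recent versions).
var phantom = require('phantom');

phantom.create()
    .then(function (instance) {
        return instance.createPage()
            .then(function (page) {
                return page.open('http://example.com/')
                    .then(function () {
                        return page.property('content'); // rendered HTML
                    })
                    .then(function (content) {
                        console.log(content);
                        return instance.exit();
                    });
            });
    })
    .catch(function (err) {
        console.error(err);
    });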

So I know I'm biased (as I wrote the solution I'm going to suggest) but I suggest you check out PhantomJsCloud.com which is free for light usage.

Jan 2015 update: Another (5th?) big problem I ran into is how to send the request/response between the manager/load-balancer and the PhantomJS instances. Originally I was using PhantomJS's built-in HTTP server, but I kept running into its limitations, especially regarding maximum response size. I ended up writing the request/response to the local file-system as the lines of communication. Total time spent on implementation of the service represents perhaps 20 man-weeks (roughly 1000 hours) of work. And FYI, I am doing a complete rewrite for the next version... (in progress)
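To make the file-based hand-off concrete, here is a minimal sketch of what the manager side of such an exchange could look like (this is not the actual service's code; the directory and file names are made up for illustration):

// The manager writes a request file; a PhantomJS worker (not shown) polls the
// directory, renders the URL, and writes a matching response file back.
var fs = require('fs');
var path = require('path');

var EXCHANGE_DIR = '/tmp/phantom-exchange'; // hypothetical location

function submitJob(id, url) {
    var requestFile = path.join(EXCHANGE_DIR, 'request-' + id + '.json');
    fs.writeFileSync(requestFile, JSON.stringify({ id: id, url: url }));
}

function pollForResponse(id, callback) {
    var responseFile = path.join(EXCHANGE_DIR, 'response-' + id + '.json');
    var timer = setInterval(function () {
        if (fs.existsSync(responseFile)) {
            clearInterval(timer);
            callback(null, JSON.parse(fs.readFileSync(responseFile, 'utf8')));
        }
    }, 100);
}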

answered by JasonS


The async JavaScript library works in Node and has a queue function that is quite handy for this kind of thing:

queue(worker, concurrency)

Creates a queue object with the specified concurrency. Tasks added to the queue will be processed in parallel (up to the concurrency limit). If all workers are in progress, the task is queued until one is available. Once a worker has completed a task, the task's callback is called.

Some pseudocode:

var async = require('async');

function getSourceViaPhantomJs(url, callback) {
  // placeholder: however you invoke PhantomJS and capture the rendered HTML
  var resultingHtml = someMagicPhantomJsStuff(url);
  callback(null, resultingHtml);
}

var q = async.queue(function (task, callback) {
  // delegate to a function that should call callback when it's done
  // with (err, resultingHtml) as parameters
  getSourceViaPhantomJs(task.url, callback);
}, 5); // up to 5 PhantomJS calls at a time

app.get('/some/url', function(req, res) {
  q.push({url: req.query.url_to_scrape}, function (err, results) {
    res.end(results);
  });
});

Check out the entire documentation for queue at the project's readme.

answered by Michelle Tilley


For my master's thesis, I developed the library phantomjs-pool, which does exactly this. It allows you to provide jobs, which are then mapped to PhantomJS workers. The library handles the job distribution, communication, error handling, logging, restarting and some more stuff. The library was successfully used to crawl more than one million pages.

Example:

The following code executes a Google search for the numbers 0 to 9 and saves a screenshot of the page as googleX.png. Four websites are crawled in parallel (due to the creation of four workers). The script is started via node master.js.

master.js (runs in the Node.js environment)

var Pool = require('phantomjs-pool').Pool;

var pool = new Pool({ // create a pool
    numWorkers : 4,   // with 4 workers
    jobCallback : jobCallback,
    workerFile : __dirname + '/worker.js', // location of the worker file
    phantomjsBinary : __dirname + '/path/to/phantomjs_binary' // either provide the location of the binary or install phantomjs or phantomjs2 (via npm)
});
pool.start();

function jobCallback(job, worker, index) { // called to create a single job
    if (index < 10) { // index is counted up automatically for each job
        job(index, function(err) { // create the job with index as data
            console.log('DONE: ' + index); // log that the job was done
        });
    } else {
        job(null); // no more jobs
    }
}

worker.js (runs in the PhantomJS environment)

var webpage = require('webpage');

module.exports = function(data, done, worker) { // data provided by the master
    var page = webpage.create();

    // search for the given data (which contains the index number) and save a screenshot
    page.open('https://www.google.com/search?q=' + data, function() {
        page.render('google' + data + '.png');
        done(); // signal that the job was executed
    });

};

answered by Thomas Dondorf


As an alternative to @JasonS's great answer you can try PhearJS, which I built. PhearJS is a supervisor written in NodeJS for PhantomJS instances and provides an API via HTTP. It is available open-source on GitHub.

answered by TTT