I am using Google Soy templates and have developed both server-side and client-side rendering solutions. I want to benchmark them with performance tests. When benchmarking the client-side part, I want to wait until all the JavaScript actions have completed before measuring the actual response time.
I tried the approach below, but it does not solve my problem.
Is there another framework I can use to do both load testing and page scraping?
Postman is an easy, lightweight, and user-friendly platform generally associated with API testing and development. And since you already have your collections created and updated daily, you can run basic performance tests in a couple of minutes.
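One way to script this is with newman, Postman's command-line collection runner, which also has a Node API. A minimal sketch (the collection path is a placeholder for your own exported collection, and I am assuming newman's `iterationCount` option and summary timings here — check the newman docs for your version):

```javascript
// Sketch: run an existing Postman collection repeatedly via newman
// (npm install newman). "collection.json" is a placeholder.
const newman = require('newman');

newman.run({
    collection: require('./collection.json'),
    iterationCount: 30,   // repeat the whole collection 30 times
    reporters: 'cli'      // print per-request response times
}, function (err, summary) {
    if (err) throw err;
    // summary.run.timings aggregates response times across iterations
    console.log('average response time (ms): ' +
        summary.run.timings.responseAverage);
});
```

This gives you request-level timings, but note it measures HTTP responses only — it will not wait for client-side JavaScript to finish rendering.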
Go to the “bin” folder and locate the phantomjs.exe file. If you are using it on Windows, add that folder to the PATH environment variable for quick access from the command prompt. The command to run a PhantomJS program is: C:\> phantomjs [options] file.
“CasperJS is an open source navigation scripting & testing utility written in Javascript for the PhantomJS WebKit headless browser and SlimerJS (Gecko). It eases the process of defining a full navigation scenario and provides useful high-level functions, methods & syntactic sugar for doing common tasks”
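To make that concrete, here is a minimal CasperJS scenario sketch (run with `casperjs scenario.js`, not plain node; the URL and selector are placeholders). `waitForSelector` polls until client-side rendering has produced the element, which approximates "all JavaScript actions are done":

```javascript
// Sketch: wait for client-side rendering to finish, then report timing.
// Run under the casperjs runtime; URL and selector are placeholders.
var casper = require('casper').create();
var start = Date.now();

casper.start('http://example.com/', function () {
    this.waitForSelector('#rendered-content', function () {
        this.echo('rendered after ' + (Date.now() - start) + ' ms');
    });
});

casper.run();
```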
You can do that with PhantomJS (and SlimerJS): just create a new page instance for each request. The script below is a full example. (Warning: the output gets quite verbose if your page requests lots of other resources.) The number on the left of each log line is milliseconds since the script started.
On my machine example.com points to localhost, and the bottleneck was Apache. E.g. with N=30 the first run took about 5 seconds; running it again immediately took 0.75 seconds (because enough Apache instances had already been spun up). With N=100 it took about 12 seconds and created a huge load on my poor notebook.
This was enough to prove to me that the browser's 6-connection limit was not being hit, and all 100 connections were genuinely running at the same time. If that is still not parallel enough for you, use a bash script to start, say, 8 instances of PhantomJS (assuming you have 8 cores). NOTE: all page instances share a single browser cache, so I see only one request for jQuery, for instance.
The exact same script runs on SlimerJS, but the behaviour is quite different. Starting each instance seems to carry more overhead, and, more importantly, each instance has its own disk cache. So my test run made 30 requests to the Google CDN for jQuery!
(Asking if PhantomJS can be configured to not share cache, or if SlimerJS can, should probably be another StackOverflow question, as I don't know off-hand.)
/**
 * Opens N instances of URL in parallel.
 */
var url = "http://example.com/";
var N = 30;
var cnt = 0;

function onResourceReceived(response) {
    console.log((Date.now() - startTime) + ':' + response.stage + ':' + response.url);
}

function onResourceRequested(requestData, networkRequest) {
    console.log((Date.now() - startTime) + ':Request:' + requestData.url);
}

function onCompletion(status) {
    ++cnt;
    console.log((Date.now() - startTime) + ':COMPLETE(' + cnt + '):' + status + ':' + this.url);
    if (cnt >= N) phantom.exit();
}

var startTime = Date.now();
for (var i = 0; i < N; i++) {
    var page = require('webpage').create();
    page.onResourceReceived = onResourceReceived;
    page.onResourceRequested = onResourceRequested;
    page.open(url + "?i=" + i, onCompletion); // append i so each request can be tracked
}