We are using Protractor for testing internal AngularJS applications. Besides the functional tests, we check for performance regressions with the help of protractor-perf, which is based on the Node.js browser-perf library. After all, "performance is a feature".
With protractor-perf, we can measure and assert various performance characteristics while performing browser actions. For example:
browser.get('http://www.angularjs.org');
perf.start(); // Start measuring the metrics
element(by.model('todoText')).sendKeys('write a protractor test');
element(by.css('[value="add"]')).click();
perf.stop(); // Stop measuring the metrics
if (perf.isEnabled) { // Is perf measuring enabled?
    // Check for perf regressions, just like you check for functional regressions
    expect(perf.getStats('meanFrameTime')).toBeLessThan(60);
}
Now, for another internal application, we have a set of Selenium-based tests written in Python.
Is it possible to check for performance regressions with selenium-python, or should I rewrite the tests in Protractor to be able to write browser performance tests?
Performance testing using Selenium and WebDriver is generally not advised: not because they are incapable, but because they are not optimised for the job, and you are unlikely to get good results. Selenium is built for automating functional tests across browsers (Chrome, Firefox, Safari, Internet Explorer, and so on), not for measuring how those browsers perform.
That said, you can get closer to what browser-perf does by collecting Chrome's performance logs and analyzing them. To get them, turn on the performance log via the loggingPrefs desired capability:
import json

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

caps = DesiredCapabilities.CHROME
caps['loggingPrefs'] = {'performance': 'ALL'}

driver = webdriver.Chrome(desired_capabilities=caps)
driver.get('https://stackoverflow.com')

# Each performance-log entry wraps a devtools message in a JSON string
logs = [json.loads(log['message'])['message'] for log in driver.get_log('performance')]

with open('devtools.json', 'w') as f:
    json.dump(logs, f)

driver.quit()
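Note that in Selenium 4 the `desired_capabilities` argument was removed, and recent ChromeDriver versions expect the namespaced `goog:loggingPrefs` capability instead. Below is a sketch of the equivalent setup under those assumptions; the `parse_perf_log` and `collect_devtools_log` helpers are illustrative names, and the parsing is split out so it can be used without a browser:

```python
import json

def parse_perf_log(raw_entries):
    """Unwrap the devtools messages nested inside raw performance-log entries."""
    return [json.loads(entry['message'])['message'] for entry in raw_entries]

def collect_devtools_log(url, path='devtools.json'):
    """Drive Chrome with performance logging enabled and dump the trace to a file."""
    # Imported here so parse_perf_log stays usable without Selenium installed.
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    options = Options()
    # Recent ChromeDriver versions expect the namespaced capability name.
    options.set_capability('goog:loggingPrefs', {'performance': 'ALL'})

    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)
        with open(path, 'w') as f:
            json.dump(parse_perf_log(driver.get_log('performance')), f)
    finally:
        driver.quit()
```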
At this point, the devtools.json file contains a bunch of trace records:
[
    {
        "params": {
            "timestamp": 1419571233.19293,
            "frameId": "16639.1",
            "requestId": "16639.1",
            "loaderId": "16639.2",
            "type": "Document",
            "response": {
                "mimeType": "text/plain",
                "status": 200,
                "fromServiceWorker": false,
                "encodedDataLength": -1,
                "headers": {
                    "Access-Control-Allow-Origin": "*",
                    "Content-Type": "text/plain;charset=US-ASCII"
                },
                "url": "data:,",
                "statusText": "OK",
                "connectionId": 0,
                "connectionReused": false,
                "fromDiskCache": false
            }
        },
        "method": "Network.responseReceived"
    },
    {
        "params": {
            "timestamp": 1419571233.19294,
            "encodedDataLength": 0,
            "requestId": "16639.1"
        },
        "method": "Network.loadingFinished"
    },
    ..
]
Now, the question is, what to do with it.
One option, initially suggested at the Google Test Automation Conference, is to submit the logs to webpagetest.org. There is an example in Java available here, but so far I have had no luck implementing it in Python.
In theory, webpagetest.org would generate a visual UI report from the submitted logs. It also provides the metrics in JSON/XML and other formats that can be analyzed further.
Thanks to Vivek Singh for pointing this out in the comments.
browser-perf also uses the logging functionality to pick up the tracing logs, and analyzes the data.
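As a minimal illustration of analyzing those logs ourselves (a sketch, not what browser-perf actually computes), we can pair `Network.responseReceived` and `Network.loadingFinished` events by `requestId` to get a rough per-request load time:

```python
def request_durations(events):
    """Pair Network.responseReceived / Network.loadingFinished devtools events
    by requestId and return each request's elapsed time in seconds."""
    started, durations = {}, {}
    for event in events:
        params = event.get('params', {})
        request_id = params.get('requestId')
        if request_id is None:
            continue
        if event.get('method') == 'Network.responseReceived':
            started[request_id] = params['timestamp']
        elif event.get('method') == 'Network.loadingFinished' and request_id in started:
            durations[request_id] = params['timestamp'] - started[request_id]
    return durations

# With the devtools.json dumped earlier, you would pass json.load(open('devtools.json'));
# here we feed it the two trace records shown above:
durations = request_durations([
    {'method': 'Network.responseReceived',
     'params': {'requestId': '16639.1', 'timestamp': 1419571233.19293}},
    {'method': 'Network.loadingFinished',
     'params': {'requestId': '16639.1', 'timestamp': 1419571233.19294}},
])
```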
It is possible to do performance regression testing with Selenium. However, as you might have already noted, the core essence of Selenium is that it mimics user behavior: it will only perform an action (e.g. clicking a button) if a real user could perform the same action. You also have to account for the workarounds (hard waits, various checks, and custom code) that are often required just to get a Selenium script running. This means that the "definition" of performance testing with Selenium differs slightly from traditional performance testing.
What you will want to do is put a timer (start/stop) around each action Selenium performs, for example clicking a button, and log the result to a file for later use. Using Selenium you can then build a performance baseline and compare each subsequent run against it, which gives you statistics for further analysis.
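A minimal sketch of that start/stop timer and baseline comparison; the `timed` and `regressions` helpers and the 20% tolerance are illustrative choices, not a standard API:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(results, action_name):
    """Record how long one scripted action takes (wall-clock seconds)."""
    start = time.perf_counter()
    try:
        yield
    finally:
        results[action_name] = time.perf_counter() - start

def regressions(results, baseline, tolerance=0.2):
    """Names of actions more than `tolerance` (default 20%) slower than baseline."""
    return [name for name, seconds in results.items()
            if name in baseline and seconds > baseline[name] * (1 + tolerance)]

# Usage around hypothetical Selenium calls:
#   results = {}
#   with timed(results, 'click_add'):
#       driver.find_element(By.CSS_SELECTOR, '[value="add"]').click()
#   assert not regressions(results, baseline_loaded_from_file)
```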
Neither Selenium RC nor WebDriver (Selenium 2.0) comes with this feature out of the box, so some custom coding is needed to make it work.
Performance testing through Selenium is not advised because it is not optimised for the job. It has been listed as one of the worst practices by the Selenium team:
It may seem ideal to performance test in the context of the user but a suite of WebDriver tests are subjected to many points of external and internal fragility which are beyond your control; for example browser startup speed, speed of HTTP servers, response of third party servers that host JavaScript or CSS, and the instrumentation penalty of the WebDriver implementation itself. Variation at these points will cause variation in your results. It is difficult to separate the difference between the performance of your website and the performance of external resources, and it is also hard to tell what the performance penalty is for using WebDriver in the browser, especially if you are injecting scripts.