
Selenium Automation: What should be the acceptable range of failed test cases, apart from the valid failures, when running a test suite?

Our company is developing a framework with Selenium, POM, Maven, and Java for a web application, and we have around 35 test cases. When we run testng.xml, at least 4 to 5 test cases fail randomly because of a stale element exception, an element not being clickable at a point, etc.

Is it common for some test cases to fail when we run testng.xml? How many test cases are run in your organization, and roughly how many of them fail?

BretA, asked Jan 09 '19

3 Answers

You just need to add a wait before your driver.findElement(). Selenium works very fast, and that's why you are getting these stale element or element not visible exceptions. Adding a wait should solve the problem.
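For illustration, here is a minimal sketch of such an explicit wait. It assumes Selenium 4's Duration-based constructor (Selenium 3 takes a plain long number of seconds instead); the locator and the 10-second timeout are placeholders, not values from the question:

    import java.time.Duration;

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class WaitExample {
        // Poll the DOM until the element is clickable instead of failing
        // immediately when it is not yet attached or interactable.
        public WebElement clickWhenReady(WebDriver driver, By locator) {
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
            WebElement element = wait.until(ExpectedConditions.elementToBeClickable(locator));
            element.click();
            return element;
        }
    }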

Dish, answered Oct 23 '22

Test Automation is related to the repeatability of the tests and the speed at which the tests can be executed. There are a number of commercial and open-source tools available for assisting with the development of Test Automation, and Selenium is possibly the most widely used open-source solution among them.

Acceptable range of failed tests

This metric may vary from organization to organization, or as per specific Client Requirements; however, the Exit Criteria must hold the key to this limit. Having said that, as Test Automation through Selenium automates the Regression Tests, ideally there should be zero failures. I have known some organizations adhering to a Zero Defect policy.

Errors you are facing

  • StaleElementReferenceException: Indicates that a reference to an element is now "stale", i.e. the element no longer appears in the DOM of the page (a brief retry sketch follows this list).
  • You can find detailed discussion about StaleElementReferenceException in:
    • StaleElementReference Exception in PageFactory
    • Selenium assertFalse fails with staleelementreferenceexception
    • Selenium Webdriver - Stale element exception when clicking on multiple dropdowns. DOM dint change
  • ElementClickInterceptedException: Indicates that a click could not be properly executed because the target element was obscured in some way.
  • You can find detailed discussion about ElementClickInterceptedException in:
    • Selenium Web Driver & Java. Element is not clickable at point (x, y). Other element would receive the click
    • Selenium WebDriver throws Exception in thread “main” org.openqa.selenium.ElementNotInteractableException
    • Selenium : How to solve org.openqa.selenium.InvalidElementStateException: invalid element state
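As mentioned above, here is a hedged sketch of one common mitigation for StaleElementReferenceException: re-locate the element and retry the action a bounded number of times. The class name, method name, and retry parameter are illustrative assumptions, not taken from the discussions linked above:

    import org.openqa.selenium.By;
    import org.openqa.selenium.StaleElementReferenceException;
    import org.openqa.selenium.WebDriver;

    public class RetryingActions {
        // Click an element, re-locating it on every attempt so a DOM refresh
        // between attempts cannot leave us holding a stale reference.
        public void clickWithRetry(WebDriver driver, By locator, int maxAttempts) {
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    driver.findElement(locator).click();
                    return;
                } catch (StaleElementReferenceException e) {
                    if (attempt == maxAttempts) {
                        throw e; // give up after the last attempt
                    }
                }
            }
        }
    }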

Conclusion

So the errors you mentioned are not errors as such, but may arise during Test Execution due to the following reasons:

  • A compatibility mismatch between the versions of the binaries you are using (see the pom.xml sketch after this list).
  • A synchronization mismatch between the WebDriver instance and the web browser instance.
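As referenced in the first point, here is a sketch of keeping the client binaries in step, assuming the project declares its Selenium dependency in pom.xml. The artifact coordinates are the standard ones; the version shown is only an example and should be matched against your browser and driver binaries:

    <!-- Illustrative pom.xml fragment: pin the Selenium client to a known
         version so it stays in step with the browser and driver binaries.
         The version number below is an example, not a recommendation. -->
    <dependency>
        <groupId>org.seleniumhq.selenium</groupId>
        <artifactId>selenium-java</artifactId>
        <version>3.141.59</version>
    </dependency>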

These errors can be addressed easily by following the best practices mentioned above.

undetected Selenium, answered Oct 23 '22


Failures due to stale elements, elements not clickable at a point, timing issues, etc. must be handled and dealt with in your automation framework, i.e. in the methods you create and use to construct the cases.
They should not propagate and lead to case failures: they are technical issues, not product or test case problems. As such they must be accounted for (with try/catch blocks, for example) and dealt with promptly (with retry mechanisms, or by re-getting a web element).
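A sketch of what such framework-level handling could look like; the class name, the set of caught exceptions, the retry count, and the pause length are assumptions for illustration, not this answer's actual code:

    import java.util.function.Supplier;

    import org.openqa.selenium.ElementClickInterceptedException;
    import org.openqa.selenium.StaleElementReferenceException;

    public class FrameworkGuard {
        // Run an action, absorbing known technical hiccups with bounded retries
        // so they never surface as test case failures.
        public static <T> T withRetries(Supplier<T> action, int maxAttempts) {
            RuntimeException last = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return action.get();
                } catch (StaleElementReferenceException | ElementClickInterceptedException e) {
                    last = e; // a tech issue, not a product problem: retry
                    try {
                        Thread.sleep(500); // brief pause before the next attempt
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        throw e;
                    }
                }
            }
            throw last;
        }
    }

A page object method could then route any flaky action through it, e.g. withRetries(() -> { driver.findElement(locator).click(); return null; }, 3);, so the case itself never sees the exception.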

In essence, treat these kinds of failures the same way you treat syntax errors: there should be none.


At the same time, and speaking purely from my experience, cases dealing with live/dynamic data may sometimes randomly fail.

For instance, an SUT I'm working on displays some metrics and aggregations based on data and actions outside of my control (live traffic from upstream systems). There are cases checking that a particular generated artifact behaves according to the set expectations (imagine, for instance, a monthly graph that is simply missing a number of data points because there was no activity on those days). Cases for it will fail, not because they were constructed incorrectly, and certainly not because there is a product bug, but because of the combination of the time of execution and the dataset.

Over time I've come to the conclusion that having those failures is OK; getting them "fixed" (reselecting data sets, working around such outside fluctuations, etc.) is an activity with diminishing value and questionable ROI. Out of the current ~10,000 cases for this system, around 1.5% fail because of this (disclaimer: the SUT works exclusively with live/production data).
This is hardly a rule of thumb; it's just a number I've settled on as acceptable given the context.

An important note: if I had full control of this very data, I would have gotten rid of those "random" failures too. I've chosen to use the real data deliberately, so my cases also verify its integrity, with this negative side effect.

Todor Minakov, answered Oct 23 '22