 

How to skip the rest of tests in the class if one has failed?

I'm creating test cases for web tests using Jenkins, Python, Selenium 2 (WebDriver) and the py.test framework.

So far I'm organizing my tests in the following structure:

each class is a Test Case and each test_ method is a Test Step.
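
For illustration, here's a hypothetical sketch of that layout (the class, step and page names are made up): one browser per Test Case, one test_ method per Test Step, and teardown_class() for cleanup.

from selenium import webdriver

class TestUserRegistration:
    @classmethod
    def setup_class(cls):
        # one browser instance per Test Case
        cls.driver = webdriver.Firefox()

    @classmethod
    def teardown_class(cls):
        # always quit the browser, even if a step crashed
        cls.driver.quit()

    def test_step_open_signup_page(self):
        self.driver.get("http://example.com/signup")

    def test_step_submit_form(self):
        self.driver.find_element_by_id("submit").click()

    def test_step_check_confirmation(self):
        assert "Thank you" in self.driver.page_source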

This setup works GREAT when everything is working fine; however, when one step crashes, the rest of the "Test Steps" go crazy. I'm able to contain the failure inside the class (Test Case) with the help of teardown_class(), but I'm looking into how to improve this.

What I need is to somehow skip (or xfail) the rest of the test_ methods within a class if one of them has failed, so that the remaining steps are not run and marked as FAILED (since that would be a false positive).

Thanks!

UPDATE: I'm not looking for the answer "it's bad practice", since calling it that is very arguable (each test class is independent, and that should be enough).

UPDATE 2: Putting an "if" condition in each test method is not an option: it's a LOT of repeated work. What I'm looking for is (maybe) somebody who knows how to use hooks on the class's test methods.

asked Sep 13 '12 by Alex Okrushko

People also ask

How do you skip the tests if the prior test case is failed?

In TestNG, the @Test(enabled=false) annotation is used to skip a test case that is not ready to run; no additional imports are needed. A particular test can also be skipped by throwing TestNG's SkipException.

How can we skip a test case conditionally?

Use throw new SkipException(String message) to skip a test. For a conditional skip, add a check inside the test: if the condition is met, throw SkipException and the rest of the test code is skipped.
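
For comparison, in pytest (the framework used in the question above) a conditional skip can be written with the skipif marker or an imperative pytest.skip() call; a minimal sketch, where backend_is_reachable() is a made-up placeholder helper:

import sys
import pytest

def backend_is_reachable():
    # placeholder check; a real suite would actually probe the backend
    return False

@pytest.mark.skipif(sys.platform == "win32",
                    reason="feature not supported on Windows")
def test_unix_only_feature():
    assert True

def test_with_runtime_check():
    if not backend_is_reachable():
        pytest.skip("backend not reachable")
    assert True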

How do you skip failed test cases in TestNG?

Use the parameter enabled=false on @Test (by default it is set to true), or use throw new SkipException(String message) to skip a test.

What marker can be used to skip a test case?

The simplest way to skip a test function is to mark it with the skip decorator, which may be passed an optional reason: @pytest.mark.skip(reason=...).
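
A minimal example of that marker (the reason string is just illustrative):

import pytest

@pytest.mark.skip(reason="no way of currently testing this")
def test_unsupported_feature():
    assert True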


1 Answer

I like the general "test-step" idea. I'd term it as "incremental" testing and it makes most sense in functional testing scenarios IMHO.

Here is an implementation that doesn't depend on internal details of pytest (except for the official hook extensions). Copy this into your conftest.py:

import pytest

def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        if call.excinfo is not None:
            parent = item.parent
            parent._previousfailed = item

def pytest_runtest_setup(item):
    previousfailed = getattr(item.parent, "_previousfailed", None)
    if previousfailed is not None:
        pytest.xfail("previous test failed (%s)" % previousfailed.name)

If you now have a "test_step.py" like this:

import pytest

@pytest.mark.incremental
class TestUserHandling:
    def test_login(self):
        pass
    def test_modification(self):
        assert 0
    def test_deletion(self):
        pass

then running it looks like this (using -rx to report on xfail reasons):

(1)hpk@t2:~/p/pytest/doc/en/example/teststep$ py.test -rx
============================= test session starts ==============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev17
plugins: xdist, bugzilla, cache, oejskit, cli, pep8, cov, timeout
collected 3 items

test_step.py .Fx

=================================== FAILURES ===================================
______________________ TestUserHandling.test_modification ______________________

self = <test_step.TestUserHandling instance at 0x1e0d9e0>

    def test_modification(self):
>       assert 0
E       assert 0

test_step.py:8: AssertionError
=========================== short test summary info ============================
XFAIL test_step.py::TestUserHandling::()::test_deletion
  reason: previous test failed (test_modification)
================ 1 failed, 1 passed, 1 xfailed in 0.02 seconds =================

I am using "xfail" here because skips are rather meant for wrong environments, missing dependencies, or wrong interpreter versions.
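
To illustrate that distinction (this snippet is not part of the original answer):

import pytest

def test_optional_dependency():
    # skip: the environment/dependency/interpreter is not suitable
    pytest.skip("optional dependency not installed")

def test_known_bug():
    # xfail: the functionality itself is expected to fail for now
    pytest.xfail("known bug, fix pending")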

Edit: Note that neither your example nor mine would directly work with distributed testing. For this, the pytest-xdist plugin would need to grow a way to define groups/classes to be sent wholesale to one testing slave, instead of the current mode, which usually sends the test functions of a class to different slaves.

answered Sep 25 '22 by hpk42