I'm using pytest.mark.parametrize
to feed increasingly long inputs into a rather slow test function, like so:
@pytest.mark.parametrize('data', [
    b'ab',
    b'xyz'*1000,
    b'12345'*1024**2,
    ...  # etc.
])
def test_compression(data):
    ...  # compress the data
    ...  # decompress the data
    assert decompressed_data == data
Because compressing large amounts of data takes a long time, I'd like to skip all the remaining tests after one fails. For example, if the test fails with the input b'ab' (the first one), then b'xyz'*1000, b'12345'*1024**2, and all other parametrizations should be skipped (or xfail without being executed).
I know it's possible to attach marks to individual parametrizations like so:
@pytest.mark.parametrize("test_input,expected", [
("3+5", 8),
("2+4", 6),
pytest.param("6*9", 42, marks=pytest.mark.xfail),
])
But I don't know how I could conditionally apply those marks depending on the status of the previous test case. Is there a way to do this?
Marks are evaluated before the tests are executed, so there is no way to attach a declarative mark (like skipif) that depends on the results of other tests. You can, however, apply custom test-skipping logic in hooks. Modifying the incremental testing - test steps recipe from the pytest docs:
# conftest.py
import pytest

def pytest_sessionstart(session):
    # shared set of base test names (without the parametrization suffix) that have failed
    session.failednames = set()

def pytest_runtest_makereport(item, call):
    # remember the test's base name whenever one of its phases raises
    if call.excinfo is not None:
        item.session.failednames.add(item.originalname)

def pytest_runtest_setup(item):
    # skip every later parametrization of a test that has already failed
    if item.originalname in item.session.failednames:
        pytest.skip("previous test failed (%s)" % item.name)  # or use pytest.xfail like in the other answer
Example test:

@pytest.mark.parametrize('i', range(10))
def test_spam(i):
    assert i != 3
yields (run with -v for per-test output):
=================================== test session starts ===================================
collected 10 items
test_spam.py::test_spam[0] PASSED
test_spam.py::test_spam[1] PASSED
test_spam.py::test_spam[2] PASSED
test_spam.py::test_spam[3] FAILED
test_spam.py::test_spam[4] SKIPPED
test_spam.py::test_spam[5] SKIPPED
test_spam.py::test_spam[6] SKIPPED
test_spam.py::test_spam[7] SKIPPED
test_spam.py::test_spam[8] SKIPPED
test_spam.py::test_spam[9] SKIPPED
========================================= FAILURES ========================================
_______________________________________ test_spam[3] ______________________________________
i = 3
@pytest.mark.parametrize('i', range(10))
def test_spam(i):
> assert i != 3
E assert 3 != 3
test_spam.py:5: AssertionError
====================== 1 failed, 3 passed, 6 skipped in 0.06 seconds ======================
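As the comment in the setup hook suggests, you can report the remaining parametrizations as expected failures instead of skips by calling pytest.xfail imperatively. A minimal sketch of that variant of the setup hook:

def pytest_runtest_setup(item):
    # xfail instead of skip: later parametrizations are reported as xfailed without running
    if item.originalname in item.session.failednames:
        pytest.xfail("previous test failed (%s)" % item.name)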
If you want to opt in per test instead of applying this to the whole session, introduce a custom marker (skiprest here) and check for it in both hooks:

def pytest_runtest_makereport(item, call):
    markers = {marker.name for marker in item.iter_markers()}
    if call.excinfo is not None and 'skiprest' in markers:
        item.session.failednames.add(item.originalname)

def pytest_runtest_setup(item):
    markers = {marker.name for marker in item.iter_markers()}
    if item.originalname in item.session.failednames and 'skiprest' in markers:
        pytest.skip(item.name)
Usage:
@pytest.mark.skiprest
@pytest.mark.parametrize('somearg', ['a', 'b', 'c'])
def test_marked(somearg):
    ...
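Note that pytest warns about unknown marks (and rejects them under --strict-markers), so it is worth registering the custom marker, for example in pytest.ini:

# pytest.ini
[pytest]
markers =
    skiprest: skip the remaining parametrizations of a test after the first failure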