I currently have a project and tests similar to these.
class mylib:
    @classmethod
    def get_a(cls):
        return 'a'

    @classmethod
    def convert_a_to_b(cls, a):
        return 'b'

    @classmethod
    def works_with(cls, a, b):
        return True

class TestMyStuff(object):
    def test_first(self):
        self.a = mylib.get_a()

    def test_conversion(self):
        self.b = mylib.convert_a_to_b(self.a)

    def test_a_works_with_b(self):
        assert mylib.works_with(self.a, self.b)
With py.test 0.9.2, these tests (or similar ones) pass. With later versions of py.test, test_conversion and test_a_works_with_b fail with 'TestMyStuff has no attribute a'.
I am guessing this is because with later builds of py.test, a separate instance of TestMyStuff is created for each method that is tested.
What is the proper way to write these tests such that results can be given for each of the steps in the sequence, but the state from a previous (successful) test can (must) be used to perform subsequent tests?
A side note: pytest can also run tests in parallel via the pytest-xdist plugin; -n <num> runs the tests with multiple workers (e.g. 3). There is little time saved when there are only a few tests to run, and more importantly, parallel or reordered execution will break any tests that depend on state left behind by earlier tests.
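For reference, installing and invoking the plugin looks like this (pytest-xdist and the -n flag are its real interface):

    pip install pytest-xdist
    py.test -n 3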
Some background on the pytest machinery involved: conftest.py is where you put shared test configuration and the fixtures used by your test functions. Autouse fixtures (fixtures you don't have to request) are a convenient way to make all tests automatically depend on a fixture. Also note that pytest only caches one instance of a fixture at a time, which means that when using a parametrized fixture, pytest may invoke the fixture more than once in the given scope.
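As a small illustration (the fixture name here is made up; @pytest.fixture(autouse=True) is the real decorator on pytest versions that have the fixture system), an autouse fixture in conftest.py could look like:

    # conftest.py
    import pytest

    @pytest.fixture(autouse=True)
    def clean_state():
        # setup code runs before each test, without the test requesting it
        yield
        # teardown code runs here after each test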
I partly agree with Ned in that it's good to avoid somewhat random sharing of test state. But I also think it is sometimes useful to accumulate state incrementally during tests.
With py.test you can actually do that by making it explicit that you want to share test state. Your example, rewritten to work:
class State:
    """ holding (incremental) test state """

def pytest_funcarg__state(request):
    return request.cached_setup(
        setup=lambda: State(),
        scope="module"
    )
class mylib:
    @classmethod
    def get_a(cls):
        return 'a'

    @classmethod
    def convert_a_to_b(cls, a):
        return 'b'

    @classmethod
    def works_with(cls, a, b):
        return True

class TestMyStuff(object):
    def test_first(self, state):
        state.a = mylib.get_a()

    def test_conversion(self, state):
        state.b = mylib.convert_a_to_b(state.a)

    def test_a_works_with_b(self, state):
        assert mylib.works_with(state.a, state.b)
You can run this with recent py.test versions. Each test function receives a "state" object; the "funcarg" factory creates it on first use and caches it for the module scope. Together with the py.test guarantee that tests are run in file order, the test functions can be rather sure they will work incrementally on the test "state".
However, it is a bit fragile: if you select only the running of "test_conversion", e.g. via "py.test -k test_conversion", the test will fail because the first test has not run and state.a was never set. I think some way to do incremental tests would be nice, so maybe we can eventually find a totally robust solution.
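On newer pytest versions, where the pytest_funcarg__ prefix and request.cached_setup are gone, the same idea can be expressed with a module-scoped fixture. A minimal sketch, assuming a pytest with the @pytest.fixture decorator:

    import pytest

    class State:
        """ holding (incremental) test state """

    @pytest.fixture(scope="module")
    def state():
        # created once per module; every test that requests "state" shares it
        return State()

The test class stays exactly as above: each test simply takes a state argument.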
HTH, holger
Good unit test practice is to avoid state accumulated across tests. Most unit test frameworks go to great lengths to prevent you from accumulating state. The reason is that you want each test to stand on its own. This lets you run arbitrary subsets of your tests, and ensures that your system is in a clean state for each test.
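Following that advice with the example from the question, one sketch is to collapse the dependent steps into a single self-contained test, so it still passes under -k selection or reordering:

    class TestMyStuff(object):
        def test_get_convert_and_check(self):
            # every step runs inside one test, so no state leaks between tests
            a = mylib.get_a()
            b = mylib.convert_a_to_b(a)
            assert mylib.works_with(a, b)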