I am currently writing tests for a medium-sized library (~300 files). Many classes in this library share the same testing scheme, which was coded using pytest:
File test_for_class_a.py:
import pytest

@pytest.fixture()
def setup_resource_1():
    ...

@pytest.fixture()
def setup_resource_2():
    ...

@pytest.fixture()
def setup_class_a(setup_resource_1, setup_resource_2):
    ...

def test_1_for_class_a(setup_class_a):
    ...

def test_2_for_class_a(setup_class_a):
    ...
Similar files exist for class_b, class_c, etc.; the only difference is the content of setup_resource_1 and setup_resource_2.
Now I would like to re-use the fixtures setup_class_a, setup_class_b and setup_class_c (defined in test_for_class_a.py, test_for_class_b.py and test_for_class_c.py) to run tests on them.
In a file test_all_class.py, this works but it is limited to one fixture per test:
from test_for_class_a import *

@pytest.mark.usefixtures('setup_class_a')  # Fixture was defined in test_for_class_a.py
def test_some_things_on_class_a(request):
    ...
But I am looking for a way to perform something more general:
from test_for_class_a import *
from test_for_class_b import *  # I can make sure I have no collision here
from test_for_class_c import *  # I can make sure I have no collision here

==> @generate_test_for_fixture('setup_class_a', 'setup_class_b', 'setup_class_c')
def test_some_things_on_all_classes(request):
    ...
Is there any way to do something close to that? I have been looking at factories of factories and abstract pytest factories, but I am struggling with the way pytest defines fixtures. Is there any way to solve this problem?
One solution I found is to abuse the test cases as follows:
from test_for_class_a import *
from test_for_class_b import *
from test_for_class_c import *

list_of_all_fixtures = []

# This will force pytest to generate all sub-fixtures for class a
@pytest.mark.usefixtures('setup_class_a')
def test_register_class_a_fixtures(setup_class_a):
    list_of_all_fixtures.append(setup_class_a)

# This will force pytest to generate all sub-fixtures for class b
@pytest.mark.usefixtures('setup_class_b')
def test_register_class_b_fixtures(setup_class_b):
    list_of_all_fixtures.append(setup_class_b)

# This will force pytest to generate all sub-fixtures for class c
@pytest.mark.usefixtures('setup_class_c')
def test_register_class_c_fixtures(setup_class_c):
    list_of_all_fixtures.append(setup_class_c)

# This is the real test to apply on all fixtures
def test_all_fixtures():
    for my_fixture in list_of_all_fixtures:
        # do something with my_fixture
        ...
This implicitly relies on the fact that test_all_fixtures is executed after all the test_register_class_* tests. It is obviously quite dirty, but it works...
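For comparison, a less order-dependent variant of the same idea is to parametrize a single test over the fixture names and resolve each one with request.getfixturevalue. This is just a minimal sketch (it assumes the fixtures are importable from the test modules or available via conftest.py), not something from the code above:

import pytest

from test_for_class_a import *  # fixtures must be visible in this module (or in conftest.py)
from test_for_class_b import *
from test_for_class_c import *

# One test, run once per fixture name; getfixturevalue instantiates the fixture
# (and all of its sub-fixtures) on demand inside the test body.
@pytest.mark.parametrize('fixture_name', ['setup_class_a', 'setup_class_b', 'setup_class_c'])
def test_some_things_on_all_classes(request, fixture_name):
    instance = request.getfixturevalue(fixture_name)
    # do something with instance
    assert instance is not None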
We had the same problem at work, and I was hoping to write each fixture just once for every case. So I wrote the plugin pytest-data, which does exactly that. Example:
import pytest
from pytest_data import get_data, use_data

@pytest.fixture
def resource(request):
    resource_data = get_data(request, 'resource_data', {'some': 'data', 'foo': 'foo'})
    return Resource(resource_data)

@use_data(resource_data={'foo': 'bar'})
def test_1_for_class_a(resource):
    ...

@use_data(resource_data={'foo': 'baz'})
def test_2_for_class_a(resource):
    ...
What's great about it is that you write the fixture just once, with some defaults. When you just need that fixture/resource and don't care about the specific setup, you simply use it. When a test needs some specific attribute, say to check whether the resource can also handle a 100-character-long value, you pass it via the use_data decorator instead of writing another fixture.
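For that 100-character case, a sketch reusing the resource fixture above could look like this (the asserted attribute is hypothetical and depends on your own Resource class):

@use_data(resource_data={'some': 'x' * 100})
def test_resource_handles_long_value(resource):
    # 'data' is an illustrative attribute name, not part of pytest-data
    assert len(resource.data['some']) == 100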
With that, you don't have to care about conflicts, because everything is defined just once. You can then keep all of your fixtures in conftest.py without importing them in the test modules. For example, we keep a separate, deeper module with all the fixtures and include them all in the top-level conftest.py.
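A minimal sketch of that layout (the module, import path and class names here are illustrative, not part of the plugin):

# fixtures_module.py -- all shared fixtures collected in one place
import pytest
from pytest_data import get_data

from mylib import Resource  # your library's class; illustrative import path

@pytest.fixture
def resource(request):
    # Defaults here can be overridden per test with @use_data(resource_data={...})
    return Resource(get_data(request, 'resource_data', {'some': 'data', 'foo': 'foo'}))

# conftest.py at the top of the test tree -- importing the fixtures here makes
# them available to every test module without any explicit imports
from fixtures_module import *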
Documentation of the pytest-data plugin: http://horejsek.github.io/python-pytest-data/