Most test frameworks assume that "1 test = 1 Python method/function", and consider a test to have passed when the function executes without any assertion failing.
I'm testing a compiler-like program (a program that reads *.foo files and processes their contents), for which I want to execute the same test on many input (*.foo) files. In other words, my test looks like:
class Test(unittest.TestCase):
    def one_file(self, filename):
        # do the actual test

    def list_testcases(self):
        # essentially os.listdir('tests/') and filter *.foo files.

    def test_all(self):
        for f in self.list_testcases():
            self.one_file(f)
My current code uses unittest from Python's standard library, i.e. one_file uses self.assert...(...) statements to check whether the test passes.
This works, in the sense that I do get a program which succeeds/fails when my code is OK/buggy, but I'm losing a lot of the advantages of the testing framework:
I don't get relevant reporting like "X failures out of Y tests", nor the list of passed/failed tests. (I'm planning to use such a system not only to test my own development but also to grade students' code as a teacher, so reporting is important to me.)
I don't get test independence. The second test runs in the environment left by the first, and so on. The first failure stops the test suite: test cases coming after a failure are not run at all.
I get the feeling that I'm abusing my test framework: there's only one test function, so the automatic test discovery of unittest sounds like overkill, for example. The same code could (should?) be written in plain Python with a basic assert.
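For comparison, a minimal sketch of that plain-script approach (reusing the tests/ directory and the one_file name from above; the body of one_file is a placeholder) might look like:

import glob

def one_file(filename):
    # do the actual test, raising AssertionError on failure
    ...

for f in glob.glob('tests/*.foo'):
    one_file(f)  # the first failing assert aborts the whole run, with no summary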
An obvious alternative is to change my code to something like
class Test(unittest.TestCase):
    def one_file(self, filename):
        # do the actual test

    def test_file1(self):
        self.one_file("first-testcase.foo")

    def test_file2(self):
        self.one_file("second-testcase.foo")
Then I get all the advantages of unittest back, but:
It's a lot more code to write.
It's easy to "forget" a test case, i.e. create a test file in tests/ and forget to add it to the Python test.
I can imagine a solution where I would generate one method per test case dynamically (along the lines of setattr(self, 'test_file' + str(n), ...)), to generate the code for the second solution without having to write it by hand. But that sounds like overkill for a use case which doesn't seem so complex.
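For reference, a rough sketch of that dynamic-generation idea (assuming the tests/*.foo layout from above, and attaching the generated methods to the class rather than to self so that unittest discovers them) could look like:

import glob
import unittest

class Test(unittest.TestCase):
    def one_file(self, filename):
        # do the actual test
        ...

def _make_test(path):
    # bind the current path in a closure to avoid late-binding surprises
    return lambda self: self.one_file(path)

# attach one test_* method per *.foo file, so each file is an independent test case
for n, path in enumerate(sorted(glob.glob('tests/*.foo'))):
    setattr(Test, 'test_file' + str(n), _make_test(path))

if __name__ == '__main__':
    unittest.main()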
How could I get the best of both, i.e. automatic test case discovery (list tests/*.foo files), test independence and proper reporting?
If you can use pytest as your test runner, then this is actually pretty straightforward using the parametrize decorator:
import pytest, glob

all_files = glob.glob('some/path/*.foo')

@pytest.mark.parametrize('filename', all_files)
def test_one_file(filename):
    # do the actual test
This will also automatically name the tests in a useful way, so that you can see which files have failed:
$ py.test
================================== test session starts ===================================
platform darwin -- Python 3.6.1, pytest-3.1.3, py-1.4.34, pluggy-0.4.0
[...]
======================================== FAILURES ========================================
_____________________________ test_one_file[some/path/a.foo] _____________________________

filename = 'some/path/a.foo'

    @pytest.mark.parametrize('filename', all_files)
    def test_one_file(filename):
>       assert False
E       assert False

test_it.py:7: AssertionError
_____________________________ test_one_file[some/path/b.foo] _____________________________

filename = 'some/path/b.foo'

    @pytest.mark.parametrize('filename', all_files)
    def test_one_file(filename):
[...]