I have some kind of test data and want to create a unit test for each item. My first idea was to do it like this:
import unittest

l = [["foo", "a", "a"], ["bar", "a", "b"], ["lee", "b", "b"]]

class TestSequence(unittest.TestCase):
    def testsample(self):
        for name, a, b in l:
            print "test", name
            self.assertEqual(a, b)

if __name__ == '__main__':
    unittest.main()
The downside of this is that it handles all data in one test. I would like to generate one test for each item on the fly. Any suggestions?
This is called "parametrization".
There are several tools that support this approach, for example the parameterized package (pip install parameterized).
The resulting code looks like this:
import unittest

from parameterized import parameterized

class TestSequence(unittest.TestCase):
    @parameterized.expand([
        ["foo", "a", "a"],
        ["bar", "a", "b"],
        ["lee", "b", "b"],
    ])
    def test_sequence(self, name, a, b):
        self.assertEqual(a, b)

if __name__ == '__main__':
    unittest.main()
Which will generate the tests:
test_sequence_0_foo (__main__.TestSequence) ... ok
test_sequence_1_bar (__main__.TestSequence) ... FAIL
test_sequence_2_lee (__main__.TestSequence) ... ok
======================================================================
FAIL: test_sequence_1_bar (__main__.TestSequence)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/parameterized/parameterized.py", line 233, in <lambda>
standalone_func = lambda *a: func(*(a + p.args), **p.kwargs)
File "x.py", line 12, in test_sequence
self.assertEqual(a,b)
AssertionError: 'a' != 'b'
For historical reasons, I'll leave the original answer (circa 2008):
I use something like this:
import unittest

l = [["foo", "a", "a"], ["bar", "a", "b"], ["lee", "b", "b"]]

class TestSequence(unittest.TestCase):
    pass

def test_generator(a, b):
    def test(self):
        self.assertEqual(a, b)
    return test

if __name__ == '__main__':
    for t in l:
        test_name = 'test_%s' % t[0]
        test = test_generator(t[1], t[2])
        setattr(TestSequence, test_name, test)
    unittest.main()
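One caveat: if the generated tests should also be picked up by test discovery (python -m unittest discover), the setattr loop has to run at import time rather than inside the if __name__ == '__main__' block, because discovery only imports the module and never executes that block. A minimal variant (same data and generator as above) might look like this:

import unittest

l = [["foo", "a", "a"], ["bar", "a", "b"], ["lee", "b", "b"]]

class TestSequence(unittest.TestCase):
    pass

def test_generator(a, b):
    def test(self):
        self.assertEqual(a, b)
    return test

# Attach the generated tests at import time so that test discovery
# (e.g. python -m unittest discover) also sees them.
for name, a, b in l:
    setattr(TestSequence, 'test_%s' % name, test_generator(a, b))

if __name__ == '__main__':
    unittest.main()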
Using unittest (since Python 3.4)
Since Python 3.4, the standard library unittest package has the subTest context manager (see the documentation for details).
Example:
from unittest import TestCase

param_list = [('a', 'a'), ('a', 'b'), ('b', 'b')]

class TestDemonstrateSubtest(TestCase):
    def test_works_as_expected(self):
        for p1, p2 in param_list:
            with self.subTest():
                self.assertEqual(p1, p2)
You can also specify a custom message and parameter values to subTest():
with self.subTest(msg="Checking if p1 equals p2", p1=p1, p2=p2):
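Put together, a parametrized version of the question's example using subTest might look like the sketch below (the msg text and the param_list data are only illustrative):

import unittest

param_list = [('foo', 'a', 'a'), ('bar', 'a', 'b'), ('lee', 'b', 'b')]

class TestSequence(unittest.TestCase):
    def test_sequence(self):
        for name, a, b in param_list:
            # Each failure is reported separately, with the supplied
            # parameters in the output, and the loop keeps running.
            with self.subTest(msg="Checking %s" % name, a=a, b=b):
                self.assertEqual(a, b)

if __name__ == '__main__':
    unittest.main()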
Using nose
The nose testing framework supports this.
Example (the code below is the entire contents of the file containing the test):
param_list = [('a', 'a'), ('a', 'b'), ('b', 'b')]

def test_generator():
    for params in param_list:
        yield check_em, params[0], params[1]

def check_em(a, b):
    assert a == b
The output of the nosetests command:
> nosetests -v
testgen.test_generator('a', 'a') ... ok
testgen.test_generator('a', 'b') ... FAIL
testgen.test_generator('b', 'b') ... ok
======================================================================
FAIL: testgen.test_generator('a', 'b')
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.5/site-packages/nose-0.10.1-py2.5.egg/nose/case.py", line 203, in runTest
self.test(*self.arg)
File "testgen.py", line 7, in check_em
assert a == b
AssertionError
----------------------------------------------------------------------
Ran 3 tests in 0.006s
FAILED (failures=1)
This can be solved elegantly using metaclasses:
import unittest

l = [["foo", "a", "a"], ["bar", "a", "b"], ["lee", "b", "b"]]

class TestSequenceMeta(type):
    def __new__(mcs, name, bases, attrs):
        def gen_test(a, b):
            def test(self):
                self.assertEqual(a, b)
            return test

        for tname, a, b in l:
            test_name = "test_%s" % tname
            attrs[test_name] = gen_test(a, b)

        return type.__new__(mcs, name, bases, attrs)

class TestSequence(unittest.TestCase):
    __metaclass__ = TestSequenceMeta

if __name__ == '__main__':
    unittest.main()
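Note that the __metaclass__ class attribute is only honoured on Python 2. On Python 3 the metaclass is passed as a keyword in the class header instead, so the test class would read:

class TestSequence(unittest.TestCase, metaclass=TestSequenceMeta):
    pass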
As of Python 3.4, subtests have been introduced to unittest for this purpose. See the documentation for details. TestCase.subTest is a context manager which allows one to isolate asserts in a test so that a failure will be reported with parameter information, but it does not stop the test execution. Here's the example from the documentation:
import unittest

class NumbersTest(unittest.TestCase):

    def test_even(self):
        """
        Test that numbers between 0 and 5 are all even.
        """
        for i in range(0, 6):
            with self.subTest(i=i):
                self.assertEqual(i % 2, 0)
The output of a test run would be:
======================================================================
FAIL: test_even (__main__.NumbersTest) (i=1)
----------------------------------------------------------------------
Traceback (most recent call last):
File "subtests.py", line 32, in test_even
self.assertEqual(i % 2, 0)
AssertionError: 1 != 0
======================================================================
FAIL: test_even (__main__.NumbersTest) (i=3)
----------------------------------------------------------------------
Traceback (most recent call last):
File "subtests.py", line 32, in test_even
self.assertEqual(i % 2, 0)
AssertionError: 1 != 0
======================================================================
FAIL: test_even (__main__.NumbersTest) (i=5)
----------------------------------------------------------------------
Traceback (most recent call last):
File "subtests.py", line 32, in test_even
self.assertEqual(i % 2, 0)
AssertionError: 1 != 0
This is also part of unittest2, so it is available for earlier versions of Python.
load_tests is a little-known mechanism introduced in Python 2.7 to dynamically create a TestSuite. With it, you can easily create parametrized tests.
For example:
import unittest

class GeneralTestCase(unittest.TestCase):
    def __init__(self, methodName, param1=None, param2=None):
        super(GeneralTestCase, self).__init__(methodName)
        self.param1 = param1
        self.param2 = param2

    def runTest(self):
        pass  # Test that depends on param1 and param2.

def load_tests(loader, tests, pattern):
    test_cases = unittest.TestSuite()
    for p1, p2 in [(1, 2), (3, 4)]:
        test_cases.addTest(GeneralTestCase('runTest', p1, p2))
    return test_cases
That code will run all the TestCases in the TestSuite returned by load_tests. No other tests are automatically run by the discovery mechanism.
Alternatively, you can also use inheritance as shown in this ticket: http://bugs.python.org/msg151444
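For illustration only, a hedged sketch of the inheritance idea (not taken verbatim from the ticket): the base TestCase holds the test logic and reads its parameters from class attributes, and each parameter set becomes a small subclass. The attribute names and the assertion here are made up.

import unittest

class GeneralTestCase(unittest.TestCase):
    # Parameters are supplied by subclasses; the base class has none
    # and skips itself so it does not run as an "empty" test.
    param1 = None
    param2 = None

    def test_something(self):
        if self.param1 is None:
            self.skipTest("parameter-less base class")
        # Illustrative check that depends on param1 and param2.
        self.assertLess(self.param1, self.param2)

class TestCase_1_2(GeneralTestCase):
    param1, param2 = 1, 2

class TestCase_3_4(GeneralTestCase):
    param1, param2 = 3, 4

if __name__ == '__main__':
    unittest.main()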
It can be done with pytest. Just write a file test_me.py with the following content:
import pytest

@pytest.mark.parametrize('name, left, right', [['foo', 'a', 'a'],
                                               ['bar', 'a', 'b'],
                                               ['baz', 'b', 'b']])
def test_me(name, left, right):
    assert left == right, name
Then run your tests with the command py.test --tb=short test_me.py. The output will look like this:
=========================== test session starts ============================
platform darwin -- Python 2.7.6 -- py-1.4.23 -- pytest-2.6.1
collected 3 items
test_me.py .F.
================================= FAILURES =================================
_____________________________ test_me[bar-a-b] _____________________________
test_me.py:8: in test_me
assert left == right, name
E AssertionError: bar
==================== 1 failed, 2 passed in 0.01 seconds ====================
It is simple! pytest also has many more features, like fixtures, marks, plain assert introspection, etc.
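For example, a small sketch of a fixture (the name sample_pairs and its data are made up for illustration):

import pytest

@pytest.fixture
def sample_pairs():
    # A fixture is requested by naming it as a test argument;
    # pytest calls it and injects the return value.
    return [('a', 'a'), ('b', 'b')]

def test_pairs_match(sample_pairs):
    for left, right in sample_pairs:
        assert left == right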
Use the ddt library. It adds simple decorators for the test methods:
import unittest
from ddt import ddt, data
from mycode import larger_than_two

@ddt
class FooTestCase(unittest.TestCase):

    @data(3, 4, 12, 23)
    def test_larger_than_two(self, value):
        self.assertTrue(larger_than_two(value))

    @data(1, -3, 2, 0)
    def test_not_larger_than_two(self, value):
        self.assertFalse(larger_than_two(value))
This library can be installed with pip. It doesn't require nose, and it works excellently with the standard library unittest module.
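For the example above to run, mycode.larger_than_two just has to exist; it is not part of ddt. A trivial stand-in could be:

# mycode.py -- placeholder module assumed by the ddt example above
def larger_than_two(value):
    return value > 2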
You would benefit from trying the TestScenarios library.
testscenarios provides clean dependency injection for Python unittest-style tests. This can be used for interface testing (testing many implementations via a single test suite) or for classic dependency injection (providing tests with dependencies externally to the test code itself, allowing easy testing in different situations).
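Applied to the question's data, a sketch using testscenarios could look like this (scenarios is a list of (name, attributes) pairs; each scenario's attributes are set on the test instance before the test runs):

import unittest

import testscenarios

class TestSequence(testscenarios.TestWithScenarios):
    scenarios = [
        ('foo', {'a': 'a', 'b': 'a'}),
        ('bar', {'a': 'a', 'b': 'b'}),
        ('lee', {'a': 'b', 'b': 'b'}),
    ]

    def test_equal(self):
        self.assertEqual(self.a, self.b)

if __name__ == '__main__':
    unittest.main()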
You can use the nose-ittr plugin (pip install nose-ittr).
It's very easy to integrate with existing tests, and minimal changes (if any) are required. It also supports the nose multiprocessing plugin.
Note that you can also have a customized setup function per test.
@ittr(number=[1, 2, 3, 4])
def test_even(self):
    assert_equal(self.number % 2, 0)
It is also possible to pass nosetests parameters, as with its built-in attrib plugin. This way you can run only a specific test with a specific parameter:
nosetests -a number=2
I use metaclasses and decorators to generate tests. You can check my implementation, python_wrap_cases. This library doesn't require any test framework.
Your example:
import unittest
from python_wrap_cases import wrap_case

@wrap_case
class TestSequence(unittest.TestCase):

    @wrap_case("foo", "a", "a")
    @wrap_case("bar", "a", "b")
    @wrap_case("lee", "b", "b")
    def testsample(self, name, a, b):
        print "test", name
        self.assertEqual(a, b)
Console output:
testsample_u'bar'_u'a'_u'b' (tests.example.test_stackoverflow.TestSequence) ... test bar
FAIL
testsample_u'foo'_u'a'_u'a' (tests.example.test_stackoverflow.TestSequence) ... test foo
ok
testsample_u'lee'_u'b'_u'b' (tests.example.test_stackoverflow.TestSequence) ... test lee
ok
You may also use generators. For example, this code generates all possible combinations of tests with the arguments a__list and b__list:
import unittest
from python_wrap_cases import wrap_case

@wrap_case
class TestSequence(unittest.TestCase):

    @wrap_case(a__list=["a", "b"], b__list=["a", "b"])
    def testsample(self, a, b):
        self.assertEqual(a, b)
Console output:
testsample_a(u'a')_b(u'a') (tests.example.test_stackoverflow.TestSequence) ... ok
testsample_a(u'a')_b(u'b') (tests.example.test_stackoverflow.TestSequence) ... FAIL
testsample_a(u'b')_b(u'a') (tests.example.test_stackoverflow.TestSequence) ... FAIL
testsample_a(u'b')_b(u'b') (tests.example.test_stackoverflow.TestSequence) ... ok