I am looking for a way to run all of the assertions in my unit tests in PyTest, even if some of them fail. I know there must be a simple way to do this. I checked the CLI options and looked through this site for similar questions/answers but didn't see anything. Sorry if this has already been answered.
For example, consider the following code snippet, with PyTest code alongside it:
def parrot(i):
    return i

def test_parrot():
    assert parrot(0) == 0
    assert parrot(1) == 1
    assert parrot(2) == 1
    assert parrot(2) == 2
By default, the execution stops at the first failure:
$ python -m pytest fail_me.py
=================== test session starts ===================
platform linux2 -- Python 2.7.10, pytest-2.9.1, py-1.4.31, pluggy-0.3.1
rootdir: /home/npsrt/Documents/repo/codewars, inifile:
collected 1 items

fail_me.py F

=================== FAILURES ===================
___________________ test_parrot ___________________

    def test_parrot():
        assert parrot(0) == 0
        assert parrot(1) == 1
>       assert parrot(2) == 1
E       assert 2 == 1
E        +  where 2 = parrot(2)

fail_me.py:7: AssertionError
=================== 1 failed in 0.05 seconds ===================
What I'd like to do is to have the code continue to execute even after PyTest encounters the first failure.
In frameworks that offer soft assertions, a failed check doesn't throw an exception: execution continues with the next statement after it, and the collected failures are only raised when you call assertAll() as the last statement of the @Test method; the suite then continues with the next @Test as usual.
A plain assert statement, by contrast, aborts the test as soon as it fails.
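The same collect-then-raise idea can be written by hand in plain pytest. Here is a minimal sketch (the soft_assert helper and the errors list are illustrative, not part of pytest or any plugin):

def parrot(i):
    return i

def test_parrot_soft():
    errors = []

    def soft_assert(condition, message):
        # Record the failure instead of raising immediately.
        if not condition:
            errors.append(message)

    soft_assert(parrot(0) == 0, "parrot(0) should be 0")
    soft_assert(parrot(1) == 1, "parrot(1) should be 1")
    soft_assert(parrot(2) == 1, "parrot(2) should be 1")
    soft_assert(parrot(2) == 2, "parrot(2) should be 2")

    # Raise a single failure at the end so every check above gets a chance to run.
    assert not errors, "\n".join(errors)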
As others already mentioned, you'd ideally write multiple tests and only have one assertion in each (that's not a hard limit, but a good guideline).
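Taken literally, that guideline would mean one test function per case. A quick sketch of what that looks like for the example above (the split into four functions and their names are just illustrative):

def parrot(i):
    return i

def test_parrot_0():
    assert parrot(0) == 0

def test_parrot_1():
    assert parrot(1) == 1

def test_parrot_2_expects_1():
    # Fails on its own without stopping the other tests.
    assert parrot(2) == 1

def test_parrot_2_expects_2():
    assert parrot(2) == 2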
The @pytest.mark.parametrize decorator makes this easy:
import pytest

def parrot(i):
    return i

@pytest.mark.parametrize('inp, expected', [(0, 0), (1, 1), (2, 1), (2, 2)])
def test_parrot(inp, expected):
    assert parrot(inp) == expected
When running it with -v:
parrot.py::test_parrot[0-0] PASSED
parrot.py::test_parrot[1-1] PASSED
parrot.py::test_parrot[2-1] FAILED
parrot.py::test_parrot[2-2] PASSED

=================================== FAILURES ===================================
_______________________________ test_parrot[2-1] _______________________________

inp = 2, expected = 1

    @pytest.mark.parametrize('inp, expected', [(0, 0), (1, 1), (2, 1), (2, 2)])
    def test_parrot(inp, expected):
>       assert parrot(inp) == expected
E       assert 2 == 1
E        +  where 2 = parrot(2)

parrot.py:8: AssertionError
====================== 1 failed, 3 passed in 0.01 seconds ======================
Pytest did run all of your tests: you only wrote one test function, but every parametrized case of it ran, and each one is reported separately.
If you want nonfatal assertions, where a test will keep going if an assertion fails (like Google Test's EXPECT macros), try pytest-expect, which provides that functionality. Here's the example their site gives:
def test_func(expect):
    expect('a' == 'b')
    expect(1 != 1)
    a = 1
    b = 2
    expect(a == b, 'a:%s b:%s' % (a,b))
You can see that expectation failures don't stop the test, and all failed expectations get reported:
$ python -m pytest test_expect.py
================ test session starts =================
platform darwin -- Python 2.7.9 -- py-1.4.26 -- pytest-2.7.0
rootdir: /Users/okken/example, inifile:
plugins: expect
collected 1 items

test_expect.py F

====================== FAILURES ======================
_____________________ test_func ______________________
> expect('a' == 'b')
test_expect.py:2
--------
> expect(1 != 1)
test_expect.py:3
--------
> expect(a == b, 'a:%s b:%s' % (a,b))
a:1 b:2
test_expect.py:6
--------
Failed Expectations:3
============== 1 failed in 0.01 seconds ==============
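The plugin is published on PyPI, so (assuming a standard pip-based setup) installing it should be as simple as:

$ pip install pytest-expect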