How to run script as pytest test

Suppose I have a test expressed as a simple script with assert statements (see the background section below for why), e.g.:

from foo import foo

assert foo(3) == 4

How would I include this script in my pytest test suite -- in a nice way?

I have tried two working but less-than-nice approaches:

One approach is to name the script like a test module, but then a single failing assert aborts pytest's collection of the whole suite, because the assert runs at import time (see the sketch below).
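
For illustration, a minimal sketch of what goes wrong (module and function names are the hypothetical ones from above):

# test_notebook_2.py -- named like a test module, so pytest imports it
# during collection; the module-level assert raises at import time and
# pytest reports a collection ERROR for the file instead of one failed test
from foo import foo
assert foo(3) == 4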

My current approach is to import the script from within a test function:

from importlib import import_module
from pathlib import Path

def test_notebooks():
    notebook_folder = Path(__file__).parent / 'notebooks'
    for notebook in notebook_folder.glob('*.py'):
        import_module(f'{notebook_folder.name}.{notebook.stem}')

This works, but the scripts are not reported as individual tests, and failures come with a long and winding traceback:

__________________________________________________ test_notebooks ___________________________________________________

    def test_notebooks():
        notebook_folder = Path(__file__).parent / 'notebooks'
        for notebook in notebook_folder.glob('*.py'):
>           import_module(f'{notebook_folder.name}.{notebook.stem}')

test_notebooks.py:7:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
envs\anaconda\lib\importlib\__init__.py:127: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1006: in _gcd_import
... (9 lines removed)...
<frozen importlib._bootstrap>:219: in _call_with_frames_removed
    ???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

>   assert False
E   AssertionError

notebooks\notebook_2.py:1: AssertionError

Background

The reason I have tests in script files is that they are really Jupyter notebooks, saved as .py files with markup by the excellent jupytext plugin.

These notebooks are converted to html for documentation, can be used interactively for learning the system, and serve as cheap functional tests.
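
For context, a jupytext notebook saved in the percent format is plain Python with comment markup, which is why it can double as a script of asserts. A minimal sketch (contents illustrative, reusing the hypothetical foo from above):

# %% [markdown]
# # Demo notebook
# This cell renders as markdown in Jupyter.

# %%
from foo import foo

# %%
assert foo(3) == 4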

asked Jun 28 '19 by Janus



1 Answer

Executing scripts in a test function

There's nothing wrong with calling the scripts from a test function, so your approach is perfectly fine. However, I would use parametrization instead of running the scripts in a for loop; that way each script is collected, executed and reported as a test of its own. If you don't like the long tracebacks, you can cut them in a custom pytest_exception_interact hookimpl. Example:

# conftest.py

def pytest_exception_interact(node, call, report):
    excinfo = call.excinfo
    # only shorten tracebacks of the parametrized script tests
    if 'script' in node.funcargs:
        # drop all traceback entries above the failing script file
        excinfo.traceback = excinfo.traceback.cut(path=node.funcargs['script'])
    report.longrepr = node.repr_failure(excinfo)

Parametrized tests:

# test_spam.py

import pathlib
import runpy
import pytest

# every script in the 'scripts' directory next to this test module
scripts = pathlib.Path(__file__, '..', 'scripts').resolve().glob('*.py')


@pytest.mark.parametrize('script', scripts)
def test_script_execution(script):
    runpy.run_path(script)

Test execution yields the following (for the demo, I created simple scripts each containing a single line like assert False or 1 / 0):

$ pytest -v
======================================= test session starts ========================================
platform linux -- Python 3.6.8, pytest-4.6.3, py-1.8.0, pluggy-0.12.0 -- /home/hoefling/projects/.venvs/stackoverflow/bin/python3.6
cachedir: .pytest_cache
rootdir: /home/hoefling/projects/private/stackoverflow/so-56807698
plugins: mock-1.10.4, cov-2.7.1, forked-1.0.2, xdist-1.28.0, django-3.4.8
collected 3 items                                                                                  

test_spam.py::test_script_execution[script0] PASSED
test_spam.py::test_script_execution[script1] FAILED
test_spam.py::test_script_execution[script2] FAILED

============================================= FAILURES =============================================
_________________________________ test_script_execution[script1] _________________________________

>   assert False
E   AssertionError

scripts/script_3.py:1: AssertionError
_________________________________ test_script_execution[script2] _________________________________

>   1 / 0
E   ZeroDivisionError: division by zero

scripts/script_2.py:1: ZeroDivisionError
================================ 2 failed, 1 passed in 0.07 seconds ================================

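Adapted to the notebooks layout from the question, the parametrization might look like this. This is a sketch; the ids argument is optional and just gives the tests readable names:

# test_notebooks.py -- a sketch adapting the above to the question's layout

import pathlib
import runpy

import pytest

notebooks = sorted(pathlib.Path(__file__, '..', 'notebooks').resolve().glob('*.py'))


@pytest.mark.parametrize('notebook', notebooks, ids=lambda path: path.stem)
def test_notebook(notebook):
    # run each jupytext notebook as a plain script; a failing assert fails the test
    runpy.run_path(str(notebook))
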
Custom test protocol

If you don't like the above solution, another thing I can think of is to implement your own test collection & execution protocol. Example:

# conftest.py

import pathlib
import runpy
import pytest


def pytest_collect_file(parent, path):
    # collect every .py file that lives in a directory named 'scripts'
    p = pathlib.Path(str(path))
    if p.suffix == '.py' and p.parent.name == 'scripts':
        return Script(path, parent)


class Script(pytest.File):
    def collect(self):
        # wrap each collected file in a single test item
        yield ScriptItem(self.name, self)


class ScriptItem(pytest.Item):
    def runtest(self):
        # execute the script; any exception raised inside it fails the test
        runpy.run_path(str(self.fspath))

    def repr_failure(self, excinfo):
        # cut the traceback down to the frames inside the failing script
        excinfo.traceback = excinfo.traceback.cut(path=self.fspath)
        return super().repr_failure(excinfo)

This will collect every .py file in the scripts directory, wrap each script in a test case, and invoke runpy on test execution. The execution log will look pretty much the same, just with the tests named differently.
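
Note that the snippet above uses the pytest 4.x API this answer was written against. On pytest 7 and newer, the collection hook receives a pathlib.Path and nodes must be created via from_parent; a sketch of the same collector under those assumptions:

# conftest.py -- the same idea on pytest >= 7 (hook and node APIs changed)

import runpy

import pytest


def pytest_collect_file(file_path, parent):
    # file_path is a pathlib.Path on pytest >= 7
    if file_path.suffix == '.py' and file_path.parent.name == 'scripts':
        return Script.from_parent(parent, path=file_path)


class Script(pytest.File):
    def collect(self):
        # one test item per script file
        yield ScriptItem.from_parent(self, name=self.name)


class ScriptItem(pytest.Item):
    def runtest(self):
        runpy.run_path(str(self.path))

    def repr_failure(self, excinfo):
        # cut the traceback down to the frames inside the failing script
        excinfo.traceback = excinfo.traceback.cut(path=self.path)
        return super().repr_failure(excinfo)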

answered Oct 13 '22 by hoefling