write pytest test function return value to file with pytest.hookimpl

I am looking for a way to access the return value of a test function in order to include that value in a test report file (similar to http://doc.pytest.org/en/latest/example/simple.html#post-process-test-reports-failures).

Code example that I would like to use:

# modified example code from http://doc.pytest.org/en/latest/example/simple.html#post-process-test-reports-failures

import pytest
import os.path

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    outcome = yield
    rep = outcome.get_result()

    if rep.when == "call" and rep.passed:
        mode = "a" if os.path.exists("return_values") else "w"
        with open("return_values.txt", mode) as f:
            # THE FOLLOWING LINE IS THE ONE I CANNOT FIGURE OUT
            # HOW DO I ACCESS THE TEST FUNCTION RETURN VALUE?
            return_value = item.return_value
            f.write(rep.nodeid + ' returned ' + str(return_value) + "\n")

I expect the return value to be written to the file "return_values.txt". Instead, I get an AttributeError.
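
As far as I can tell, no such attribute exists out of the box, because pytest discards the return value of a test function. Presumably something would first have to stash the value on the item, for example a hookwrapper around pytest_pyfunc_call in conftest.py that wraps the test function. A minimal sketch of what I have in mind (the attribute name return_value is only illustrative, not a pytest API):

# conftest.py -- minimal sketch; "return_value" is an illustrative attribute name
import functools

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_pyfunc_call(pyfuncitem):
    original = pyfuncitem.obj

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        result = original(*args, **kwargs)
        # stash the return value on the item so later hooks can read it
        pyfuncitem.return_value = result
        return result

    # the default pytest_pyfunc_call implementation will now call the wrapper
    pyfuncitem.obj = wrapper
    yield

The code before the yield runs before the default pytest_pyfunc_call implementation, so the replacement function is in place by the time the test is actually called.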

Background (in case you can recommend a totally different approach):

I have a Python library for data analysis on a given problem. I have a standard set of test data on which I routinely run my analysis to produce various "benchmark" metrics for the quality of the analysis algorithms. For example, one such metric is the trace of a normalized confusion matrix produced by the analysis code (which I would like to be as close to 1 as possible). Another metric is the CPU time needed to produce an analysis result.

I am looking for a clean way to include these benchmark results in a CI framework (currently Jenkins), so that it becomes easy to see whether a commit improves or degrades the analysis performance. Since I am already running pytest in the CI sequence, and since I would like to use various pytest features for my benchmarks (fixtures, marks, skipping, cleanup), I thought about simply adding a post-processing hook in pytest (see http://doc.pytest.org/en/latest/example/simple.html#post-process-test-reports-failures) that collects test function run times and return values and reports them (or only those marked as benchmarks) to a file, which my CI framework then collects and archives as a test artifact.

I am open to other ways to solve this problem, but my Google searches suggest that pytest is the framework that comes closest to already providing what I need.

asked Jul 26 '19 by Leonhard Neuhaus

1 Answer

I had the same problem; here is a different solution I came up with.

Use the record_property fixture in the test:

def test_mytest(record_property):
    record_property("key", 42)

Then, in conftest.py, use the pytest_runtest_teardown hook:

# conftest.py
def pytest_runtest_teardown(item, nextitem):
    # item.user_properties holds the (key, value) pairs recorded via record_property
    results = dict(item.user_properties)
    if not results:
        return
    with open(f'{item.name}_return_values.txt', 'a') as f:
        for key, value in results.items():
            f.write(f'{key} = {value}\n')

The content of test_mytest_return_values.txt then looks like this:

key = 42

Two important notes:

  1. This code is executed even if the test failed; within this hook alone I could not find a way to get the outcome of the test (one possible workaround, based on the pytest docs, is sketched after this list).
  2. This can be combined with heofling's answer by using results = dict(item.user_properties) to obtain the keys and values added in the test, instead of adding a dict to config and then accessing it in the test.
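
Regarding note 1, the report-stashing pattern from the pytest documentation (a hookwrapper on pytest_runtest_makereport that stores each phase's report on the item) can make the outcome available in the teardown hook; a sketch combining it with the code above:

# conftest.py -- sketch combining the teardown hook with the report-stashing pattern
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    # store the report of each phase on the item: item.rep_setup, item.rep_call, ...
    setattr(item, "rep_" + rep.when, rep)

def pytest_runtest_teardown(item, nextitem):
    results = dict(item.user_properties)
    if not results:
        return
    # only write properties for tests whose call phase passed
    rep_call = getattr(item, "rep_call", None)
    if rep_call is None or not rep_call.passed:
        return
    with open(f'{item.name}_return_values.txt', 'a') as f:
        for key, value in results.items():
            f.write(f'{key} = {value}\n')

The makereport hook for the call phase runs before pytest_runtest_teardown, so item.rep_call is already set when the teardown hook checks it.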
answered Sep 16 '22 by Itamar Raviv