I am testing a function that, as part of its execution, pickles objects. After the test, I want to delete the pickle files.
If it were the test itself that saved the files, pytest's tmpdir fixture would seem like the solution. However, since the function under test, not the test, creates the files, I'm not sure what the proper way to clean them up is.
In this case, the files are saved in the tests directory that contains the tests being run. The only option I can think of is to delete all *.pkl pickle files from that directory after each test. I am wondering whether I am missing a more elegant solution that pytest may provide.
What is the standard way of cleaning up files that are generated as a side effect of a function under test with pytest?
A function that is executed to mop up side effects after a test has run is called a teardown function. By registering our setup and teardown code in the way pytest expects, pytest ensures that it runs before and after the test function regardless of what happens inside the test.
conftest.py is where you set up shared test configuration and define fixtures that are available to the test functions across a directory. The test_*.py files are where the actual test functions reside.
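A sketch of that layout (the fixture and data are illustrative): pytest discovers conftest.py automatically, and fixtures defined there are available to every test_*.py file in the directory without any import. The fixture body delegates to a plain helper so it can also be exercised outside pytest:

```python
# conftest.py -- pytest discovers this file automatically; fixtures
# defined here are visible to all test_*.py files in the directory.
import pytest

def make_sample_record():
    # Illustrative shared test data.
    return {'id': 1, 'name': 'spam'}

@pytest.fixture
def sample_record():
    return make_sample_record()

# A test_records.py file would then request the fixture by parameter name:
#
#     def test_sample_record(sample_record):
#         assert sample_record['name'] == 'spam'
```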
You can monkeypatch the file-opening functions and check whether a new file is written, collecting new files in a list. Afterwards, go through the list and remove the files. Example:
# spam.py
import pathlib
import numpy

def plain_write():
    with open('spam.1', 'w') as f:
        f.write('eggs')

def pathlib_write():
    with pathlib.Path('spam.2').open('w') as f:
        f.write('eggs')

def pathlib_write_text():
    pathlib.Path('spam.3').write_text('eggs')

def pathlib_write_bytes():
    pathlib.Path('spam.3').write_bytes(b'eggs')

def numpy_save():
    numpy.save('spam.4', numpy.zeros([10, 10]))

def numpy_savetxt():
    numpy.savetxt('spam.5', numpy.zeros([10, 10]))
Depending on what functions you test, monkeypatching builtins.open may not be enough: for example, to clean up files written with pathlib, you need to additionally monkeypatch io.open.
import builtins
import io
import os

import pytest

import spam

def patch_open(open_func, files):
    def open_patched(path, mode='r', buffering=-1, encoding=None,
                     errors=None, newline=None, closefd=True,
                     opener=None):
        if 'w' in mode and not os.path.isfile(path):
            files.append(path)
        return open_func(path, mode=mode, buffering=buffering,
                         encoding=encoding, errors=errors,
                         newline=newline, closefd=closefd,
                         opener=opener)
    return open_patched

@pytest.fixture(autouse=True)
def cleanup_files(monkeypatch):
    files = []
    monkeypatch.setattr(builtins, 'open', patch_open(builtins.open, files))
    monkeypatch.setattr(io, 'open', patch_open(io.open, files))
    yield
    for file in files:
        os.remove(file)
def test_plain_write():
    assert spam.plain_write() is None

def test_pathlib_write():
    assert spam.pathlib_write() is None

def test_pathlib_write_text():
    assert spam.pathlib_write_text() is None

def test_pathlib_write_bytes():
    assert spam.pathlib_write_bytes() is None

def test_numpy_save():
    assert spam.numpy_save() is None

def test_numpy_savetxt():
    assert spam.numpy_savetxt() is None