Is there a good way to test partial write failures to files?

I'm particularly interested in simulating a full disk.

I have some code which modifies a file. For some failures there's nothing the code can do, e.g. if the disk is unplugged while writing. But for other, predictable failures, such as a full disk, my code should (and can) catch the exception and undo all changes since the most recent modification began.
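A minimal sketch of that catch-and-undo pattern, assuming the modifications can be applied through a callback; modify_file and apply_changes are hypothetical names for illustration, not from the question:

import os
import shutil

def modify_file(path, apply_changes):
    backup = path + '.bak'      # hypothetical snapshot location
    shutil.copy2(path, backup)  # snapshot the file before touching it
    try:
        with open(path, 'r+') as f:
            apply_changes(f)          # caller-supplied writes
            f.flush()
            os.fsync(f.fileno())      # push data to disk so ENOSPC surfaces here
    except OSError:
        os.replace(backup, path)      # undo: restore the snapshot atomically
        raise
    else:
        os.remove(backup)             # success: discard the snapshot

Snapshotting to a backup file is only one possible undo strategy; writing to a temporary file and os.replace-ing it over the original on success is another common one.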

I think my code does this well, but I am struggling to find a way to exhaustively unit test it. It's difficult to write a unit test that limits a real file system [1]. I don't see any way to cap the size of an io.BytesIO, and I'm not aware of any mock packages for this.

Are there any standard tools/techniques for this before I write my own?


[1] Limiting a real file system is hard for a few reasons. The biggest difficulty is that file systems are usually limited in blocks of a few KiB, not in bytes. That makes it hard to cover every unhappy path: a good test would be repeated with limits of different sizes, ensuring that each individual file.write(...) call fails in some test, but achieving that with a block size of, say, 4 KiB is difficult.

asked Aug 25 '20 by Philip Couling


1 Answer

Disclaimer: I'm a contributor to pyfakefs.

This may be overkill for you, but you could simulate the whole file system using pyfakefs. This allows you to set the file system size beforehand. Here is a trivial example using pytest:

import os
import pytest

def test_disk_full(fs):  # fs is the pyfakefs file system fixture
    fs.set_disk_usage(100)  # limit the fake file system to 100 bytes
    os.makedirs('/foo')
    with open('/foo/bar.txt', 'w') as f:
        with pytest.raises(OSError):
            f.write('a' * 200)  # more data than the 100-byte disk can hold
            f.flush()           # flush so the failure surfaces inside the assertion
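As the question's footnote points out, a thorough test would repeat this at several limits so that every individual write can be made to fail. Since set_disk_usage is byte-granular rather than block-granular, that is straightforward to parametrize; the specific limits below are illustrative, not from the answer:

import os
import pytest

@pytest.mark.parametrize('limit', [10, 50, 100, 150])
def test_disk_full_at_various_limits(fs, limit):
    fs.set_disk_usage(limit)  # byte-granular fake disk size
    os.makedirs('/foo')
    with open('/foo/bar.txt', 'w') as f:
        with pytest.raises(OSError):
            f.write('a' * 200)  # always larger than any of the limits above
            f.flush()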
answered Nov 18 '22 by MrBean Bremen