This question is about how far it makes sense to take unit testing.
I’ve been writing a fairly typical program that updates a database with information arriving in XML messages, and I thought about which unit tests it needed. The program inserts or updates records according to complicated rules, which gives rise to many different cases. At first, I decided to test the following conditions for each case:
The third type of test seemed to make real sense to me. But I soon found it is not so easy to implement, because you essentially need to snapshot the database and then compare it with the modified one. I quickly grew annoyed at having to write such tests for every case of database modification, while they added little value or information in terms of specifying and designing the production code.
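To make this concrete, here is a rough sketch of what such a snapshot-and-compare test looks like, assuming a Python test against an in-memory SQLite database. `apply_message` stands in for the real update code, and the table layout is invented purely for illustration:

```python
import sqlite3

def apply_message(conn, customer_id, new_status):
    """Stand-in for the real update logic driven by an incoming XML message."""
    conn.execute("UPDATE customers SET status = ? WHERE id = ?", (new_status, customer_id))

def snapshot(conn, table):
    """Return every row keyed by primary key so tables can be compared later."""
    return {row[0]: row for row in conn.execute(f"SELECT * FROM {table}")}

def test_only_the_targeted_record_is_modified():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, status TEXT)")
    conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                     [(1, "Alice", "new"), (2, "Bob", "new")])

    before = snapshot(conn, "customers")
    apply_message(conn, 1, "active")   # exercise the code under test
    after = snapshot(conn, "customers")

    # The targeted record changed as specified ...
    assert after[1] == (1, "Alice", "active")
    # ... and every other record is identical to the pre-update snapshot.
    assert all(after[k] == v for k, v in before.items() if k != 1)
```

Writing one of these per modification case is exactly the chore I am talking about.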
Then I thought: maybe I’m testing too much? And if not, then if I test that the program does NOT modify irrelevant records, why don’t I also test that it:
I got completely confused about where to draw the line. Where would you draw it?
UPDATE
I found many useful hints in the answers and marked one as the solution because it had the most useful ideas for me, but it is still not clear to me how to properly test database updates. Does it make sense to test that the program doesn't change too much? And if so, how thoroughly?
You draw the line at the point where tests stop being useful, where they no longer tell you anything about your code.
Is it useful to know that your software doesn't send emails to Santa? No. Then don't test for that.
Knowing that the data access layer is doing the right thing is useful: that the right updates are happening.
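As an illustration of that line, a focused test on a hypothetical data access method might look like the sketch below; `CustomerDao` and its schema are invented for the example. It asserts only the useful, positive fact that the intended update happened, instead of trying to prove every non-behavior:

```python
import sqlite3

class CustomerDao:
    """Hypothetical data access layer wrapping the update statements."""
    def __init__(self, conn):
        self.conn = conn

    def set_status(self, customer_id, status):
        self.conn.execute("UPDATE customers SET status = ? WHERE id = ?",
                          (status, customer_id))

def test_set_status_writes_the_expected_value():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, status TEXT)")
    conn.execute("INSERT INTO customers VALUES (1, 'new')")

    CustomerDao(conn).set_status(1, "active")

    # The right update happened; that is the fact worth knowing.
    assert conn.execute("SELECT status FROM customers WHERE id = 1").fetchone() == ("active",)
```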