We know that code coverage is a poor metric for gauging the quality of test code. We also know that testing the language or framework is a waste of time.
So what metrics can we use to identify quality tests? Are there any best practices or rules of thumb you've learned that help you identify and write higher-quality tests?
There are four main stages of testing that need to be completed before a program can be cleared for use: unit testing, integration testing, system testing, and acceptance testing.
A typical unit test contains three phases: first, it initializes the small piece of the application it wants to test (also known as the system under test, or SUT); then it applies some stimulus to the system under test (usually by calling a method on it); and finally, it observes the resulting behavior.
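As a minimal sketch of those three phases in Python, assuming a made-up `ShoppingCart` class as the system under test (it's not from any particular codebase):

```python
import unittest


class ShoppingCart:
    """Hypothetical system under test, used only for illustration."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


class ShoppingCartTest(unittest.TestCase):
    def test_total_sums_item_prices(self):
        # Phase 1: initialize the system under test (SUT).
        cart = ShoppingCart()

        # Phase 2: apply a stimulus by calling methods on the SUT.
        cart.add("book", 12.50)
        cart.add("pen", 2.00)

        # Phase 3: observe the resulting behavior.
        self.assertEqual(cart.total(), 14.50)


if __name__ == "__main__":
    unittest.main()
```

The three comments mark what is often called the arrange, act, and assert structure that most xUnit-style frameworks encourage.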
Make sure it's easy and quick to write tests. Then write lots of them.
I've found that it's very hard to predict in advance which tests will be the ones that end up failing, either now or a long way down the line. I tend to take a scatter-gun approach, trying to hit corner cases when I can think of them.
Also, don't be afraid of writing bigger tests that exercise a bunch of things together. If such a test fails it might take longer to figure out what went wrong, but often problems only arise once you start gluing things together.
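As a rough sketch of what such a "glued together" test might look like, with a hypothetical tokenizer and evaluator standing in for two components of your own:

```python
import unittest


def tokenize(expression):
    """Split a simple arithmetic expression into tokens (illustrative only)."""
    return expression.replace("+", " + ").split()


def evaluate(tokens):
    """Sum the integer tokens, ignoring '+' separators (illustrative only)."""
    return sum(int(tok) for tok in tokens if tok != "+")


class GluedTogetherTest(unittest.TestCase):
    def test_tokenizer_and_evaluator_work_together(self):
        # Exercises both components in one pass; a failure here hints that
        # the two pieces disagree about the token format they exchange.
        self.assertEqual(evaluate(tokenize("1+2+3")), 6)


if __name__ == "__main__":
    unittest.main()
```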
Write tests that verify the base functionality and the individual use cases of the software's intent. Then write tests that check edge cases and verify expected exceptions (a sketch of this follows below).
In other words, write good unit tests from a customer's perspective, and forget about metrics for test code. No metric will tell you whether your test code is good; only functioning software tells you that.
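For example, "base functionality, edge cases, and expected exceptions" might look like the following; `divide` is a hypothetical function standing in for your own code:

```python
import unittest


def divide(numerator, denominator):
    """Hypothetical function under test."""
    if denominator == 0:
        raise ValueError("denominator must be non-zero")
    return numerator / denominator


class DivideTest(unittest.TestCase):
    def test_base_functionality(self):
        # Verifies the intended, everyday use of the function.
        self.assertEqual(divide(10, 2), 5)

    def test_edge_case_negative_values(self):
        # Edge case: signs should be handled correctly.
        self.assertEqual(divide(-9, 3), -3)

    def test_expected_exception_on_zero_denominator(self):
        # Verifies that the documented failure mode actually occurs.
        with self.assertRaises(ValueError):
            divide(1, 0)


if __name__ == "__main__":
    unittest.main()
```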
I have found use cases very useful for getting good test coverage. If you have your functionality expressed as use cases, they can easily be converted into test scenarios covering positive, negative, and exception paths. A use case also states the prerequisites and any data preparation, which proves very handy when writing test cases.
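As a sketch, a hypothetical "withdraw cash" use case could translate into positive, negative, and exception scenarios like this (the `Account` class is invented purely for illustration):

```python
import unittest


class Account:
    """Hypothetical SUT for a 'withdraw cash' use case."""

    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        if amount > self.balance:
            return False  # insufficient funds
        self.balance -= amount
        return True


class WithdrawUseCaseTest(unittest.TestCase):
    def setUp(self):
        # Prerequisite / data prep stated by the use case: an account with funds.
        self.account = Account(balance=100)

    def test_positive_scenario(self):
        self.assertTrue(self.account.withdraw(40))
        self.assertEqual(self.account.balance, 60)

    def test_negative_scenario_insufficient_funds(self):
        self.assertFalse(self.account.withdraw(500))
        self.assertEqual(self.account.balance, 100)

    def test_exception_scenario_invalid_amount(self):
        with self.assertRaises(ValueError):
            self.account.withdraw(-5)


if __name__ == "__main__":
    unittest.main()
```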
My rules of thumb:
I'd disagree that code coverage isn't a useful metric. If you don't have 100% code coverage, that at least indicates areas that need more tests.
In general, though, once you have adequate statement coverage, the next logical step is to write tests that either directly verify the requirements the code was written to meet, or that are intended to stress the edge cases. Neither of these will fall naturally out of anything you can easily measure directly.
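A small, hypothetical example of why coverage alone isn't enough: the first test below executes every statement of `is_adult`, yet only the second, requirement-driven boundary test would expose the off-by-one bug (it fails against this buggy implementation):

```python
import unittest


def is_adult(age):
    """Hypothetical function: the requirement says 18 and over counts as adult."""
    return age > 18  # off-by-one bug: 18 itself is wrongly rejected


class CoverageOnlyTest(unittest.TestCase):
    def test_typical_adult(self):
        # This single test already yields 100% statement coverage...
        self.assertTrue(is_adult(30))


class RequirementDrivenTest(unittest.TestCase):
    def test_boundary_required_by_the_spec(self):
        # ...but only a test written against the requirement exposes the bug.
        self.assertTrue(is_adult(18))


if __name__ == "__main__":
    unittest.main()
```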
There are two good ways to verify test quality.
The first is code review, with which it is possible to check the important steps defined by @Patrick Cuff in his answer: https://stackoverflow.com/a/197332/516167
Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills.
The second is cheaper: an automated job that measures test quality through mutation testing.
Mutation testing (or Mutation analysis or Program mutation) is used to design new software tests and evaluate the quality of existing software tests.
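Real tools (for example PIT for Java or mutmut for Python) generate and run mutants automatically, but the idea can be sketched by hand; the functions below are purely illustrative:

```python
import unittest


def max_of(a, b):
    """Original implementation."""
    return a if a >= b else b


def max_of_mutant(a, b):
    """A typical mutant: the comparison operator has been flipped."""
    return a if a <= b else b


class WeakTest(unittest.TestCase):
    def test_equal_inputs(self):
        # Would pass against both the original and the mutant, so it does
        # not "kill" the mutant: a sign the test adds little value.
        self.assertEqual(max_of(3, 3), 3)


class StrongTest(unittest.TestCase):
    def test_distinct_inputs(self):
        # Would fail if run against the mutant, so it kills it.
        self.assertEqual(max_of(5, 2), 5)


if __name__ == "__main__":
    unittest.main()
```

A test that still passes when the mutant is substituted for the original hasn't really pinned down the behavior; the proportion of mutants your suite kills is the quality measure.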