 

Writing Quality Tests

Tags:

testing

We know that code coverage is a poor metric to use when gauging the quality of test code. We also know that testing the language/framework is a waste of time.

On the other hand, what metrics can we use to identify quality tests? Are there any best practices or rules of thumb that you've learned that help you identify and write higher-quality tests?

asked Oct 12 '08 by Owen


People also ask

What are the four types of system tests?

There are four main stages of testing that need to be completed before a program can be cleared for use: unit testing, integration testing, system testing, and acceptance testing.

How do you start writing unit tests?

A typical unit test contains three phases: first, it initializes a small piece of the application it wants to test (also known as the system under test, or SUT); then it applies some stimulus to the system under test (usually by calling a method on it); and finally, it observes the resulting behavior.
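
These three phases are often labeled arrange, act, and assert. A minimal JUnit sketch of that structure; the stack being tested is just an illustrative stand-in, not something from the question:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayDeque;
import java.util.Deque;

import org.junit.jupiter.api.Test;

class StackTest {

    @Test
    void pushThenPopReturnsLastPushedElement() {
        // Arrange: initialize the system under test (SUT)
        Deque<String> stack = new ArrayDeque<>();

        // Act: apply a stimulus by calling methods on the SUT
        stack.push("first");
        stack.push("second");

        // Assert: observe the resulting behavior
        assertEquals("second", stack.pop());
    }
}
```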


7 Answers

  1. Make sure your tests are independent of each other. A test shouldn't depend on the execution or results of some other test.
  2. Make sure each test has clearly defined entry criteria, test steps and exit criteria.
  3. Set up a Requirements Verification Traceability Matrix (RVTM). Each test should verify one or more requirement. Also, each requirement should be verified by at least one test.
  4. Make sure your tests are identifiable. Establish a simple naming or labeling convention and stick to it. Reference the test identifier when logging defects.
  5. Treat your tests like you treat your code. Have a testware development process that mirrors your software development process. Tests should have peer reviews, be under version control, have change control procedures, etc.
  6. Categorize and organize your tests. Make it easy to find and run a test, or suite of tests, as needed.
  7. Make your tests as succinct as possible. This makes them easier to run, and automate. It's better to run lots of little tests than one large test.
  8. When a test fails, make it easy to see why it failed (see the sketch after this list).
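
As a rough illustration of points 1, 4, and 8, here is a hedged JUnit sketch; the `PriceCalculator` class and its behavior are invented for the example:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical system under test, invented for this sketch.
class PriceCalculator {
    int totalInCents(int unitPriceInCents, int quantity) {
        return unitPriceInCents * quantity;
    }
}

class PriceCalculatorTest {

    // Each test builds its own calculator, so no test depends on another (point 1).
    @Test
    void totalInCents_multipliesUnitPriceByQuantity() { // descriptive, identifiable name (point 4)
        PriceCalculator calculator = new PriceCalculator();

        int total = calculator.totalInCents(250, 3);

        // The assertion message makes the failure self-explanatory (point 8).
        assertEquals(750, total, "3 items at 250 cents each should total 750 cents");
    }

    @Test
    void totalInCents_isZeroForZeroQuantity() {
        assertEquals(0, new PriceCalculator().totalInCents(250, 0),
                "zero quantity should always produce a zero total");
    }
}
```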
answered by Patrick Cuff


Make sure it's easy and quick to write tests. Then write lots of them.

I've found that it's very hard to predict in advance which tests will end up failing, either now or a long way down the line. I tend to take a scatter-gun approach, trying to hit corner cases when I can think of them.

Also, don't be afraid of writing bigger tests that exercise a bunch of things together. Of course, if such a test fails it might take longer to figure out what went wrong, but problems often only arise once you start gluing things together.
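
A hedged sketch of what such a broader test might look like; the `OrderRepository`, `InMemoryOrderRepository`, and `OrderService` types are invented for illustration:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;

// Hypothetical components, invented for this sketch.
interface OrderRepository {
    void save(String order);
    List<String> findAll();
}

class InMemoryOrderRepository implements OrderRepository {
    private final List<String> orders = new ArrayList<>();
    public void save(String order) { orders.add(order); }
    public List<String> findAll() { return new ArrayList<>(orders); }
}

class OrderService {
    private final OrderRepository repository;
    OrderService(OrderRepository repository) { this.repository = repository; }
    void placeOrder(String order) { repository.save(order); }
    int orderCount() { return repository.findAll().size(); }
}

class OrderFlowTest {

    // Glues the service and the repository together rather than testing each in isolation.
    @Test
    void placedOrdersAreVisibleThroughTheService() {
        OrderService service = new OrderService(new InMemoryOrderRepository());

        service.placeOrder("book");
        service.placeOrder("pen");

        assertEquals(2, service.orderCount());
    }
}
```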

answered by Chris Jefferson


Write tests that verify the base functionality and the individual use cases of the software's intent. Then write tests to check edge cases and verify expected exceptions.

In other words, write good unit tests from a customer perspective, and forget about metrics for test code. No metric will tell you whether your test code is good; only functioning software tells you when your test code is good.
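
For example, an edge-case test and an expected-exception test in JUnit might look like the sketch below; the `divide` helper is a made-up illustration:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class DivisionTest {

    // Hypothetical method under test, invented for this sketch.
    static int divide(int dividend, int divisor) {
        if (divisor == 0) {
            throw new IllegalArgumentException("divisor must not be zero");
        }
        return dividend / divisor;
    }

    @Test
    void dividesEvenly() {
        assertEquals(5, divide(10, 2));
    }

    @Test
    void edgeCase_negativeDividendRoundsTowardZero() {
        assertEquals(-3, divide(-7, 2));
    }

    @Test
    void expectedException_zeroDivisorIsRejected() {
        assertThrows(IllegalArgumentException.class, () -> divide(10, 0));
    }
}
```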

answered by Steven A. Lowe


I think use cases prove very useful for getting the best test coverage. If your functionality is written in terms of use cases, it can easily be converted into different test scenarios covering positive, negative, and exception paths. A use case also states the prerequisites and any data preparation needed, which proves very handy while writing test cases.
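
One hedged way to turn such scenarios into a table-driven JUnit test (assuming the junit-jupiter-params module is available); the `withdrawalAllowed` rule and the "withdraw cash" use case are assumptions invented for the sketch:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class WithdrawalScenarioTest {

    // Hypothetical rule from an imagined "withdraw cash" use case:
    // the withdrawal succeeds only when the balance covers the amount.
    static boolean withdrawalAllowed(int balance, int amount) {
        return amount > 0 && amount <= balance;
    }

    // Each row is one scenario derived from the use case:
    // a positive case, a boundary case, and two negative cases.
    @ParameterizedTest
    @CsvSource({
            "100, 40,  true",   // positive: enough balance
            "100, 100, true",   // boundary: exactly the balance
            "100, 150, false",  // negative: insufficient balance
            "100, 0,   false"   // negative: zero amount
    })
    void withdrawalScenarios(int balance, int amount, boolean expected) {
        assertEquals(expected, withdrawalAllowed(balance, amount));
    }
}
```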

answered by Chanakya


My rules of thumb:

  1. Cover even the simple test cases in your test plan (don't risk leaving the most-used functionality untested)
  2. Trace the corresponding requirement near each test case
  3. As Joel says, have a separate team that does testing
answered by friol


I'd disagree that code coverage isn't a useful metric. If you don't have 100% code coverage, that at least indicates areas that need more tests.

In general, though, once you get adequate statement coverage, the next logical step is to write tests that either directly verify the requirements the code was written to meet, or that are intended to stress the edge cases. Neither of these will fall naturally out of anything you can easily measure directly.
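
To illustrate why coverage alone isn't enough, here is a hedged sketch: both tests below execute the same (made-up) `applyDiscount` method and so count toward statement coverage, but only the second verifies a requirement.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class DiscountTest {

    // Hypothetical method under test, invented for this sketch.
    static int applyDiscount(int priceInCents, int percent) {
        return priceInCents - (priceInCents * percent / 100);
    }

    // Executes the code (and so counts as "covered") but barely verifies anything.
    @Test
    void coverageOnly() {
        int discounted = applyDiscount(1000, 10);
        assertTrue(discounted >= 0);
    }

    // Verifies the actual requirement: a 10% discount on 1000 cents is 900 cents.
    @Test
    void tenPercentDiscountReducesPriceByTenPercent() {
        assertEquals(900, applyDiscount(1000, 10));
    }
}
```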

answered by Mark Bessey


There are two good ways to verify test quality:

1. Code review

With code review it is possible to verify the important points defined by @Patrick Cuff in his answer: https://stackoverflow.com/a/197332/516167

Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills.

2. Mutation testing

The second is cheaper; it is an automated job that measures test quality.

Mutation testing (or Mutation analysis or Program mutation) is used to design new software tests and evaluate the quality of existing software tests.
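
As a rough illustration of the idea (not any particular tool's output): a mutation tool takes a method, introduces a small change (a "mutant"), and checks whether the test suite notices. The method and mutant below are invented for the sketch:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class AdultCheckTest {

    // Hypothetical method under test.
    static boolean isAdult(int age) {
        return age >= 18;
    }

    // A mutation tool might silently change ">=" to ">" and re-run the tests:
    // static boolean isAdult(int age) { return age > 18; }

    // This boundary test "kills" that mutant: it passes on the original code
    // but fails on the mutated version, which is evidence the test actually
    // verifies the boundary rather than merely executing the line.
    @Test
    void exactlyEighteenCountsAsAdult() {
        assertTrue(isAdult(18));
    }

    @Test
    void seventeenIsNotAdult() {
        assertFalse(isAdult(17));
    }
}
```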

Related questions

  • How to ensure quality of junit tests?
answered by MariuszS