
How do you handle unit/regression tests which are expected to fail during development?

During software development, there may be known bugs in the codebase. If the regression/unit tests have been written well, these bugs will cause them to fail.

There is constant debate in our teams about how failing tests should be managed:

  1. Comment out failing test cases with a REVISIT or TODO comment.

    • Advantage: A new test failure always indicates a newly introduced defect, not one we are already aware of.
    • Disadvantage: We may forget to REVISIT the commented-out test case, so the defect could slip through the cracks.
  2. Leave the test cases failing.

    • Advantage: We will not forget to fix the defects, as the test failures will constantly remind us that a defect is present.
    • Disadvantage: Difficult to detect when a new defect is introduced, due to failure noise.

I'd like to explore the best practices in this regard. Personally, I think a tri-state solution is best for determining whether a script is passing. For example, when you run a script, you could see the following:

  • Percentage passed: 75%
  • Percentage failed (expected): 20%
  • Percentage failed (unexpected): 5%

You would basically mark any test cases which you expect to fail (due to some known defect) with some metadata. This ensures you still see the failure result at the end of the test run, but immediately know if there is a new failure which you weren't expecting. This appears to combine the best parts of the two proposals above.
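As a concrete illustration, Python's unittest module supports exactly this kind of metadata via its expectedFailure decorator. Here is a minimal sketch (the test class and scenario are made up for the example):

import unittest

class WidgetTests(unittest.TestCase):
    def test_addition(self):
        # A normally passing test: counts toward "percentage passed".
        self.assertEqual(1 + 1, 2)

    @unittest.expectedFailure
    def test_known_defect(self):
        # Stand-in for a test exercising a known, tracked defect.
        # A failure here is reported as an "expected failure" rather
        # than a plain failure, so new breakage still stands out.
        # If the defect is ever fixed, the run reports an "unexpected
        # success", reminding you to remove the marker.
        self.assertEqual(1 + 1, 3)

if __name__ == "__main__":
    unittest.main()

pytest offers the same idea through its xfail marker, and in strict mode it will even fail the run when an expected-failure test starts passing.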

Does anyone have any best practices for managing this?

asked Oct 01 '08 by LeopardSkinPillBoxHat

1 Answer

I would leave your test cases in. In my experience, commenting out code with something like

// TODO:  fix test case

is akin to doing:

// HAHA: you'll never revisit me

In all seriousness, as you get closer to shipping, the desire to revisit TODOs in code tends to fade, especially with things like unit tests, because you are concentrating on fixing other parts of the code.

Leave the tests in, perhaps with your "tri-state" solution. However, I would strongly encourage fixing those cases ASAP. My problem with constant reminders is that after people have seen them a few times, they tend to gloss over them and say "oh yeah, we get those errors all the time..."

Case in point -- in some of our code, we have introduced the idea of "skippable asserts": asserts which are there to let you know there is a problem, but which allow our testers to move past them into the rest of the code. We eventually found out that QA had started saying things like "oh yeah, we get that assert all the time and we were told it was skippable", and bugs didn't get reported. A rough sketch of the idea follows.
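To make that concrete, a skippable assert might look something like this (an illustrative sketch, not our actual implementation):

import logging

logging.basicConfig(level=logging.WARNING)

def skippable_assert(condition, message):
    # Report a failed check but let execution continue -- the
    # behaviour described above. That is both the feature and the
    # trap: once the warnings become routine, people stop
    # reporting them.
    if not condition:
        logging.warning("skippable assert failed: %s", message)

# Demonstration: the check fails, a warning is logged, and the
# program keeps running.
total = -1
skippable_assert(total >= 0, "total should never be negative")
print("still running; total =", total)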

I guess what I'm suggesting is that there is another alternative: fix the bugs that your test cases find immediately. There may be practical reasons not to, but getting into that habit now could be more beneficial in the long run.

answered Nov 16 '22 by Mark