
Does YAGNI also apply when writing tests?

When I write code I only write the functions I need as I need them.

Does this approach also apply to writing tests?

Should I write a test in advance for every use-case I can think of just to play it safe or should I only write tests for a use-case as I come upon it?

Asked Jun 03 '09 by Sruly

People also ask

Should you write tests before code?

It often makes sense to write the test first and then write as much code as needed to allow the test to pass. Doing this moves towards a practice known as Test-Driven Development (TDD). Bluefruit uses a lot of TDD because it helps us to build the right product without waste and redundancies.

What is the practice of writing a test before the code is written?

TDD (Test-Driven Development) is a development and testing methodology that helps developers achieve speed, robustness, and quality through its structured workflow. It is a software development approach in which developers write unit tests before the application code.


9 Answers

I think that when you write a method you should test both expected and potential error paths. This doesn't mean that you should expand your design to encompass every potential use -- leave that for when it's needed, but you should make sure that your tests have defined the expected behavior in the face of invalid parameters or other conditions.

YAGNI, as I understand it, means that you shouldn't develop features that are not yet needed. In that sense, you shouldn't write a test that drives you to develop code that's not needed. I suspect, though, that's not what you are asking about.

In this context I'd be more concerned with whether you should write tests that cover unexpected uses -- for example, errors due to passing null or out-of-range parameters -- or tests that repeat existing ones and differ only in the data, not the functionality. In the former case, as I indicated above, I would say yes. Your tests will document the expected behavior of your method in the face of errors. This is important information to people who use your method.

In the latter case, I'm less able to give you a definitive answer. You certainly want your tests to remain DRY -- don't write a test that simply repeats another test even if it has different data. On the other hand, you may not discover potential design issues unless you exercise the edge cases of your data. A simple example is a method that computes the sum of two integers: what happens if you pass it maxint as both parameters? If you only have one test, then you may miss this behavior. Obviously, this is related to the previous point. Only you can be sure whether a test is really needed.
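
For illustration only, here's a minimal sketch of that maxint scenario in Python with pytest. The add_int32 function and its 32-bit limit are invented for the example (plain Python ints don't overflow), but it shows how a second, edge-case test can surface behavior the single happy-path test would miss:

    import pytest

    # Hypothetical unit under test, invented for this example: addition that
    # must stay within a signed 32-bit range, so "maxint" is a meaningful edge.
    INT32_MAX = 2**31 - 1
    INT32_MIN = -(2**31)

    def add_int32(a: int, b: int) -> int:
        result = a + b
        if not INT32_MIN <= result <= INT32_MAX:
            raise OverflowError("result does not fit in a signed 32-bit integer")
        return result

    def test_add_typical_values():
        assert add_int32(2, 3) == 5

    def test_add_maxint_edge_case():
        # The edge case from the answer: both parameters at the maximum.
        with pytest.raises(OverflowError):
            add_int32(INT32_MAX, INT32_MAX)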

Answered Oct 02 '22 by tvanfosson


Write the test as you need it. Tests are code. Writing a bunch of (initially failing) tests up front breaks the red/fix/green cycle of TDD, and makes it harder to identify valid failures vs. unwritten code.
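
As a rough sketch of that cycle in Python (the slugify example is invented here): write one failing test, make it pass with the least code needed, and only then move on to the next test you actually need.

    # Red: a single test for the behaviour needed right now.
    # It fails first, because slugify does not exist yet.
    def test_slugify_replaces_spaces_with_hyphens():
        assert slugify("hello world") == "hello-world"

    # Green: just enough code to make that one test pass.
    def slugify(text: str) -> str:
        return text.strip().lower().replace(" ", "-")

    # Refactor if needed, then repeat for the next behaviour you actually need;
    # a pile of pre-written failing tests would obscure which red is a real bug.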

Answered Oct 02 '22 by GalacticCowboy


Yes YAGNI absolutely applies to writing tests.

As an example, I, for one, do not write tests to check any Properties. I assume that properties work a certain way, and until I come to one that does something different from the norm, I won't have tests for them.

You should always consider the validity of writing any test. If there is no clear benefit to you in writing the test, then I would advise that you don't. However, this is clearly very subjective, since what you might think is not worth it, someone else could think is well worth the effort.

Also, would I write tests to validate input? Absolutely. However, I would do it to a point. Say you have a function with 3 parameters that are ints and it returns a double. How many tests are you going to write around that function? I would use YAGNI here to determine which tests are going to get you a good ROI, and which are useless.
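
As a hypothetical sketch in Python with pytest (weighted_average is made up for the example), a small parameterized set can cover the high-ROI cases without chasing every possible combination of three integers:

    import pytest

    # Hypothetical unit under test: three ints in, a float out.
    def weighted_average(a: int, b: int, c: int) -> float:
        return (a + 2 * b + 3 * c) / 6

    # A handful of high-value cases rather than an exhaustive sweep of inputs.
    @pytest.mark.parametrize(
        "a, b, c, expected",
        [
            (6, 6, 6, 6.0),      # typical values
            (0, 0, 0, 0.0),      # all-zero edge
            (-6, -6, -6, -6.0),  # negative inputs
        ],
    )
    def test_weighted_average(a, b, c, expected):
        assert weighted_average(a, b, c) == pytest.approx(expected)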

Answered Oct 02 '22 by Joseph


You should write the tests for the use cases you are going to implement during this phase of development.

This gives the following benefits:

  1. Your tests help define the functionality of this phase.
  2. You know when you've completed this phase because all of your tests pass.

Answered Oct 02 '22 by ChrisF


Ideally, you should write tests that cover all your code. Otherwise, the rest of your tests lose value, and you will end up debugging that piece of code repeatedly.

So, no. YAGNI does not include tests :)

Answered Oct 02 '22 by jAST


There is of course no point in writing tests for use cases you're not sure will get implemented at all - that much should be obvious to anyone.

For use cases you know will get implemented, test cases are subject to diminishing returns: trying to cover each and every obscure corner case is not a useful goal when you can cover all the important and critical paths with half the work. That assumes, of course, that the cost of overlooking a rarely occurring error is tolerable; I certainly wouldn't settle for anything less than 100% code and branch coverage when writing avionics software.

Answered Oct 02 '22 by Michael Borgwardt


You'll probably get some variance here, but generally, the goal of writing tests (to me) is to ensure that all your code is functioning as it should, without side effects, in a predictable fashion and without defects. In my mind, then, the approach you discuss of only writing tests for use cases as you come upon them does you no real good, and may in fact cause harm.

What if the particular use case for the unit under test that you ignore causes a serious defect in the final software? Has the time spent developing tests bought you anything in this scenario beyond a false sense of security?

(For the record, this is one of the issues I have with using code coverage to "measure" test quality -- it's a measurement that, if low, may give an indication that you're not testing enough, but if high, should not be used to assume that you are rock-solid. Get the common cases tested, the edge cases tested, then consider all the ifs, ands and buts of the unit and test them, too.)

Mild Update

I should note that I'm coming from possibly a different perspective than many here. I often find that I'm writing library-style code, that is, code which will be reused in multiple projects, for multiple different clients. As a result, it is generally impossible for me to say with any certainty that certain use cases simply won't happen. The best I can do is either document that they're not expected (and hence may require updating the tests afterward), or -- and this is my preference :) -- just write the tests. I often find option #2 is far more livable on a day-to-day basis, simply because I have much more confidence when I'm reusing component X in new application Y. And confidence, in my mind, is what automated testing is all about.

Answered Oct 02 '22 by John Rudy


You should certainly hold off writing test cases for functionality you're not going to implement yet. Tests should only be written for existing functionality or functionality you're about to put in.

However, use cases are not the same as functionality. You only need to test the valid use cases that you've identified, but there are going to be a lot of other things that might happen, and you want to make sure those inputs get a reasonable response (which could well be an error message).
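
A minimal sketch of that idea in Python with pytest (parse_age is invented for the example): the valid use case is tested, and one test checks that input outside the identified use cases still fails with a clear error rather than doing something silent and surprising.

    import pytest

    # Hypothetical unit under test: it should reject bad input loudly.
    def parse_age(value: str) -> int:
        age = int(value)  # raises ValueError for non-numeric strings
        if not 0 <= age <= 150:
            raise ValueError(f"age out of range: {age}")
        return age

    def test_valid_use_case():
        assert parse_age("42") == 42

    def test_unexpected_input_gets_a_reasonable_response():
        # Not a use case we designed for, but it must still fail clearly.
        with pytest.raises(ValueError):
            parse_age("forty-two")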

Obviously, you aren't going to get all the possible use cases; if you could, there'd be no need to worry about computer security. You should get at least the more plausible ones, and as problems come up you should add them to the use cases to test.

Answered Oct 02 '22 by David Thornley


I think the answer here is, as it is in so many places, it depends. If the contract that a function presents states that it does X, and I see that it's got associated unit tests, etc., I'm inclined to think it's a well-tested unit and use it as such, even if I don't use it that exact way elsewhere. If that particular usage pattern is untested, then I might get confusing or hard-to-trace errors. For this reason, I think a test should cover all (or most) of the defined, documented behavior of a unit.

If you choose to test more incrementally, I might add to the doc comments that the function is "only tested for [certain kinds of input], results for other inputs are undefined".
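
In Python, that doc comment might look something like this (the function is invented for the example):

    def normalize_scores(scores: list[float]) -> list[float]:
        """Scale scores so the largest value becomes 1.0.

        Only tested for non-empty lists of positive numbers; results for
        other inputs are undefined.
        """
        peak = max(scores)
        return [s / peak for s in scores]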

Answered Oct 02 '22 by Paul Fisher