I've recently started using code coverage tools (particularly Emma and EclEmma), and I really like the view they give me of how complete my unit tests are - and the ability to see which areas of the code my unit tests aren't hitting at all. I currently work in an organization that doesn't do a lot of unit testing, and I plan on really pushing everyone to take on unit testing, code coverage, and TDD, and hopefully convert the organization.
One issue that I'm unsure of with this subject is exactly how far I should take my code coverage. For example, if I have a class such as this:
// this class is meant as a pseudo-enum - I'm stuck on Java 1.4 for the time being
public final class BillingUnit {
    public final static BillingUnit MONTH = new BillingUnit("month");
    public final static BillingUnit YEAR = new BillingUnit("year");

    private String value;

    private BillingUnit(String value) {
        this.value = value;
    }

    public String getValue() {
        return this.value;
    }

    public boolean equals(Object obj) {
        return value.equals(((BillingUnit) obj).getValue());
    }

    public int hashCode() {
        return value.hashCode();
    }
}
I wrote some simple unit tests to make sure that equals() works correctly, that getValue() returns what I expected, and so on. But thanks to the visual nature of EclEmma, the hashCode() method shows up as bright red for "not tested".
Is it worthwhile to even bother testing hashCode() in this example, considering how simple the implementation is? I feel like I would be adding a unit test for this method simply to bump the code coverage % up and get rid of the glaring red highlight that EclEmma adds to these lines.
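For reference, such a test can be very short: the equals()/hashCode() contract says equal objects must report equal hash codes, and that hashCode() must be stable across calls. A minimal sketch (BillingUnit is duplicated here so the example is self-contained, and the plain main() harness stands in for whatever test framework you use):

```java
// Copy of the pseudo-enum from the question, repeated for a runnable example.
final class BillingUnit {
    public final static BillingUnit MONTH = new BillingUnit("month");
    public final static BillingUnit YEAR = new BillingUnit("year");

    private String value;

    private BillingUnit(String value) {
        this.value = value;
    }

    public String getValue() {
        return this.value;
    }

    public boolean equals(Object obj) {
        return value.equals(((BillingUnit) obj).getValue());
    }

    public int hashCode() {
        return value.hashCode();
    }
}

public class BillingUnitHashCodeTest {
    public static void main(String[] args) {
        // Equal objects must have equal hash codes (the core contract).
        BillingUnit same = BillingUnit.MONTH;
        if (!BillingUnit.MONTH.equals(same)
                || BillingUnit.MONTH.hashCode() != same.hashCode()) {
            throw new AssertionError("equal objects must share a hash code");
        }
        // hashCode() must return the same value on repeated calls.
        if (BillingUnit.MONTH.hashCode() != BillingUnit.MONTH.hashCode()) {
            throw new AssertionError("hashCode must be consistent");
        }
        System.out.println("ok");
    }
}
```

Whether that three-line check earns its keep is exactly the judgment call in question.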
Maybe I'm being neurotic and OCD-like, but I find that using something like EclEmma, which makes it so easy to see what is untested - the plugin highlights uncovered source code in red and covered code in green - really makes me want to push to get as many classes 100% green as I can, even when it doesn't add much of a benefit.
I use code coverage to give me hints on places where I may have an incomplete set of tests. For example, I may write a test for some given functionality, then go develop the code that satisfies that functionality, but in doing so actually write code that does more than it is supposed to -- say it might catch an exception in an alternate case that the test doesn't exercise. When I use the coverage analyzer, I can see that I've introduced code that doesn't have an associated test. It helps me to know when I haven't written enough tests.
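As a concrete illustration of that situation (the class and method names here are invented for the example): the test below drives only the successful parse, so a coverage tool would flag the catch block as unexecuted - a hint that the alternate case still needs a test.

```java
// Hypothetical example: code that quietly does more than its test exercises.
public class PriceParser {
    // Parses a price in cents; falls back to zero on malformed input.
    public static int parseCents(String raw) {
        try {
            return Integer.parseInt(raw);
        } catch (NumberFormatException e) {
            // Alternate case: the test in main() never reaches this line,
            // which is exactly what coverage highlighting reveals.
            return 0;
        }
    }

    public static void main(String[] args) {
        // Only the happy path is exercised here.
        if (parseCents("1250") != 1250) {
            throw new AssertionError("happy path failed");
        }
        System.out.println("ok");
    }
}
```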
On the other hand, coverage analysis can lead to false security. Having all of your code covered does not mean that you have enough tests. You need to think about tests from the perspective of what the code should do, and write tests to make sure that it does it - preferably by writing the test first. Just because your code is completely covered does not mean that the code does what it is supposed to do.
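A small invented example makes the point: the "test" below executes every line of the method, so line coverage reads 100%, yet it asserts nothing, and the bug (subtracting the percentage as a flat amount) sails through.

```java
// Hypothetical example: full coverage, zero verification.
public class Discount {
    // Intended: return price reduced by percent.
    // Actual bug: subtracts percent as a flat amount instead.
    public static int discounted(int price, int percent) {
        return price - percent; // should be price - price * percent / 100
    }

    public static void main(String[] args) {
        // Every line of discounted() runs, but the result is never checked,
        // so a coverage report alone would show this code as "tested".
        discounted(200, 10);
        System.out.println("covered, but nothing was asserted");
    }
}
```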
In your example, I would have written the test for hashCode to define what the method's functionality should be, before I wrote the code; therefore, I would have it covered. That doesn't mean that I always have 100% coverage. I'm not overly zealous about writing tests for simple accessors, for example. I also may not test methods inherited from a framework parent class, since I don't feel the need to test other people's code.