I'm having a moral dilemma. I have some value objects in my application, which are immutable and extremely simple. I've generated the equals and hashCode methods with an IDE (IntelliJ in my case), but doing that made the code coverage drop, and the reports now indicate that those value objects are very complex (by the cyclomatic complexity metric) when in fact they're dead simple.
As an example, the following equals is in a value object that has 3 immutable attributes. The code complexity is 14 (JavaNCSS) and it has 26 execution branches (Cobertura). I should add, too, that I fail the build if any method has a complexity greater than 10.
@Override
public boolean equals(Object o) {
    if (this == o) {
        return true;
    }
    if (o == null || getClass() != o.getClass()) {
        return false;
    }
    TranscriptTaskDetails that = (TranscriptTaskDetails) o;
    if (inputFile != null ? !inputFile.equals(that.inputFile) : that.inputFile != null) {
        return false;
    }
    if (language != that.language) {
        return false;
    }
    if (outputFile != null ? !outputFile.equals(that.outputFile) : that.outputFile != null) {
        return false;
    }
    return true;
}
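The generated hashCode for the same three fields looks roughly like this (sketched from IntelliJ's default template; the exact output can vary by IDE version):

@Override
public int hashCode() {
    // IntelliJ's default template: null-check each field, combine with prime 31
    int result = inputFile != null ? inputFile.hashCode() : 0;
    result = 31 * result + (language != null ? language.hashCode() : 0);
    result = 31 * result + (outputFile != null ? outputFile.hashCode() : 0);
    return result;
}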
I'm wondering what other devs use to get around this. I pay quite a lot of attention to the complexity reports, since in my experience a high complexity metric correlates with more bugs, so these auto-generated equals and hashCode methods are polluting the reports.
I'm thinking of using EqualsBuilder and HashCodeBuilder from Apache commons-lang to circumvent this, but I'm not 100% happy :S.
I should have added that part of the code I'm writing for this project is a library that will be used by other business units... and will be maintained by a different team too :S.
"I'm thinking of using EqualsBuilder and HashCodeBuilder from Apache commons-lang to circumvent this, but I'm not 100% happy."

Why not use these?
Using them reduces the size and complexity of your own methods, and it makes it much easier to verify visually that equals and hashCode are implemented consistently.
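Here's a sketch of how the same pair might look with those builders (assuming commons-lang 2.x package names; in commons-lang3 the package is org.apache.commons.lang3.builder):

import org.apache.commons.lang.builder.EqualsBuilder;
import org.apache.commons.lang.builder.HashCodeBuilder;

@Override
public boolean equals(Object o) {
    if (this == o) {
        return true;
    }
    if (o == null || getClass() != o.getClass()) {
        return false;
    }
    TranscriptTaskDetails that = (TranscriptTaskDetails) o;
    // Each append handles nulls internally, which is what eliminates
    // the ternary branches that were inflating the complexity numbers
    return new EqualsBuilder()
            .append(inputFile, that.inputFile)
            .append(language, that.language)
            .append(outputFile, that.outputFile)
            .isEquals();
}

@Override
public int hashCode() {
    // The two seed numbers are arbitrary, but must be odd, non-zero constants
    return new HashCodeBuilder(17, 31)
            .append(inputFile)
            .append(language)
            .append(outputFile)
            .toHashCode();
}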
It's of course also a good idea, and pretty easy, to test that the equals/hashCode contract is satisfied, so by all means write tests.
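A minimal sketch of such a test (the three-argument constructor, the field values, and the Language enum here are hypothetical; adjust to your actual API):

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import org.junit.Test;

public class TranscriptTaskDetailsTest {

    @Test
    public void equalValuesAreEqualAndHashAlike() {
        // Hypothetical constructor; substitute however your value object is built
        TranscriptTaskDetails a = new TranscriptTaskDetails("in.wav", Language.EN, "out.txt");
        TranscriptTaskDetails b = new TranscriptTaskDetails("in.wav", Language.EN, "out.txt");
        assertEquals(a, b);
        assertEquals(b, a); // symmetry
        assertEquals(a.hashCode(), b.hashCode()); // equal objects must hash alike
    }

    @Test
    public void differingFieldBreaksEquality() {
        TranscriptTaskDetails a = new TranscriptTaskDetails("in.wav", Language.EN, "out.txt");
        TranscriptTaskDetails c = new TranscriptTaskDetails("other.wav", Language.EN, "out.txt");
        assertFalse(a.equals(c));
        assertFalse(a.equals(null)); // comparison against null must return false
    }
}

If you'd rather not hand-roll these, libraries like EqualsVerifier can exercise the whole equals/hashCode contract for you.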
I think that your real problem is placing too much faith in artificial measures such as code coverage and cyclomatic complexity. Learn to trust your own judgment, and stop relying on tools to make your design decisions for you.
The corollary of what I just said is that if you think you can (and should) simplify the generated code while still ensuring it is correct for your current and projected use cases, then go ahead and simplify it.
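For example, if you're on Java 7 or later (an assumption; java.util.Objects doesn't exist before that), the same equals and hashCode can be simplified substantially without losing the null handling:

import java.util.Objects;

@Override
public boolean equals(Object o) {
    if (this == o) {
        return true;
    }
    if (o == null || getClass() != o.getClass()) {
        return false;
    }
    TranscriptTaskDetails that = (TranscriptTaskDetails) o;
    // Objects.equals handles nulls for us, collapsing the ternary branches;
    // == for language matches the original comparison (it looks like an enum)
    return Objects.equals(inputFile, that.inputFile)
            && language == that.language
            && Objects.equals(outputFile, that.outputFile);
}

@Override
public int hashCode() {
    // Objects.hash null-checks each field internally
    return Objects.hash(inputFile, language, outputFile);
}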
By the way, a generated equals/hashCode pair is probably more likely to be correct than a hand-written one ... written by an "average bear" programmer. For instance, note the careful way the generated code handles null fields and compares the types. A lot of developers wouldn't get those right.
There's a contradiction here. You don't have any reason at all to worry about the complexity of auto-generated code. It's the hand-written code you should be worrying about.
How high is your code coverage? Some people argue that shooting for 100% is a sign of anal-retentive tendencies. If you're in the 70-80% range, and you know that what you haven't tested isn't a problem, then why worry about it?
On the other hand, these tests aren't that difficult to write. Why not write them, be done with it, and sleep at night? You would have finished the tests in the time it took to type up your moral dilemma here and wait for answers.