My question is mostly about testing methodology. I work for an organization that practices TDD (Test-Driven Development). We use AngularJS and hence its full testing stack: Jasmine for unit tests and Protractor for e2e testing.
When developing a feature, our process begins by writing a failing e2e test and then building the feature out using TDD. Tests are written only for public methods (whether on controllers, directives, or services). The product itself does not contain any complex logic (apart from a couple of exceptions). Recently we began discussing whether there is any point in writing unit tests for controllers: 100% of the functionality they expose goes to the view, and it is exercised by the e2e tests anyway. In other words, the unit tests and e2e tests overlap. At first we all agreed, but that decision opened a Pandora's box. The same could be said about directives, so why test them as well? Then the question of services came up. Most of them (98%) simply make a back-end call and return the response. So why not simply mock $httpBackend and exercise the services while testing the controllers, which are in turn covered by the e2e tests?
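To make the "services just forward a back-end call" point concrete, here is a minimal, framework-agnostic sketch of such a thin service and its unit test. The service and its fake http client are invented for illustration; in real AngularJS code the fake would be ngMock's $httpBackend rather than this hand-rolled stand-in.

```javascript
// Hypothetical thin service: it wraps a single back-end call and returns
// the response body, with no logic of its own. In AngularJS this would be
// a service injected with $http; here the http client is passed in so the
// test can substitute a fake.
function UserService(httpClient) {
  this.http = httpClient;
}

UserService.prototype.getUser = function (id) {
  // Delegates straight to the back end and unwraps the response.
  return this.http.get('/api/users/' + id).then(function (res) {
    return res.data;
  });
};

// Unit test with a hand-rolled fake, standing in for $httpBackend: it
// records the requested URL and resolves with canned data.
var fakeHttp = {
  get: function (url) {
    return Promise.resolve({ data: { id: 1, name: 'Ada', requestedUrl: url } });
  }
};

new UserService(fakeHttp).getUser(1).then(function (user) {
  console.log(user.name);          // "Ada"
  console.log(user.requestedUrl);  // "/api/users/1"
});
```

The test pins down exactly two things: which URL was hit and that the response body is returned unchanged. Since an e2e test clicking through the UI would implicitly verify both, this is where the overlap the question describes comes from.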
You get the drift....
I do see benefits in writing both unit tests and e2e tests, despite the practical overlap: mainly instant feedback and "executable documentation". What do you practice? Do you see other benefits, and is the juice worth the squeeze? That is, is it worth writing overlapping tests for the simplest of implementations just to get the two benefits above?
This is a big topic and not something that can really have an authoritative answer, but I'll do my best to cover a few points.
First, you should be thinking about the purpose of the tests. According to the Agile Testing Quadrants, unit testing exists primarily to support the team. Unit tests are generally written close to the product (e.g. using TDD, probably by the developers themselves) and serve to increase the developer's confidence that they haven't broken anything with that last change. With this confidence, developers can work efficiently and refactor with reckless abandon - the TDD dream. Unit tests don't answer the question "Is this fit for our customer's purpose?", but that's not why they are there.
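A tiny sketch of that "refactor with confidence" loop, using an invented helper and plain assertions in place of a Jasmine spec: as long as these checks stay green, the implementation underneath can change freely.

```javascript
// Made-up example: a pure formatting helper covered by fast, isolated
// checks that run on every change. A real suite would use Jasmine's
// describe/it/expect; console.assert keeps the sketch self-contained.
function formatPrice(cents) {
  return '$' + (cents / 100).toFixed(2);
}

// Instant feedback: these pin down the observable behavior, so the body
// of formatPrice can be rewritten at will without fear of regression.
console.assert(formatPrice(0) === '$0.00');
console.assert(formatPrice(1999) === '$19.99');
console.log('all green');
```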
Functional testing (e2e, if I understand your description) still supports the team with fast turnaround of test results, but it actually does start to answer the question "Can a user do the thing?". You're testing what the user sees, and starting to test your actual product in a way that is meaningful to users.
Quadrants 3 and 4 start to address whether or not the product does the thing well (i.e. is it fit for purpose, not just functional), but that's another subject.
Based on this understanding of testing, part of the answer depends on your team structure. Do you have separate dev and test teams? If so, it may make sense for your devs to write unit tests (those tests are for their benefit, after all) and for the test team to handle the other quadrants independently (including writing e2e tests as they see fit). And if your test and dev teams are the same? If you can get a similar turnaround time (test written -> useful result) out of your functional/e2e tests as you can from unit tests, it may make sense to focus on them and reap the rewards of both methods without the overlap.
The short answer I'd give is to simply ask "What benefit are we getting out of this test?". If you find your answers for tests overlapping, it may make sense to remove the redundancy.
Some of the above points and a few more are discussed here, so I'll stop rambling for now.