I'm confused about what the various testing tools in Ruby on Rails are for. I have been using the framework for about six months, but I've never understood the testing part of it. The only testing I've done is with JUnit 3 in Java, and only briefly.
Everything I've read about it just shows testing validations. Shouldn't the validations in Rails just work? It seems more like testing the framework than testing your code. Why would you need to test validations?
Furthermore, the tests seem super fragile to any change in your code. So if you change anything in your models, you have to change your tests and fixtures to match. Doesn't this violate the DRY principle?
Third, writing test code seems to take a lot of time. Is that normal? Wouldn't it just be faster to refresh my browser and see if it worked? I already have to play with my application just to see if it flows correctly and make sure my CSS hasn't exploded. Why wouldn't manual testing be enough?
I've asked these questions before and haven't gotten more than "automated testing is automated". I am smart enough to figure out the advantages of automating a task. My problem is that the costs of writing tests seem absurdly high compared to the benefits. That said, any detailed response is welcome, because I've probably missed a benefit or two.
Shouldn't the validations in Rails just work? It seems more like testing the framework than testing your code. Why would you need to test validations?
The validations in Rails do work -- in fact, there are unit tests in the Rails codebase to ensure it. When you test a model's validation, you're testing the specifics of the validation: the length, the accepted values, and so on. You're making sure the code was written as intended. Some validations are simple helpers, and you may opt not to test them on the notion that "no one can mess up a validates_numericality_of call." Is that true? Does every developer always remember to write it in the first place? Does every developer never accidentally delete a line in a bad copy-paste? In my personal opinion, you don't need to test every last combination of values for a Rails validation helper, but you do need a line testing that it's there with the right values passed, just in case some punk changes it in the future without proper forethought.
Further, other validations are more complex, requiring lots of custom code -- they may warrant more thorough testing.
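For instance, a hand-rolled validation like the hypothetical one below has real logic in it, and that logic deserves a couple of focused tests of its own (the Booking model and its columns are made up for the sake of the sketch):

```ruby
class Booking < ActiveRecord::Base
  validate :ends_after_it_starts

  private

  # Custom logic: a booking's end time must come after its start time.
  def ends_after_it_starts
    return if starts_at.blank? || ends_at.blank?
    errors.add(:ends_at, "must be after the start time") if ends_at <= starts_at
  end
end

class BookingTest < ActiveSupport::TestCase
  def test_rejects_a_booking_that_ends_before_it_starts
    booking = Booking.new(:starts_at => Time.now, :ends_at => 1.hour.ago)
    assert !booking.valid?
    assert booking.errors[:ends_at].present?
  end

  def test_accepts_a_booking_with_a_sensible_range
    booking = Booking.new(:starts_at => Time.now, :ends_at => 1.hour.from_now)
    booking.valid?
    assert booking.errors[:ends_at].blank?
  end
end
```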
Furthermore, the tests seem super fragile to any change in your code. So if you change anything in your models, you have to change your tests and fixtures to match. Doesn't this violate the DRY principle?
I don't believe it violates DRY. They're communicating (that's what programming is, communication) two very different things. The test says the code should do something. The code says what it actually does. Testing is extremely important when there is a disconnect between those things.
Test code and application code are intimately linked, obviously. I think of them as two sides of a coin. You wouldn't want a front without a back, or a back without a front. Good test code reinforces good application code, and vice versa. The two together are used to understand the whole problem that you're trying to solve. And well written test code is documentation -- it shows how the application code should be used.
Third, writing test code seems to take a lot of time. Is that normal? Wouldn't it just be faster to refresh my browser and see if it worked? I already have to play with my application just to see if it flows correctly and make sure my CSS hasn't exploded. Why wouldn't manual testing be enough?
You've only worked on very small projects, for which that kind of testing is arguably sufficient. However, when you work on a project with several developers, thousands or tens of thousands of lines of code, integration points with web services, third-party libraries, multiple databases, months of development and requirements changes, etc., there are a lot of other factors in play. Manual testing is simply not enough. In a project of any real complexity, changes in one place often have unforeseen results in others. Proper architecture helps mitigate this problem, but automated testing helps as well (and helps identify points where the architecture can be improved) by identifying when a change in one place breaks another.
My problem is that the costs of writing tests seem absurdly high compared to the benefits. That said, any detailed response is welcome, because I've probably missed a benefit or two.
I'll list a few more benefits.
If you test first (Test Driven Development), your code will probably be better. I haven't met a programmer who gave it a solid shot for whom this wasn't the case. Testing first forces you to think about the problem and actually design your solution, rather than hacking it out. Further, it forces you to understand the problem domain well enough that, even if you do have to hack something out, you know your code works within the limitations you've defined.
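A small sketch of that rhythm, using hypothetical Order and LineItem models: the test is written first, fails, and then pulls the implementation into existence.

```ruby
# Written first. It fails because Order#total doesn't exist yet.
class OrderTest < ActiveSupport::TestCase
  def test_total_sums_the_line_item_prices
    order = Order.new
    order.line_items.build(:price => 5)
    order.line_items.build(:price => 7)
    assert_equal 12, order.total
  end
end

# Written second, to make the test pass.
class Order < ActiveRecord::Base
  has_many :line_items

  def total
    line_items.map(&:price).sum
  end
end
```

The value isn't in the three lines of model code; it's that you had to decide what "total" means before you wrote it.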
If you have full test coverage, you can refactor with NO RISK. If a software problem is very complicated (again, real world projects that last for months tend to be complicated) then you may wish to simplify code that has previously been written. So, you can write new code to replace the old code, and if it passes all of your tests, you're done. It does exactly what the old code did with respect to the tests. For a project that plans to use an agile development method, refactoring is absolutely essential. Changes will always need to be made.
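Here's a tiny sketch of what that looks like, with a hypothetical User#full_name method: the test stays fixed while the implementation is rewritten underneath it.

```ruby
class UserTest < ActiveSupport::TestCase
  def test_full_name_joins_first_and_last_names
    user = User.new(:first_name => "Ada", :last_name => "Lovelace")
    assert_equal "Ada Lovelace", user.full_name
  end
end

class User < ActiveRecord::Base
  # The original, clumsier version:
  #   def full_name
  #     name = first_name.to_s
  #     name = name + " "
  #     name + last_name.to_s
  #   end

  # The refactored version. The test above passes either way,
  # which is exactly what makes the rewrite safe.
  def full_name
    "#{first_name} #{last_name}"
  end
end
```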
To sum up, automated testing, especially test driven development, is basically a method of managing the complexity of software development. If your project isn't very complex, the cost may outweigh the benefits (although I doubt it). However, real world projects tend to be very complex, and the results of testing and TDD speak for themselves: they work.
(If you're curious, I find Dan North's article on Behavior Driven Development to be very helpful in understanding a lot of the value in testing: http://dannorth.net/introducing-bdd)
Tests should validate your application logic. Personally, I think my most important tests are the ones I run in Selenium: they check that what shows up in the browser is actually what I expect to see. However, if that were all I had, I would find it hard to debug; it helps to have lower-level tests as well, and integration, functional, and unit tests are all useful tools. Unit tests let you check that the model behaves the way you expect it to (and that means every method, not just validations). Validations will certainly Just Work, but only if you get them right. If you get them wrong, they will still Just Work, just not the way you expected. Writing a couple of lines of test is quicker than debugging later on.
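As a hypothetical example of getting one wrong: a length validation that was meant to cap usernames at 20 characters but was written with the wrong option runs without complaint, and only a test notices.

```ruby
class Member < ActiveRecord::Base
  # Intended: at most 20 characters. Actually written: at least 20.
  validates_length_of :username, :minimum => 20
end

class MemberTest < ActiveSupport::TestCase
  def test_accepts_a_short_username
    member = Member.new(:username => "ada")
    member.valid?
    assert member.errors[:username].blank?  # fails, exposing the mixed-up option
  end
end
```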
A simple example like the one at http://wiseheartdesign.com/2006/01/16/testing-rails-validations just checks validations in a unit test. The O'Reilly article at http://www.oreillynet.com/pub/a/ruby/2007/06/07/rails-testing-not-just-for-the-paranoid.html?page=1 is a bit more complete (though still fairly basic).
Automated testing is particularly useful in regression testing where you change something and run a suite of tests to check that you didn't break anything else.
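A regression test is often nothing more exotic than an ordinary unit test added when a bug is fixed; in this sketch the LineItem model and its effective_discount method are hypothetical.

```ruby
class LineItemTest < ActiveSupport::TestCase
  # Added after a real bug: discounts larger than the price produced
  # negative totals. The whole suite now guards against its return.
  def test_discount_never_exceeds_the_price
    item = LineItem.new(:price => 10, :discount => 15)
    assert_equal 10, item.effective_discount
  end
end
```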
Tests are a form of repetition, but they don't violate DRY because they express things in a different way. A test says "I did X, so Y should happen." Code says "X happened, so now I need to do Z, which happens to cause Y." In other words, a test stimulates a cause and checks for an effect, while the code responds to a cause and brings an effect about.
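A toy illustration of that difference, using a hypothetical Account model:

```ruby
# The test stimulates a cause and checks the effect:
class AccountTest < ActiveSupport::TestCase
  def test_depositing_increases_the_balance   # "I did X..."
    account = Account.new(:balance => 0)
    account.deposit(25)
    assert_equal 25, account.balance          # "...so Y should happen."
  end
end

# The code responds to the cause and brings the effect about:
class Account < ActiveRecord::Base
  def deposit(amount)                         # "X happened..."
    self.balance += amount                    # "...so I do Z, which causes Y."
  end
end
```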