To avoid unnecessary testing, I would like to give the Quality Assurance (QA) team hints about which features need to be regression tested after a development iteration. Do you know of tools that could do that in a C++ and Subversion (and Visual Studio) development environment?
Details about the use case:
Most likely such a tool would use static code analysis and consume the Subversion APIs. But does it exist?
Regression testing is a type of software testing that confirms a recent program or code change has not adversely affected existing features. It is simply a full or partial re-execution of already executed test cases to ensure that existing functionality still works.
Reducing regression testing can be a real challenge, because it is a vital procedure that seeks out defects by retesting the entire software system.
Prune the pack: regression tests take up time and resources, so remove tests that cover obsolete features or outdated versions of the software; you should only be testing what really matters.
Use the insights: regression tests are only as useful as what test engineers and developers do with their results.
G'day,
What you are describing isn't really regression testing. You're just testing new features.
Regression testing is where you specifically run your complete test suite to see if the code supporting your new feature has broken previously working code.
I'd highly recommend reading Martin Fowler's excellent paper "Continuous Integration" which covers some of the aspects you are talking about.
It may also provide you with a better way of working, specifically the CI aspects Martin talks about in his paper.
Edit: This is especially true because CI has some hidden little traps that are obvious in hindsight, such as stopping testers from trying to test a version that doesn't yet have all the files implementing a new feature committed. (You verify that there have been no commits in the last five minutes; a small sketch of such a check is shown below.)
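For illustration only, here is a minimal sketch of that "quiet period" check, assuming a command-line svn client is available; the repository URL is a placeholder, not something from your setup:

```python
# Minimal sketch (my own illustration) of the "no commits in the last five
# minutes" check; the repository URL below is a placeholder.
import subprocess
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone

REPO_URL = "https://svn.example.com/repo/trunk"  # hypothetical repository URL

def quiescent_for(minutes=5):
    # 'svn log --xml -l 1' returns the most recent commit as a small XML document.
    xml_out = subprocess.run(
        ["svn", "log", "--xml", "-l", "1", REPO_URL],
        capture_output=True, text=True, check=True).stdout
    date_text = ET.fromstring(xml_out).find("./logentry/date").text
    # Subversion prints ISO 8601 dates ending in 'Z' (UTC).
    last_commit = datetime.fromisoformat(date_text.replace("Z", "+00:00"))
    return datetime.now(timezone.utc) - last_commit > timedelta(minutes=minutes)

if __name__ == "__main__":
    if quiescent_for(5):
        print("No commits in the last five minutes; safe to hand to testers.")
    else:
        print("A commit landed recently; wait before cutting a test build.")
```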
Another big point is the time lost when you have a broken build and nobody is aware of it until someone checks out the code and tries to build it so that they can test it. If it's broken, you now have a tester who is blocked, a developer who has to stop what they're doing to track down and fix the breakage, and a delay before anyone can test again.
The basic idea of CI is to do several builds of the complete product during the day so that you catch a broken build as early as possible. You may even select a few tests to check that the basic functionality of the product is still working; once again, the goal is to be notified as soon as possible that there is a problem with the current state of the build.
Edit: As for your question, what about tagging the repository when you've finished your testing, e.g. TESTS_COMPLETE_2009_12_16? Then, when you're ready to work out what the next set of tests should cover, do an "svn diff -r" between that latest tests-complete tag and HEAD, something like the sketch below.
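For illustration, here is a minimal sketch of that idea, assuming a command-line svn client; it is not an existing tool, and the repository URLs and the path-to-feature map are invented for the example:

```python
# Rough sketch only; not an existing tool. It lists the files changed since the
# last "tests complete" tag and maps them onto feature areas that QA recognises.
# The URLs and the path-to-feature map below are made-up examples.
import subprocess
from collections import defaultdict

TRUNK_URL = "https://svn.example.com/repo/trunk"                                 # hypothetical
LAST_TESTED_TAG = "https://svn.example.com/repo/tags/TESTS_COMPLETE_2009_12_16"  # hypothetical

# Hypothetical mapping from source directories to user-visible features.
FEATURE_MAP = {
    "src/reporting/": "Reporting",
    "src/importer/":  "Data import",
    "src/ui/":        "Main window UI",
}

def changed_paths():
    # 'svn diff --summarize' prints one status letter and one path per line.
    out = subprocess.run(
        ["svn", "diff", "--summarize",
         f"--old={LAST_TESTED_TAG}", f"--new={TRUNK_URL}"],
        capture_output=True, text=True, check=True).stdout
    return [line.split(None, 1)[1].strip()
            for line in out.splitlines() if line.strip()]

def features_to_retest():
    hits = defaultdict(list)
    for path in changed_paths():
        for prefix, feature in FEATURE_MAP.items():
            if prefix in path:
                hits[feature].append(path)
    return hits

if __name__ == "__main__":
    for feature, paths in sorted(features_to_retest().items()):
        print(f"{feature}: {len(paths)} changed file(s) since the last tested tag")
```

Creating the tag itself is just an "svn copy" of trunk into your tags directory after a test pass, so the whole thing fits naturally into the existing Subversion workflow.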
HTH
BTW I'll update this answer with some further suggestions as I think of them.
cheers,