I was wondering if anyone had any idea about how one would go about re-running failed JUnit tests within the same run. For example, tests 1-5 run and all pass; then test 6 runs and fails the first time. It would then automatically be run a second time before moving on to test 7. I am using an Ant script that runs all of my tests, and the tests run on a Hudson box, if that helps at all. I have read about collecting the failed tests into a new file and running them the second time the suite is run, but that's not really what I am looking for.
Any help or pointers in the right direction would be helpful. Thank you.
<!-- ============================= -->
<!-- target: test-regression-all   -->
<!-- ============================= -->
<target name="test-regression-all" description="Runs all tests tagged as regression" depends="compile">
    <mkdir dir="${target.reports.dir}"/>
    <junit printsummary="yes" haltonerror="no" haltonfailure="no" fork="yes"
           failureproperty="junit.failure" errorproperty="junit.error" showoutput="true">
        <formatter type="xml"/>
        <classpath>
            <pathelement location="${target.build.classes.dir}"/>
            <path refid="classpath"/>
        </classpath>
        <batchtest todir="${target.reports.dir}">
            <fileset dir="${src.dir}">
                <include name="emailMarketing/AssetLibrary/*.java"/>
                <include name="emailMarketing/attributes/*.java"/>
                <include name="emailMarketing/contacts/*.java"/>
                <include name="emailMarketing/DomainKeys/*.java"/>
                <include name="emailMarketing/lists/*.java"/>
                <include name="emailMarketing/messages/*.java"/>
                <include name="emailMarketing/Segments/*.java"/>
                <include name="emailMarketing/UploadContact/*.java"/>
                <exclude name="emailMarketing/lists/ListArchive.java"/>
                <exclude name="emailMarketing/messages/MessageCreation.java"/>
            </fileset>
        </batchtest>
        <jvmarg value="-Duser=${user}"/>
        <jvmarg value="-Dpw=${pw}"/>
        <jvmarg value="-Dbrowser=${browser}"/>
        <jvmarg value="-Dserver=${server}"/>
        <jvmarg value="-Dopen=${open}"/>
        <jvmarg value="-DtestType=regression"/>
    </junit>
    <junitreport todir="${target.reports.dir}">
        <fileset dir="${target.reports.dir}">
            <include name="TEST-*.xml"/>
        </fileset>
        <report todir="${target.reports.dir}"/>
    </junitreport>
    <fail if="junit.failure" message="Test(s) failed. See reports!"/>
    <fail if="junit.error" message="Test(s) errored. See reports!"/>
</target>
Take a look at the Ant Retry task. It is a container that re-executes its single nested task until the task succeeds or the retry count is exhausted. Since `haltonfailure="yes"` makes `<junit>` raise a build failure when a test fails, wrapping it in `<retry>` re-runs the failed test immediately:
<target name="myTest1">
    <mkdir dir="${junit.output.dir}"/>
    <retry retrycount="3">
        <junit haltonerror="yes" haltonfailure="yes"
               fork="no" printsummary="withOutAndErr"
               showoutput="true" tempdir="c:/tmp">
            <formatter type="xml"/>
            <test name="MyPackage.myTest1" todir="${junit.output.dir}"/>
            <classpath refid="Libs.classpath"/>
            <formatter type="brief" usefile="false"/>
        </junit>
    </retry>
</target>
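The retry logic that `<retry>` applies to its nested task can be sketched in plain Java; this is a hypothetical illustration (the `TestBody` interface and `runWithRetry` helper are made up for the example, not part of Ant or JUnit), not how Ant is actually implemented:

```java
// Sketch of "retry up to N times" semantics: run the body, swallow failures
// until the attempt budget runs out, and rethrow only the last failure.
public class RetryDemo {
    interface TestBody { void run() throws Exception; }

    // Returns the attempt number on which the body finally passed.
    static int runWithRetry(TestBody body, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                body.run();
                return attempt;        // success: stop retrying
            } catch (Exception e) {
                last = e;              // remember failure, try again
            }
        }
        throw last;                    // budget exhausted: propagate failure
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Simulated flaky test: fails on the first call, passes on the second.
        int attempts = runWithRetry(() -> {
            calls[0]++;
            if (calls[0] < 2) throw new IllegalStateException("flaky");
        }, 3);
        System.out.println("passed on attempt " + attempts);
    }
}
```

Note that, as with Ant's `<retry>`, a test that fails deterministically still fails after the last attempt; retrying only helps with flaky tests.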
Two caveats. Tests should be deterministic, so that failures are reproducible; if a test fails for a deterministic reason, immediately rerunning it will simply fail again. Tests should also be independent: each one should do its own setup (and teardown), and JUnit generally gives you no guaranteed execution order, so it should never be necessary to rerun test 6 in order to set up the environment for test 7.
If what you actually want is test case prioritization, i.e. starting with the previously failed tests when rerunning the suite after a code fix, that is a different problem from retrying within the same run.