 

Skip unit tests that take a long time

I'm using MSTest to write my unit tests. Most of my tests take less than 0.1 seconds to run. I want to somehow tell Visual Studio: "ignore the tests that take a long time to run when I'm running the tests manually, but do not ignore them when they run on the build."

I know that in NUnit there is an "Explicit" attribute that will run a test only if you select it explicitly. I think that might help, but I'm not sure.
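For reference, NUnit's Explicit attribute is applied directly to a test (or fixture) in code; a minimal sketch, with the class and test names made up purely for illustration:

    using NUnit.Framework;

    [TestFixture]
    public class SlowTests
    {
        // An [Explicit] test is skipped in a normal run and only executes
        // when it is explicitly selected in the test runner.
        [Test, Explicit("Takes several seconds; run manually when needed")]
        public void FullCatalogueImport_CompletesSuccessfully()
        {
            // ... long-running test body ...
        }
    }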

Help please

Chen Kinnrot asked Sep 21 '10



1 Answer

Thinking about this question in a language-agnostic, framework-agnostic manner shows that what you ask for is something of a conundrum:

The test tool has no idea how long any of the unit tests will take until they are run, because that depends not just on the test tool and the tests themselves, but also on the application under test. The stop-gap solution would be to do things such as setting a time limit. If you do this, it begs the question: when a test times out, should it pass, fail, or perhaps fall into some other (third) category? ... Conundrum!
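As an aside, MSTest does provide such a time limit through its Timeout attribute (specified in milliseconds); a test that exceeds it is aborted and reported as a failure, which is exactly the "which bucket does it fall into?" question above. A minimal sketch, with an illustrative test name:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class ReportTests
    {
        // Aborts and fails the test if it runs longer than 100 ms.
        [TestMethod, Timeout(100)]
        public void GenerateSummary_FinishesQuickly()
        {
            // ... test body ...
        }
    }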

Thus, to avoid this, I put forward that you should adopt a different strategy, where you as the developer decide which subsets of the entire set of tests you wish to run in different situations. For example (a sketch of how these subsets could be tagged in MSTest follows the list):

  • A set of smoke tests;
    • i.e. tests that you would want to run first, every time. If any of these fail, then you don't want to bother executing the rest of the tests. Put only the really fundamental tests in this group.
  • A minimal set of tests;
    • For your specific requirement, this would be the set containing all of the "quick" or "fast" tests, and you determine which ones those are.
  • A comprehensive set of tests;
    • The tests which do not belong to either of the other categories. For your specific requirement, these would be the "slow" or "long" tests.
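
In MSTest, one way to tag tests into such subsets is the TestCategory attribute; the category names below ("Smoke", "Minimal", "Comprehensive") are only placeholders for whatever grouping you settle on. A minimal sketch:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class OrderServiceTests
    {
        // Fundamental check: run first, every time.
        [TestMethod, TestCategory("Smoke")]
        public void CreateOrder_ReturnsOrderId()
        {
            // ... fast, fundamental assertion ...
        }

        // Quick test: part of the everyday developer run.
        [TestMethod, TestCategory("Minimal")]
        public void CalculateTotal_AppliesDiscount()
        {
            // ... sub-0.1 second test ...
        }

        // Slow test: only run as part of the full (build) run.
        [TestMethod, TestCategory("Comprehensive")]
        public void ImportLargeOrderHistory_Succeeds()
        {
            // ... long-running test ...
        }
    }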

When running your tests, you can then choose which of these subsets of tests to run, perhaps configuring it in some form of a script.
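
With the tests tagged by category, such a script can select subsets through the runner's filtering switches (the exact switches depend on your tooling version): for example, the VS 2010-era runner accepts something like mstest.exe /category:"Smoke", while the modern CLI accepts dotnet test --filter "TestCategory=Smoke|TestCategory=Minimal".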

I use this approach to great effect in automated testing (integrated into a continuous integration system). I do this by having a script that, depending on its input parameters, decides either to execute just the smoke tests plus the minimal tests, or alternatively the smoke tests, the minimal tests and the comprehensive tests (i.e. all of them).

HTH

bguiz answered Oct 06 '22