Why do I need to know how many tests I will be running with Test::More?

Am I a bad person if I use use Test::More qw(no_plan)?

The Test::More POD says

Before anything else, you need a testing plan. This basically declares how many tests your script is going to run to protect against premature failure...

use Test::More tests => 23;

There are rare cases when you will not know beforehand how many tests your script is going to run. In this case, you can declare that you have no plan. (Try to avoid using this as it weakens your test.)

use Test::More qw(no_plan);

But premature failure can be easily seen when there are no results printed at the end of a test run. It just doesn't seem that helpful.

So I have 3 questions:

  1. What is the reasoning behind requiring a test plan by default?
  2. Has anyone found this a useful and time saving feature in the long run?
  3. Do other test suites for other languages support this kind of thing?
asked Mar 27 '09 by Eric Johnson



2 Answers

What is the reason for requiring a test plan by default?

ysth's answer links to a great discussion of this issue which includes comments by Michael Schwern and Ovid who are the Test::More and Test::Most maintainers respectively. Apparently this comes up every once in a while on the perl-qa list and is a bit of a contentious issue. Here are the highlights:

Reasons to not use a test plan

  1. It's annoying and takes time.
  2. It's rarely worth the time, because a dying test script will be noticed by the test harness in all but a few rare cases.
  3. Test::More can count tests as they happen.
  4. If you use a test plan and need to skip tests, you have the additional pain of wrapping them in a SKIP block.
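For completeness, here is a minimal sketch of the SKIP block mentioned in point 4; the RUN_NET_TESTS environment variable and the "network tests" are hypothetical stand-ins, not from the original post:

```perl
use strict;
use warnings;
use Test::More;

my $ran = 0;

ok( 1, 'always runs' );
$ran++;

SKIP: {
    # Hypothetical gate: only run these when RUN_NET_TESTS is set.
    skip 'network tests disabled', 2 unless $ENV{RUN_NET_TESTS};

    ok( 1, 'network test 1' );
    ok( 1, 'network test 2' );
    $ran += 2;
}

done_testing();
```

When the tests are skipped, Test::More still emits the corresponding "ok ... # skip" lines, so a fixed plan would remain satisfied.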

Reasons to use a test plan

  1. It only takes a few seconds to do. If it takes longer, your test logic is too complex.
  2. If there is an exit(0) in the code somewhere, your test will complete successfully without running the remaining test cases. An observant human may notice the screen output doesn't look right, but in an automated test suite it could go unnoticed.
  3. A developer might accidentally write test logic so that some tests never run.
  4. You can't really have a progress bar without knowing ahead of time how many tests will be run. This is difficult to do through introspection alone.
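To make point 2 concrete, here is a sketch of why a declared plan catches a stray exit(0). It runs a small child test script and inspects its TAP output; the child script is hypothetical, not from the original post:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

my $child = q{
    use Test::More tests => 3;
    ok( 1, 'first' );
    ok( 1, 'second' );
    exit(0);             # stray early exit -- the third test never runs
    ok( 1, 'third' );
};

# Capture the child's TAP output from its stdout.
my $tap = do {
    open my $fh, '-|', $^X, '-e', $child or die "cannot run child: $!";
    local $/;
    <$fh>;
};

# The TAP stream declares "1..3" but contains only two "ok" lines, so
# any TAP consumer (e.g. prove) reports a bad plan. With qw(no_plan),
# Test::More would instead print a trailing "1..2", which matches the
# two tests that ran, and the truncation would pass unnoticed.
```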

The alternative

Test::More and Test::Most have a done_testing() function which should be called at the end of the test script. This is the approach I currently take.

This fixes the problem where code has an exit(0) in it. It doesn't fix the problem of logic which unintentionally skips tests though.
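A minimal done_testing() script might look like the following; the two assertions are just placeholders for illustration:

```perl
use strict;
use warnings;
use Test::More;   # no plan declared up front

my $sum = 1 + 1;

ok( $sum == 2, 'addition works' );
like( 'done_testing', qr/testing/, 'pattern matches' );

# done_testing() emits the trailing plan ("1..2") based on how many
# tests actually ran. An exit(0) before this line would leave the TAP
# stream with no plan at all, and the harness would flag the run.
done_testing();
```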

In short, it's safer to use a plan, but the chances of it actually saving the day are low unless your test suite is complicated (and it should not be complicated).

So using done_testing() is a middle ground. Whichever you prefer, it's probably not a huge deal.

Has this feature been useful to anyone in the real world?

A few people mention that this feature has been useful to them in the real world, including Larry Wall. Michael Schwern says the feature originates with Larry, more than 20 years ago.

Do other languages have this feature?

None of the xUnit-style testing frameworks has a test plan feature, and I haven't come across this feature in any other language's test tooling.

answered Oct 10 '22 by Eric Johnson


I'm not sure what you are really asking, because the documentation extract seems to answer it: I want to know if all my tests ran. However, I don't find that check useful until the test suite stabilizes.

While developing, I use no_plan because I'm constantly adding to the test suite. As things stabilize, I verify the number of tests that should run and update the plan. Some people mention the "test harness" catching that already, but there is no such thing as "the test harness". There's the one that most modules use by default because that's what MakeMaker or Module::Build specify, but the TAP output is independent of any particular TAP consumer.

A couple of people have mentioned situations where the number of tests might vary. In those cases I compute the number however I need to, then use that in the plan. It also helps to have small test files that target very specific functionality, so the number of tests stays low.

 use vars qw( $tests );

 BEGIN {
     $tests = ...; # figure it out

     use Test::More tests => $tests;
 }

You can also separate the count from the loading:

 use Test::More;

 plan tests => $tests;

The latest TAP lets you put the plan at the end too.
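With done_testing() you can also pass the expected count, which gives you a trailing plan plus a count check; the @cases data here is purely illustrative:

```perl
use strict;
use warnings;
use Test::More;

# Illustrative data-driven tests: [ $a, $b, $expected_sum ]
my @cases = ( [ 1, 1, 2 ], [ 2, 2, 4 ], [ 10, -3, 7 ] );

is( $_->[0] + $_->[1], $_->[2], "$_->[0] + $_->[1]" ) for @cases;

# done_testing() with a count emits the trailing plan "1..3" and
# fails the run if a different number of tests actually executed.
done_testing( scalar @cases );
```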

answered Oct 10 '22 by brian d foy