I am writing extensive tests for a new API via Jest and supertest. Prior to running the tests, I am setting up a test database and populating it with users:
jest --forceExit --config src/utils/testing/jest.config.js
module.exports = {
  rootDir: process.cwd(),
  // Sets up the testing database with users
  globalSetup: './src/utils/testing/jest.setup.js',
  // Ensures a database connection for all test suites
  setupTestFrameworkScriptFile: './src/utils/testing/jest.db.js',
}
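For reference, the global setup seeds the database with a few known users. A minimal sketch of what jest.setup.js does (the MongoDB driver and the user shape here are illustrative, not my exact code):

// src/utils/testing/jest.setup.js (illustrative sketch)
const { MongoClient } = require('mongodb');

module.exports = async () => {
  const client = await MongoClient.connect(process.env.TEST_DB_URI);
  const db = client.db('api-test');
  // Start every run from the same known set of users
  await db.collection('users').deleteMany({});
  await db.collection('users').insertMany([
    { email: 'alice@example.com', name: 'Alice' },
    { email: 'bob@example.com', name: 'Bob' },
  ]);
  await client.close();
};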
So I am starting with a database of some users to test on. The problem is this:
Some of my tests rely upon the success of other tests. In this application, users can upload images, and group them into packs. So my grouping endpoint suite depends upon the success of my image upload suite, and so on.
I am well aware that many people might say this is bad practice, and that tests should not rely upon other tests. That being said, I would really rather keep all my tests in supertest and not get into dependency injection, etc. I don't want to have to meticulously set up testing conditions (for example, creating a bunch of user images artificially before running the tests), because: (1) this is just duplication of logic, and (2) it increases the possibility of something breaking.
Is there any way to group Jest suites? For example, to run suites in order:
jest run creationSuite
jest run modificationSuite
This way, all my "creationSuite" tests could be run simultaneously, and success of all would then trigger the "modificationSuite" to run, etc., in a fail-fast manner.
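Conceptually, I imagine chaining separate Jest runs in package.json, so that each stage only runs if the previous one passed (the suite paths here are placeholders):

"scripts": {
  "test:creation": "jest creationSuite",
  "test:modification": "jest modificationSuite",
  "test": "npm run test:creation && npm run test:modification"
}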
Alternatively, being able to declare dependencies on other suites from inside a test suite would be great:
describe('Grouping endpoint', () => {
  // Somehow define dependencies (hypothetical API)
  this.dependsOn(uploadSuite)
  // ...
})
As of today there is the jest-runner-groups package, which lets you tag your test files and execute groups of tests with Jest. Beyond pointing Jest at the runner, there is no config to mess with: just add a docblock with the @group parameter to your test file and then pass --group=yourgroup on the command line.
Jest will execute different test files potentially in parallel, potentially in a different order from run to run. Per file, it will run all describe blocks first and then run tests in sequence, in the order it encountered them while executing the describe blocks.
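A minimal illustration of that per-file ordering (the comments describe when each line executes):

describe('suite', () => {
  console.log('runs first, while Jest collects the tests');

  test('first test', () => {
    console.log('runs afterwards, in declaration order');
  });

  test('second test', () => {
    console.log('runs after the first test');
  });
});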
You can use jest-runner-groups to define and run tests in groups. Once it's installed and added to the Jest configuration, you can tag your tests using docblock notation, like this:
/**
* Foo tests
*
* @group group1/subgroup1
* @group unit/foo
*/
describe( 'Foo class', () => {
...
} );
/**
* Bar tests
*
* @group group1/subgroup2
* @group unit/bar
*/
describe( 'Bar class', () => {
...
} );
Update your jest configuration to specify a new runner:
// jest.config.js
module.exports = {
  // ...
  runner: "groups"
};
Then, to run a specific group, use the --group argument:
// Using the Jest executable
jest --group=mygroup
// Or npm
npm test -- --group=mygroup
You can also use multiple --group arguments to run multiple groups:
// Will execute tests in the unit/bar and unit/foo groups
npm test -- --group=unit/bar --group=unit/foo
// Will execute tests in the unit group (including unit/bar and unit/foo groups)
npm test -- --group=unit
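Applied to the question's scenario, the upload and grouping suites could be tagged with custom group names (the names below are illustrative, not something jest-runner-groups prescribes):

/**
 * Image upload endpoint tests
 *
 * @group creation
 */
describe( 'Upload endpoint', () => {
  ...
} );

Then npm test -- --group=creation && npm test -- --group=modification runs the stages in order and stops as soon as one stage fails.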
I have done it with the --testNamePattern flag. Here is the procedure.
Let's say that you have two groups of tests: DevTest for testing in the development environment, and ProdTest for testing in the production environment. If you want to run only those tests required for the development environment, you have to add DevTest to the test description:
describe('(DevTest): Test in development environment', () => {
// Your test
})
describe('(ProdTest): Test in production environment', () => {
// Your test
})
describe('Test everywhere', () => {
// Your test
})
After that, you can add commands to your package.json file:
"scripts": {
"test": "jest",
"test:prod": "jest --testNamePattern=ProdTest",
"test:dev": "jest --testNamePattern=DevTest",
"test:update": "jest --updateSnapshot"
}
The command npm test will run all of your tests, since it doesn't use the --testNamePattern flag. If you want to run a single group, use npm run test:dev, for example.
Be careful when naming your test groups, though. The group name is matched as a regex against the test description, and you don't want it to accidentally match other words.
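For example, a short group name can match descriptions you never intended (a hypothetical suite):

// jest --testNamePattern=Dev would also run this unrelated suite,
// because the regex "Dev" matches "Device"
describe('Device registration', () => {
  // Your test
})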
Jest test suites are executed in multiple worker processes, and this is one of its main benefits: test runs complete much faster this way, but the test sequence isn't preserved by design. It's possible to disable this behavior with the --runInBand option.
It's possible to pick tests and suites based on their names with the --testNamePattern option, or based on their paths with the --testPathPattern option.
Since one suite depends on another, they could be combined into a single suite, in the order they are expected to run. The parts can still reside in different files (just make sure those files aren't matched by Jest as test files on their own), e.g.:
// foobar.test.js
describe(..., () => {
  require('./foo.partial-test.js');
  require('./bar.partial-test.js');
});
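Note that with Jest's default testMatch, files named like foo.partial-test.js are not collected on their own (they lack a .test.js or .spec.js suffix), so they run only through the require() calls above. If you use a custom testMatch, keep them excluded; a sketch:

// jest.config.js (sketch; only needed with a custom testMatch)
module.exports = {
  // Collect only *.test.js files; *.partial-test.js does not match,
  // so the partial files run solely via foobar.test.js
  testMatch: ['**/*.test.js'],
};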
The problem is this:
Some of my tests rely upon the success of other tests.
This is the real problem here. An approach where one test relies on the state left behind by a previous test is considered flawed in any kind of automated testing.
I don't want to have to meticulously set up testing conditions (for example, creating a bunch of user images artificially before running the tests), because: (1) this is just duplication of logic, and (2) it increases the possibility of something breaking.
There is no need to set up testing conditions (fixtures) artificially. Fixtures can be extracted from an existing environment, even from the results of your current tests, if you're sure about their quality.
Redundancy and tautology naturally occur in automated tests; there's nothing wrong with that. Tests can be made DRYer with proper management of fixtures and shared code.
Quite the contrary: errors accumulate. A test that creates faulty prerequisites may itself pass, while another test that depends on them fails, creating a debugging conundrum.
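As an illustration, a suite can create its own prerequisites through the same public API it is testing, so there is no duplication of internal logic and no dependency on another suite having run first (the endpoints and paths below are illustrative):

const request = require('supertest');
const app = require('../app'); // illustrative path to the Express app

describe('Grouping endpoint', () => {
  let imageIds;

  beforeAll(async () => {
    // Create the images this suite needs via the public upload endpoint,
    // instead of relying on the upload suite having run first
    const uploads = await Promise.all([
      request(app).post('/images').attach('image', 'fixtures/one.png'),
      request(app).post('/images').attach('image', 'fixtures/two.png'),
    ]);
    imageIds = uploads.map((res) => res.body.id);
  });

  it('groups uploaded images into a pack', async () => {
    const res = await request(app).post('/packs').send({ images: imageIds });
    expect(res.status).toBe(201);
  });
});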