I have a test suite using testthat containing several files in tests/testthat, and I would like to run them in parallel to speed up testing. Are there any methods implemented in devtools, testthat, or elsewhere towards this end?

I tried doing it "manually" using the future package, but the rendering of stdout is unreadable:
# Get a vector of test file names without the "test-" prefix and ".R" suffix
test_files <- list.files("tests/testthat", "test-")
test_filters <- stringr::str_replace_all(test_files, "test-|\\.R", "")

# Run the tests for each file in parallel
future::plan(future::multiprocess)
future.apply::future_mapply(devtools::test, filter = test_filters)
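For reference, the readability problem is the interleaving of worker output; one pattern that avoids it is to capture each task's output and print the blocks afterwards. A minimal sketch of that pattern, assuming the future and future.apply packages are installed (the task labels are illustrative):

```r
# Sketch: each parallel task captures its own output, and we print the
# captured blocks sequentially afterwards so lines do not interleave.
future::plan(future::multisession, workers = 2)
out <- future.apply::future_lapply(c("first", "second"), function(id) {
  utils::capture.output(cat("output from task", id, "\n"))
})
future::plan(future::sequential)  # restore the default plan
cat(unlist(out), sep = "\n")
```

Wrapping the same capture around devtools::test(filter = ...) would keep each file's log contiguous, though I have not verified how well its console output is captured this way.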
With the caveat that I am a long-time user of RUnit who more recently switched to tinytest: the feature you are looking for already exists in tinytest. Someone may well build a parallel test runner for testthat at some point, but in the here and now we do have tinytest, with very fine behavior, good documentation, and guidance for converting from RUnit or testthat.

My favourite features of tinytest are the default installation of the tests with the package, the lack of other dependencies, and the parallel runner.
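For concreteness, a tinytest file is just an R script of expectations; a minimal, purely illustrative example that would live under inst/tinytest/ in a package:

```r
# inst/tinytest/test_example.R -- an illustrative tinytest file.
# tinytest files are plain R scripts calling expect_*() functions;
# because they are installed with the package, any user can re-run them.
library(tinytest)

expect_equal(1 + 1, 2)                       # exact equality
expect_true(is.numeric(pi))                  # logical condition
expect_error(stop("boom"), pattern = "boom") # error with matching message
```

A directory of such files can be run with tinytest::run_test_dir(), or for an installed package with tinytest::test_package().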
And another caveat coming: I like the command line for this much more than an R prompt, because there may always be side-effects of some form. So I added a little test-runner wrapper tt.r to littler:
edd@rob:~$ tt.r -h
Usage: tt.r [-h] [-x] [-a] [-b] [-d] [-f] [-n NCPUS] [-p] [-s] [-z] [ARG...]
-a --all use test_all mode [default: FALSE]
-b --build use build-install-test mode [default: FALSE]
-d --directory use directory mode [default: FALSE]
-f --file use file mode [default: FALSE]
-n --ncpus NCPUS use 'ncpus' in parallel [default: getOption]
-p --package use package mode [default: FALSE]
-s --silent use silent and do not print result [default: FALSE]
-z --effects suppress side effects [default: FALSE]
-h --help show this help text
-x --usage show help and short example usage
edd@rob:~$
(I should add here that writing such a wrapper is easy thanks to docopt.)
And then we simply do
edd@rob:~$ tt.r -n 4 -p anytime
starting worker pid=642068 on localhost:11092 at 17:11:25.636
starting worker pid=642067 on localhost:11092 at 17:11:25.654
starting worker pid=642065 on localhost:11092 at 17:11:25.687
starting worker pid=642066 on localhost:11092 at 17:11:25.689
Running test_gh_issue_12.R............ 2 tests OK
Running test_gh_issue_56.R............ 7 tests OK
Running test_gh_issue_33.R............ 2 tests OK
Running test_all_formats.R............ 0 tests ris or Windows or Release
Running test_assertions.R............. 2 tests OK
Running test_calc_unique.R............ 4 tests OK
Running test_gh_issue_100.R........... 2 tests OK
Running test_simple.R................. 34 tests OK
Running test_utilities.R.............. 2 tests OK
Running test_bulk.R................... 2328 tests OK
[1] "All ok, 2383 results"
edd@rob:~$
You can see that a little bit of the output got swallowed there.
You can of course also run this by hand from R:
R> tinytest::test_package("anytime", ncpu=4)
starting worker pid=651865 on localhost:11762 at 17:14:45.970
starting worker pid=651864 on localhost:11762 at 17:14:45.980
starting worker pid=651863 on localhost:11762 at 17:14:45.980
starting worker pid=651862 on localhost:11762 at 17:14:45.984
Running test_gh_issue_12.R............ 2 tests
Exited 'test_all_formats.R' at line 24. Skipping Solaris or Windows or ReleaseOK
Running test_all_formats.R............ 0 tests
Running test_gh_issue_56.R............ 7 tests OK
Running test_assertions.R............. 2 tests OK
Running test_gh_issue_33.R............ 2 tests OK
Running test_calc_unique.R............ 4 tests OK
Running test_gh_issue_100.R........... 2 tests OK
Running test_simple.R................. 34 tests OK
Running test_utilities.R.............. 2 tests OK
Running test_bulk.R................... 2328 tests OK
[1] "All ok, 2383 results"
R>
There are other runners for file mode, directory mode, a build+install+test cycle, and more. And hey, if after all of this you still don't like it, Mark will give you your money back :)
PS: Here, and e.g. in Rcpp, I have some tests "dimmed" because they produce an ungodly amount of command-line noise, so they only run in package tests when an opt-in variable is set. Hence the few 'zero tests run' above. That is my setup and not a tinytest issue.
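Such "dimming" can be done with a small opt-in guard at the top of a test file. A sketch, with a hypothetical environment variable name (not the one Rcpp actually uses):

```r
# Illustrative opt-in guard for a noisy tinytest file: the expectations
# only run when the (hypothetical) environment variable is set, so
# routine package checks stay quiet.
library(tinytest)
if (Sys.getenv("RUN_NOISY_TESTS") == "yes") {
  expect_true(TRUE)  # noisy expectations would go here
}
```

tinytest also provides exit_file() for abandoning a test file early, and at_home() for distinguishing development runs from R CMD check.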
The new testthat >= 3.0 introduces parallel tests. You just need to add the following to your DESCRIPTION file:
Config/testthat/parallel: true
Config/testthat/edition: 3
and devtools::test() should work in parallel. See the parallel vignette for more information.