I've written a Python module, much of which is wrapped in @numba.jit decorators for speed. I've also written lots of tests for this module, which I run (on Travis-CI) with py.test. Now I'm trying to look at the coverage of these tests using pytest-cov, which is just a plugin that relies on coverage (with hopes of integrating all of this with coveralls).

Unfortunately, it seems that using numba.jit on all those functions makes coverage think that the functions are never used -- which is more or less the case. So I'm getting basically no reported coverage from my tests. This isn't a huge surprise, since numba takes that code and compiles it, so the Python code itself really never runs. But I was hoping there'd be some of that magic you sometimes see with Python...
Is there any useful way to combine these two excellent tools? Failing that, is there any other tool I could use to measure coverage with numba?
(I've made a minimal working example showing the difference here.)
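For concreteness, a hypothetical setup along these lines reproduces the problem (the module and function names below are made up for illustration; they are not taken from the linked example):

    # mymodule.py (hypothetical example)
    import numba

    @numba.jit(nopython=True)
    def add(a, b):
        return a + b

    # test_mymodule.py
    from mymodule import add

    def test_add():
        assert add(1, 2) == 3

Running py.test --cov=mymodule then reports the body of add as never executed: the dispatcher compiles and runs native code, so the Python lines inside the function are never seen by coverage's trace function.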
The best thing might be to disable the numba JIT during coverage measurement. That relies on you trusting the correspondence between the Python code and the JIT'ed code, but you need to trust that to some extent anyway.
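One concrete way to do that (a minimal sketch, assuming a numba version that honors the NUMBA_DISABLE_JIT environment variable) is to switch the JIT off from conftest.py, before the jitted module is imported:

    # conftest.py -- minimal sketch; assumes numba honors NUMBA_DISABLE_JIT
    import os

    # Must be set before the jitted functions are compiled, so that
    # @numba.jit leaves the plain Python functions in place and
    # coverage can trace them as usual.
    os.environ["NUMBA_DISABLE_JIT"] = "1"

Equivalently, you can set the variable only for the coverage run, e.g. NUMBA_DISABLE_JIT=1 py.test --cov=mymodule, so that normal test runs still exercise the compiled code.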
Not that this answers the question, but I thought I should advertise another approach that someone might be interested in working on. There's probably something really beautiful that could be done using llvm-cov. Presumably this would have to be implemented within numba, and the LLVM code would have to be instrumented, which would require some flag somewhere. But since numba knows the correspondence between lines of Python code and LLVM code, there must be something that could be implemented by somebody more clever than I am.