 

Test-driven development for signal processing libraries

I work with audio manipulation, generally using Matlab for prototyping, and C++ for implementation. Recently, I have been reading up on TDD. I have looked over a few basic examples and am quite enthusiastic about the paradigm.

At the moment, I use what I would consider a global 'test-assisted' approach. For this, I write signal processing blocks in C++, and then I make a simple Matlab mex file that can interface with my classes. I subsequently add functionality, checking that the results match up with an equivalent Matlab script as I go. This works OK, but the tests become obsolete quickly as the system evolves. Furthermore, I am testing the whole system, not just individual units.

It would be nice to use an established TDD framework where I can have a test suite, but I don't see how I can validate the functionality of the processing blocks without tests that are as complex as the code under test. How would I generate the reference signals in a C++ test to validate a processing block without the test being a form of self-fulfilling prophecy?

If anyone has experience in this area, or can suggest some methodologies that I could read into, then that would be great.

asked May 29 '12 by learnvst



2 Answers

I think it's great to apply the TDD approach to signal processing (it would have saved me months of time had I known about it years ago, when I was doing signal processing myself). I think the key is to break down your system into the lowest-level components that can be independently tested, e.g.:

  • FFTs: test signals at known frequencies (DC, Fs/Nfft, Fs/2) and different phases etc. Check that the peaks and phases are where you expect them, and that the normalisation constant is as you expect
  • peak picking: test that you correctly find maxima/minima
  • Filters: generate input at known frequencies and check that the output amplitude and phase are as expected.

You are unlikely to get exactly the same results from C++ and Matlab, so you'll have to supply error bounds on some of the tests. TDD is a great way of verifying the correctness of the code you have, and it is also really useful when trying out different implementations. For example, if you want to replace one FFT implementation with another, there are often slight differences in the way the data is packed, or in the normalisation constant that is used. TDD will give you a high degree of confidence that the new library is correctly integrated.

answered Sep 19 '22 by the_mandrill



I do something similar for heuristics detection: we have loads and loads of capture files and a framework to load and inject them for testing. Could you capture the reference signals in a file and do the same?

As for my 2 cents regarding TDD: it's a great way to develop, but as with most paradigms, you don't always have to follow it to the letter. There are times when you should know how to bend the rules a bit, so as not to write too much throw-away code and too many throw-away tests. I have read about one approach that says absolutely no code should be written until a test has been written, which at times can be way too strict.

On the other hand, I always like to say: "If it's not tested, it's broken" :)

answered Sep 20 '22 by Brady
