I'm working on a C++ library that (among other things) has functions to read config files, and I want to add tests for this. So far, this has led me to create lots of valid and invalid config files, each with only a few lines that test one specific piece of functionality. But it has now become very unwieldy, as there are so many files, plus lots of small C++ test apps. Somehow this seems wrong to me :-) so do you have hints on how to organise all these tests, the test apps, and the test data?
Note: the library's public API itself is not easily testable (it requires a config file as a parameter). The juicy, bug-prone methods that actually read and interpret config values are private, so I don't see a way to test them directly.
So: would you stick with testing against real files; and if so, how would you organise all these files and apps so that they are still maintainable?
Perhaps the library could accept some kind of stream input, so you could pass in a string-like object and avoid all the input files? (There's a sketch of this after the layout below.) Or, depending on the type of configuration, you could provide get/setAttribute() functions to directly and publicly fiddle with the parameters. If that is not really a design goal, then never mind. Data-driven unit tests are frowned upon in some places, but they are definitely better than nothing! I would probably lay out the code like this:
project/
    src/
    tests/
        test1/
            input/
        test2/
            input/
In each testN directory you would have a .cpp file associated with the config files in its input directory.
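To make the stream idea above concrete: if the reader gained an overload that accepts a std::istream, a test could build its input inline and skip the file entirely. Here's a minimal sketch, assuming a made-up ConfigReader class and a simple key=value format; adapt it to whatever your parser actually looks like:

    #include <fstream>
    #include <istream>
    #include <map>
    #include <sstream>
    #include <string>

    // Made-up ConfigReader with a stream-based overload next to the usual
    // file-based one; the key=value parsing is only a placeholder.
    class ConfigReader {
    public:
        explicit ConfigReader(const std::string& fileName) {
            std::ifstream in(fileName);
            parse(in);
        }

        explicit ConfigReader(std::istream& in) { parse(in); }  // the new overload

        std::string value(const std::string& key) const {
            const auto it = values_.find(key);
            return it == values_.end() ? std::string() : it->second;
        }

    private:
        void parse(std::istream& in) {
            for (std::string line; std::getline(in, line); ) {
                const auto eq = line.find('=');
                if (eq != std::string::npos)
                    values_[line.substr(0, eq)] = line.substr(eq + 1);
            }
        }

        std::map<std::string, std::string> values_;
    };

    // A test can now feed the parser from a string, with no file on disk:
    //     std::istringstream cfg("timeout=30\nloglevel=debug\n");
    //     ConfigReader reader(cfg);

With something like that in place, only the tests that genuinely exercise file handling still need entries under input/.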
Assuming you are using an xUnit-style test library (CppUnit, googletest, UnitTest++, or whatever), you can add various testXXX() functions to a single class to test associated groups of functionality. That way you could cut out part of the lots-of-little-programs problem by grouping at least some tests together.
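For example, with googletest, several small config cases can live in one suite and one test binary instead of one little app each. The loadConfig() helper, the file names, and their expected contents below are all invented for illustration; in your tests it would be the library's real entry point:

    #include <gtest/gtest.h>
    #include <fstream>
    #include <map>
    #include <stdexcept>
    #include <string>

    // Minimal stand-in for the library's loader so the sketch is self-contained;
    // in a real test this would be the library's public entry point instead.
    std::map<std::string, std::string> loadConfig(const std::string& path) {
        std::ifstream in(path);
        if (!in) throw std::runtime_error("cannot open " + path);
        std::map<std::string, std::string> cfg;
        for (std::string line; std::getline(in, line); ) {
            const auto eq = line.find('=');
            if (eq == std::string::npos) throw std::runtime_error("bad line: " + line);
            cfg[line.substr(0, eq)] = line.substr(eq + 1);
        }
        return cfg;
    }

    // Several related cases grouped in one suite -- each test still points at
    // its own small file under the test's input/ directory.
    TEST(ConfigReading, ParsesTimeout) {
        EXPECT_EQ(loadConfig("input/timeout.cfg").at("timeout"), "30");
    }

    TEST(ConfigReading, ParsesLogLevel) {
        EXPECT_EQ(loadConfig("input/loglevel.cfg").at("loglevel"), "debug");
    }

    TEST(ConfigReading, RejectsMalformedLine) {
        EXPECT_THROW(loadConfig("input/malformed.cfg"), std::runtime_error);
    }

    TEST(ConfigReading, RejectsMissingFile) {
        EXPECT_THROW(loadConfig("input/no_such_file.cfg"), std::runtime_error);
    }

Link this against gtest_main (or supply your own main()) and you get one executable that reports every config case individually.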
The only problem with this is if the library expects the config file to have a specific name or to be in a specific place. That shouldn't be the case, but if it is, it can be worked around by copying your test file to the expected location.
And don't worry about lots of tests cluttering your project up; if they are tucked away in a tests directory, they won't bother anyone.
Part 1
As Richard suggested, I'd take a look at the CPPUnit test framework. That will drive the structure and location of your test code to a certain extent.
Your tests could be in a parallel directory located at a high level, as per Richard's example, or in test subdirectories or test directories parallel with the area you want to test.
Either way, please be consistent in the directory structure across the project! That matters especially when the tests are contained in a single high-level directory.
There's nothing worse than having to maintain a mental mapping of source code in a location such as:
/project/src/component_a/piece_2/this_bit
and having the test(s) located somewhere such as:
/project/test/the_first_components/connection_tests/test_a
And I've worked on projects where someone did that!
What a waste of wetware cycles! 8-O Talk about violating Alexander's concept of the Quality Without a Name.
Much better is having your tests consistently located w.r.t. the location of the source code under test:
/project/test/component_a/piece_2/this_bit/test_a
Part 2
As for the API config files, make local copies of a reference config in each local test area as part of the test environment setup that is run before executing a test. Don't sprinkle copies of configs (or data) all through your test tree.
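Here's a minimal sketch of that setup step using CppUnit's setUp()/tearDown() and C++17's <filesystem>. Every path and name below is invented, so point them at your real reference config and test area:

    #include <cppunit/TestAssert.h>
    #include <cppunit/TestFixture.h>
    #include <cppunit/extensions/HelperMacros.h>
    #include <filesystem>

    // Sketch of the "copy the reference config into the local test area" step.
    // The fixture name and every path are invented examples.
    class ComponentAConfigTest : public CppUnit::TestFixture {
        CPPUNIT_TEST_SUITE(ComponentAConfigTest);
        CPPUNIT_TEST(testLoadsDefaults);
        CPPUNIT_TEST_SUITE_END();

    public:
        void setUp() override {
            // Runs before each test: place a fresh copy of the shared reference
            // config where this test (and the code under test) expects it.
            std::filesystem::copy_file(
                "test/reference/default.cfg",
                "test/component_a/piece_2/this_bit/test_a/app.cfg",
                std::filesystem::copy_options::overwrite_existing);
        }

        void tearDown() override {
            // Clean up afterwards so tests stay independent of each other.
            std::filesystem::remove("test/component_a/piece_2/this_bit/test_a/app.cfg");
        }

        void testLoadsDefaults() {
            // ... exercise the library against the freshly copied app.cfg ...
            CPPUNIT_ASSERT(std::filesystem::exists(
                "test/component_a/piece_2/this_bit/test_a/app.cfg"));
        }
    };

    CPPUNIT_TEST_SUITE_REGISTRATION(ComponentAConfigTest);

You still need the usual one-off main() with a CppUnit test runner, but copying on setup keeps the reference config in exactly one place.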
HTH.
cheers,
Rob
BTW, really glad to see you asking this now, while you're setting things up!