I am involved with a project which must, among other things, control various laboratory instruments (robots, readers, etc.).
Most of these instruments are controlled either through DCOM-based drivers, through the serial port, or by launching proprietary programs with various arguments. Some of these programs or drivers include a simulation mode; some don't. Obviously, my development computer cannot be connected to all of the instruments, and while I can fire up virtual machines for the instruments whose drivers include a simulation mode, some things cannot be tested without the actual instrument.
Now, my own code is mostly not about the actual operations on the instruments, but about starting operations, making sure everything is fine, and synchronising across all of them. It is written in Java, using various libraries to interface with the instruments and their drivers.
I want to write unit tests for the various instrument control modules. However, because the instruments can fail in many ways (some of which are documented, some of which aren't), and because my code depends on these partially random outputs, I am a bit lost as to how to write unit tests for these parts of my code. I have considered the following solutions:

- Testing with the actual instruments connected, on a dedicated machine.
- Mocking the instruments and their drivers, and simulating their responses (including failure modes) in code.
While I am currently thinking of going with the latter, am I missing something? Is there a better way to do this?
Your two bullet points are both valid options, but they represent two different kinds of testing.
At a very high level, using Mock objects (per your second bullet point) is great for Unit Testing -- which is simply testing your code (the System Under Test, or SUT), and nothing extraneous to it. Any other dependencies are Mocked out. You can then write test cases that throw as many different error conditions as you can think of (as well as testing the "happy path," of course). The fact that your domain of error conditions is undocumented is unfortunate, and something you should work to narrow down as best you can. Every time you run into a new error condition with the actual external device, figure out how to reproduce it via code, and then write a new unit test that recreates that condition through your mock framework.
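For illustration, here is a minimal sketch of that idea, assuming JUnit 5 and Mockito. The `InstrumentDriver` interface, `InstrumentException`, and `RunCoordinator` are hypothetical stand-ins for whatever your driver wrappers and coordination code actually look like; the point is only that a mocked driver lets you recreate a documented (or newly discovered) failure on demand.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.mock;

import org.junit.jupiter.api.Test;

// Hypothetical abstraction over one instrument driver (DCOM, serial, external program...).
interface InstrumentDriver {
    void startRun(String protocolId) throws InstrumentException;
}

// Hypothetical exception representing a driver-level failure.
class InstrumentException extends Exception {
    InstrumentException(String message) { super(message); }
}

// Hypothetical SUT: the coordination code that starts runs and reacts to failures.
class RunCoordinator {
    private final InstrumentDriver driver;

    RunCoordinator(InstrumentDriver driver) { this.driver = driver; }

    String startOrReport(String protocolId) {
        try {
            driver.startRun(protocolId);
            return "STARTED";
        } catch (InstrumentException e) {
            return "FAILED: " + e.getMessage();
        }
    }
}

class RunCoordinatorTest {
    @Test
    void reportsFailureWhenDriverThrows() throws Exception {
        InstrumentDriver driver = mock(InstrumentDriver.class);
        // Recreate a known error condition without the physical instrument attached.
        doThrow(new InstrumentException("arm blocked")).when(driver).startRun("wash-cycle");

        RunCoordinator coordinator = new RunCoordinator(driver);

        assertEquals("FAILED: arm blocked", coordinator.startOrReport("wash-cycle"));
    }
}
```

Each new failure you encounter in the lab becomes one more stubbed exception and one more test like this, so the condition stays covered forever after.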
Further, testing with the actual instruments connected (per your first bullet point) is great for Integration Testing -- which tests your code alongside the actual external dependencies.
In general, Unit Testing should be quick (ideally, under 10 minutes to compile your code and run your entire unit test suite). This means that you'll get feedback quickly from your unit tests, should any new code you've written cause any tests to fail. Integration Testing, by its nature, can take longer (if, for example, one of your external devices takes 1 minute to compute a result or perform a task, and you have 15 different sets of inputs you're testing, that's 15 minutes right there for one small suite of tests).

Your CI server (you should have one of those that automatically compiles and runs all tests) should automatically be triggered upon commit to your source control repository. It should compile and run the unit tests as one step. After that part is done, it should provide you feedback (good or bad), and then if the unit tests all pass, it should automatically kick off your integration tests. This assumes that there is either an actual device connected to your CI server, or a suitable replacement (whatever that means in your particular environment).
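One common way to keep the two suites separable in the build, assuming JUnit 5, is to tag the slow, device-dependent tests so the CI server can run them in a later stage, after the fast unit tests have passed. The class and tag name below are illustrative, and how the tag is filtered (Maven Surefire/Failsafe, a Gradle test task, etc.) depends on your build setup.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Hypothetical integration test that would talk to a real (or simulated) instrument.
// The "integration" tag lets the build exclude it from the fast unit-test run and
// execute it only in the CI stage that actually has access to the device.
@Tag("integration")
class PlateReaderIntegrationTest {

    @Test
    void readerReturnsAMeasurement() {
        // Placeholder for a call through the real driver; in a unit test this
        // dependency would be mocked instead of exercised for real.
        boolean measurementReceived = true; // stand-in for the real driver call
        assertTrue(measurementReceived);
    }
}
```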
Hope that helps.
If you're using mocks, you can substitute different mocks to make the system behave in different, predetermined ways. In other words, your tests will be consistent and repeatable. That's valuable, since running tests against a randomly behaving system is not going to give you any sense of security: each run can (and will) execute a different code path.
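As a sketch of that determinism, again assuming Mockito and a hypothetical `status()` method on the driver: a mock can be scripted with a fixed sequence of responses, so the test exercises exactly the same "failure" on every run instead of whatever the instrument happens to do that day.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class DeterministicMockTest {

    // Hypothetical driver interface with a polled status.
    interface InstrumentDriver {
        String status();
    }

    @Test
    void statusSequenceIsTheSameOnEveryRun() {
        InstrumentDriver driver = mock(InstrumentDriver.class);
        // Script a fixed sequence: busy, busy, then an error -- no randomness involved.
        when(driver.status()).thenReturn("BUSY", "BUSY", "ERROR");

        assertEquals("BUSY", driver.status());
        assertEquals("BUSY", driver.status());
        assertEquals("ERROR", driver.status());
        // Further calls keep returning the last stubbed value ("ERROR").
        assertEquals("ERROR", driver.status());
    }
}
```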
Since you don't know all the failure scenarios in advance, I think there are two (non-exclusive) approaches: