A few days ago I started looking into a unit test framework called Check, which I intend to use to test C code under Linux.
Check, together with some well-designed code and some test code, can help me verify that the basic functionality is correct: it is quite easy to look at the values going in and the responses coming back and decide whether a function is correct or not.
But let's say I want to test a dynamic memory structure that does a lot of malloc and free, and it turns out that I can put data in and get the correct data back out again. That still does not prove that I haven't broken some memory in the process; say I forgot to free half of the memory and lost the pointers (a classic memory leak). That code would probably pass most unit tests.
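To make the scenario concrete, here is a hedged sketch (the linked list, the function names, and the test are all invented for illustration): the push/read-back behaviour is correct, so the Check assertions pass, yet the cleanup function frees only the head node and leaks the rest.

    #include <check.h>
    #include <stdlib.h>

    /* Hypothetical linked list, invented for illustration only. */
    struct node { int value; struct node *next; };

    static struct node *push(struct node *head, int value)
    {
        struct node *n = malloc(sizeof *n);
        n->value = value;
        n->next = head;
        return n;
    }

    /* Buggy cleanup: only the head node is freed, the rest leaks. */
    static void destroy(struct node *head)
    {
        free(head);
    }

    /* Data in, correct data out: this test passes even though destroy() leaks. */
    START_TEST(test_push_and_read_back)
    {
        struct node *head = NULL;
        head = push(head, 1);
        head = push(head, 2);
        ck_assert_int_eq(head->value, 2);
        ck_assert_int_eq(head->next->value, 1);
        destroy(head);
    }
    END_TEST

    int main(void)
    {
        Suite *s = suite_create("list");
        TCase *tc = tcase_create("core");
        tcase_add_test(tc, test_push_and_read_back);
        suite_add_tcase(s, tc);

        SRunner *sr = srunner_create(s);
        srunner_run_all(sr, CK_NORMAL);
        int failed = srunner_ntests_failed(sr);
        srunner_free(sr);
        return failed == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
    }

Run normally, this reports the test as passing; only a tool that tracks allocations would notice the lost nodes.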
So now for the question: is it a good idea to run the entire unit test suite under something like Valgrind and let it detect any malloc/free problems? (Or maybe compile in something like Electric Fence?)
It feels like a good idea, but I'm not sure what I'm getting myself into here...
Thanks, Johan
Update: Thanks Douglas and Jonathan; it seems like this is a good idea and something I should continue with :-)
Update: Valgrind is a fun tool. However, the first memory leaks I found doing this were in the test framework, not in my own code (quite funny, though). So a tip for everyone else: verify that the unit test framework you are using is not leaking before you turn your own code upside down. An empty test case was all that was needed in my case, since then nothing but the unit test framework itself is running.
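For reference, a baseline like the following is all that is needed (a hedged sketch; the suite and test names are arbitrary). Anything Valgrind reports for this binary comes from the framework or the C library rather than from your own code, and can be dealt with (for example via a suppressions file) before you start chasing leaks in the real tests. Disabling Check's per-test forking, e.g. with srunner_set_fork_status(sr, CK_NOFORK), tends to make the Valgrind output easier to read, since everything stays in one process.

    #include <check.h>
    #include <stdlib.h>

    /* Intentionally empty: nothing but the framework itself runs. */
    START_TEST(test_empty)
    {
    }
    END_TEST

    int main(void)
    {
        Suite *s = suite_create("baseline");
        TCase *tc = tcase_create("empty");
        tcase_add_test(tc, test_empty);
        suite_add_tcase(s, tc);

        SRunner *sr = srunner_create(s);
        srunner_set_fork_status(sr, CK_NOFORK);  /* keep everything in one process */
        srunner_run_all(sr, CK_NORMAL);
        int failed = srunner_ntests_failed(sr);
        srunner_free(sr);
        return failed == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
    }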
We certainly do - it's much easier to run valgrind against the unit tests than against the full program.
Also, any memory errors are localised to the area of code the unit test is exercising, which makes them easier to fix.
Plus, checking that you've fixed them is easier, because you're re-running the unit test rather than a more complicated test against your full program.
If you're running valgrind in an automated fashion, you probably want --error-exitcode=<number> [default: 0]:
Specifies an alternative exit code to return if Valgrind reported any errors in the run. When set to the default value (zero), the return value from Valgrind will always be the return value of the process being simulated. When set to a nonzero value, that value is returned instead, if Valgrind detects any errors. This is useful for using Valgrind as part of an automated test suite, since it makes it easy to detect test cases for which Valgrind has reported errors, just by inspecting return codes.
http://valgrind.org/docs/manual/manual-core.html#manual-core.erropts
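As an illustration only (the test binary name is a placeholder), an automated run could then look something like this, with the build script treating any non-zero exit status as a failure:

    valgrind --error-exitcode=1 --leak-check=full ./unit_tests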
As Douglas Leeder said, it is well worth running your unit tests under any diagnostic software you can lay your hands on that will ensure the code really does work as you expect. That includes not abusing memory, so using valgrind is a good idea.
You really want your unit tests to prove that your code works.
You don't have to run them under valgrind all the time - but it should be as trivial as possible to do so, and you should do so periodically (say after big changes).