We have an application with hundreds of possible user actions, and we are thinking about how to improve our memory-leak testing.
Currently, here's how it works: while manually testing the software, if the application appears to consume too much memory, we use a memory tool, find the cause, and fix it. It's a rather slow and inefficient process: problems are discovered late, and it relies on the goodwill of a single developer.
How can we improve that?
Which language?
I'd use a tool such as Valgrind, try to fully exercise the program and see what it reports.
first line of defense:
second line of defense:
If you work with an unmanaged language (like C/C++), you can efficiently discover most memory leaks by hijacking the memory-management functions. For example, you can track all memory allocations and deallocations.
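As a rough illustration of that idea, here is a minimal sketch (my own hypothetical code, not anything from this answer) that overrides the global C++ allocation functions, counts allocations and deallocations, and prints the balance at exit. A real tracker would also record call sites and sizes; this only shows the hijacking mechanism.

```cpp
#include <atomic>
#include <cstdio>
#include <cstdlib>
#include <new>

static std::atomic<long> g_allocs{0};
static std::atomic<long> g_frees{0};

// Replace the global allocation functions so every new/delete is counted.
void* operator new(std::size_t size) {
    void* p = std::malloc(size);
    if (!p) throw std::bad_alloc();
    ++g_allocs;
    return p;
}

void* operator new[](std::size_t size) {
    return operator new(size);          // reuse the counting version
}

void operator delete(void* p) noexcept {
    if (p) { ++g_frees; std::free(p); }
}

void operator delete[](void* p) noexcept {
    operator delete(p);
}

// Print the balance when the program exits: a non-zero difference after a
// test scenario is a strong hint of a leak.
struct LeakReport {
    ~LeakReport() {
        std::fprintf(stderr, "allocations: %ld, deallocations: %ld, leaked blocks: %ld\n",
                     g_allocs.load(), g_frees.load(),
                     g_allocs.load() - g_frees.load());
    }
} g_leak_report;

int main() {
    int* kept  = new int[100];  // deliberately never deleted
    int* freed = new int;
    delete freed;
    (void)kept;
    return 0;                   // LeakReport prints: leaked blocks: 1
}
```

Build your test scenarios against this kind of instrumented build, and the report at exit tells you whether the scenario leaked without anyone having to notice suspicious memory usage by hand.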
It seems to me that the core of the problem is not so much finding memory leaks as knowing when to test for them. You say you have lots of user actions, but you don't say what sequences of user actions are meaningful. If you can generate meaningful sequences at random, I'd argue hard for random testing. On random tests you would measure
- code coverage (gcov or valgrind)
- memory usage (valgrind)
- coverage of user actions

By "coverage of user actions" I mean statements like the following: for every pair of user actions A and B where B can meaningfully follow A, the test suite contains a sequence in which A is immediately followed by B.
If that's not true, then you can ask for what fraction of pairs A and B it is true.
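Here is a small hypothetical sketch of what such a random-testing harness and its pair-coverage measurement could look like; the action names, the canFollow() rule, and the runAction() stub are all invented placeholders for whatever actually drives your application.

```cpp
#include <cstdio>
#include <random>
#include <set>
#include <string>
#include <utility>
#include <vector>

// Hypothetical list of user actions; a real harness would enumerate the
// application's actual actions.
static const std::vector<std::string> kActions = {
    "open_document", "edit", "undo", "save", "close_document"};

// Placeholder for the application's real rules about which action may
// meaningfully follow which.
static bool canFollow(int a, int b) {
    // Example rule: "edit" may not directly follow "close_document".
    return !(kActions[a] == "close_document" && kActions[b] == "edit");
}

// Stub: in a real harness this would drive the application, with the whole
// run watched by valgrind or similar instrumentation.
static void runAction(int /*action*/) {}

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> pick(0, (int)kActions.size() - 1);

    std::set<std::pair<int, int>> exercised;  // (A, B) pairs actually run
    int prev = pick(rng);
    runAction(prev);

    for (int step = 0; step < 1000; ++step) {
        int next = pick(rng);
        if (!canFollow(prev, next)) continue;  // keep the sequence meaningful
        runAction(next);
        exercised.insert({prev, next});
        prev = next;
    }

    // How many allowed pairs exist, and how many did the random run hit?
    int allowed = 0;
    for (int a = 0; a < (int)kActions.size(); ++a)
        for (int b = 0; b < (int)kActions.size(); ++b)
            if (canFollow(a, b)) ++allowed;

    std::printf("pair coverage: %zu of %d allowed pairs\n",
                exercised.size(), allowed);
    return 0;
}
```

The reported fraction is exactly the "for what fraction of pairs A and B it is true" measurement: if it stays low, the random generator or the transition rules need work before the memory results mean much.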
If you have the CPU cycles to afford it, you would probably also benefit from running valgrind or another memory-checking tool either before every commit to your source-code repository or during a nightly build.
Automate!