Does anyone use unit tests to verify the time/space complexity of code?
That's a good question to bring up. Yes, you can use unit tests for this.
A unit test is mainly a way of testing the outcome of your code: you test that it does what it is supposed to do, and that it fails when you expect it to fail.
Time and space are two highly important variables. If you want fast speed and low memory cost but the program actually does the opposite, then you have a bug, and that is what unit tests are for: finding bugs so you can fix them.
Here is some pseudocode for a unit test of time consumption; you may already know how to do this, but it is a fairly clean way of testing it:
Unit_Test_To_See_If_X_Takes_More_Than_Y_Millis(int max_millis)
{
    // Record the clock, run the work under test, and compare
    // the elapsed time against the allowed budget.
    int start_millis = getMillis();
    do_operations_on_objects_and_functions();
    int end_millis = getMillis();
    int elapsed_millis = end_millis - start_millis;
    if (elapsed_millis > max_millis)
        Assert(ERROR);
}
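In a real test framework this might look like the following JUnit 5 sketch. The operation under test and the 500 ms budget are hypothetical placeholders, and since wall-clock timings vary by machine and load, the budget needs a generous margin:

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class TimingTest {

    // Hypothetical stand-in for the code whose running time we care about.
    private void doOperationsOnObjectsAndFunctions() {
        // ... work under test ...
    }

    @Test
    void operationsFinishWithinBudget() {
        long maxMillis = 500; // assumed time budget; tune for your hardware

        long start = System.currentTimeMillis();
        doOperationsOnObjectsAndFunctions();
        long elapsed = System.currentTimeMillis() - start;

        assertTrue(elapsed <= maxMillis,
                "took " + elapsed + " ms, budget was " + maxMillis + " ms");
    }
}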
Also, when you think about it, can you have too many tests? No, you can't. It's good to test for all outcomes, even if you test for "stupid" things. If you don't test for an outcome and a bug develops there, does the bug not exist just because you didn't see it or didn't test for it? :)
A possible solution would be to test Filip's function with 1 iteration, then 100, then 1,000, then 10,000, and so on, and plot the results on a graph to determine the time complexity. Or use maths to derive the growth factor between consecutive runs. I'm not sure how you'd test space, however.
I don't think the output would be a simple pass or fail unless there's an existing algorithm to derive the dominant term (N^2, etc.) and compare against it.
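As a rough sketch of that idea (the deliberately O(n^2) workload below is a hypothetical stand-in for Filip's function): time the code at doubling input sizes; since doubling n multiplies an O(n^k) running time by 2^k, log2 of the ratio between consecutive timings estimates the exponent k:

public class ComplexityProbe {

    // Hypothetical stand-in for the function under test (deliberately O(n^2)).
    static long workload(int n) {
        long sink = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                sink += i ^ j;
        return sink; // returned so the JIT cannot optimise the loops away
    }

    public static void main(String[] args) {
        long prev = 0;
        for (int n = 1_000; n <= 16_000; n *= 2) {
            long start = System.nanoTime();
            long sink = workload(n);
            long elapsed = System.nanoTime() - start;
            if (prev > 0) {
                // log2(elapsed / prev) estimates the exponent k of O(n^k).
                double k = Math.log((double) elapsed / prev) / Math.log(2);
                System.out.printf("n=%d  estimated exponent ~ %.2f  (sink=%d)%n",
                        n, k, sink);
            }
            prev = elapsed;
        }
    }
}

A pass/fail test could then assert that the estimated exponent stays below some threshold, say 2.5 for an algorithm that is supposed to be quadratic, though the estimate is noisy for small inputs.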
I had a similar problem when I was tuning an algorithm. What I did was to run it in a loop and then divide the total time by the number of iterations and by the big-O value. I called this "q", or the "quality value". Interestingly, the value was more or less constant, as I expected, and a bit higher for bad parameters. The point, if you look at the definition of big-O, is that you get a measurement of the "less dominant" calculations beside the main one. These are often constant or of lower complexity. In my case they were slightly linear, but still nothing to worry about.
My suggestion is to calculate such quality values. Then you can see if there is a sudden rise, which indicates a side effect of your last changes that breaks your complexity assumption.
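A minimal sketch of that quality value, assuming the algorithm under test is O(n log n) (the workload, repetition count, and input sizes here are all hypothetical placeholders):

public class QualityValue {

    // Hypothetical algorithm under test, assumed to be O(n log n).
    static long algorithmUnderTest(int n) {
        long sink = 0;
        for (int i = 0; i < n; i++)
            for (int m = n; m > 1; m /= 2) // about log2(n) inner steps
                sink += i ^ m;
        return sink; // returned so the JIT cannot optimise the loops away
    }

    public static void main(String[] args) {
        int reps = 10; // loop the algorithm and average, as described above
        for (int n = 10_000; n <= 160_000; n *= 2) {
            long start = System.nanoTime();
            long sink = 0;
            for (int r = 0; r < reps; r++)
                sink += algorithmUnderTest(n);
            double avgNanos = (System.nanoTime() - start) / (double) reps;

            // q = average time divided by the big-O value n * log2(n);
            // it should stay roughly constant if the assumption holds.
            double q = avgNanos / (n * (Math.log(n) / Math.log(2)));
            System.out.printf("n=%d  q=%.3f  (sink=%d)%n", n, q, sink);
        }
    }
}

A sudden jump in q between two versions of the code would be exactly the kind of rise that signals a broken complexity assumption.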