
Unit tests to verify time complexity [closed]

Does anyone use unit tests to verify the time/space complexity of code?

asked Jan 28 '09 by Rocketmagnet


3 Answers

That's a perfectly good point to bring up. Sure, you can use unit tests for this.

A unit test is mainly a "way" of testing the outcome of your code. You test that it does what it is supposed to do and that it fails when you want it to fail.

Time and space are two highly important variables. If you "want" high speed and low space cost but the program actually does the opposite, then you have a bug, and that is what unit tests are for: finding bugs and fixing them.

Here is some pseudocode for a unit test of time consumption; you might already know how to do this, but it's a fairly nice way of testing it:

Unit_Test_To_See_If_X_Takes_More_Than_Y_Milliseconds(int max_milliseconds)
{
    // timestamp before running the code under test
    int start_millis = getMillis();

    do_operations_on_objects_and_functions();

    // timestamp after running the code under test
    int end_millis = getMillis();

    int elapsed_millis = end_millis - start_millis;

    // fail the test if the time budget was exceeded
    if ( elapsed_millis > max_milliseconds )
        Assert(ERROR);
}
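As a rough, runnable sketch of the same pattern in Python (unittest and time.perf_counter are standard library; do_operations_on_objects_and_functions and the 500 ms budget are placeholders for your own code and limit):

import time
import unittest


def do_operations_on_objects_and_functions():
    # Placeholder for the real code under test.
    sum(i * i for i in range(100_000))


class TimeBudgetTest(unittest.TestCase):
    MAX_MILLISECONDS = 500  # placeholder budget; tune for your code and hardware

    def test_stays_within_time_budget(self):
        start = time.perf_counter()
        do_operations_on_objects_and_functions()
        elapsed_ms = (time.perf_counter() - start) * 1000
        self.assertLessEqual(
            elapsed_ms, self.MAX_MILLISECONDS,
            f"took {elapsed_ms:.1f} ms, budget is {self.MAX_MILLISECONDS} ms")


if __name__ == "__main__":
    unittest.main()

Wall-clock budgets like this depend on the machine, so leave plenty of headroom to avoid flaky failures on slower build servers.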

Also, when you think about it, can you have too many tests? No, you can't. It's good to test for all outcomes, even "stupid" ones. If you don't test for an outcome and a bug appears there, does the bug not exist just because you didn't see it or didn't test for it? :)

Filip Ekberg


A possible solution would be to run Filip's test with 1 iteration, then 100, then 1,000, then 10,000 (and so on) and plot the results on a graph to determine the time complexity. Alternatively, use maths to derive the dominant growth factor between runs. I'm not sure how you'd test space, however.

I don't think the output could be a simple pass or fail unless there's an algorithm to derive the dominant term (N^2, etc.) and compare it against an expected one.
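One rough way to sketch that in Python (a sketch under assumptions, not a rigorous test: timings are noisy, so in practice you would repeat each measurement; algorithm_under_test and the 1.5 threshold are placeholders) is to time the function at growing input sizes and take the least-squares slope of log(time) against log(n), which approximates the exponent of the dominant term:

import math
import time


def algorithm_under_test(n):
    # Placeholder workload, roughly O(n).
    sum(i * i for i in range(n))


def estimated_exponent(sizes, func):
    # Least-squares slope of log(time) against log(n).
    points = []
    for n in sizes:
        start = time.perf_counter()
        func(n)
        elapsed = max(time.perf_counter() - start, 1e-9)  # avoid log(0)
        points.append((math.log(n), math.log(elapsed)))
    mean_x = sum(x for x, _ in points) / len(points)
    mean_y = sum(y for _, y in points) / len(points)
    numerator = sum((x - mean_x) * (y - mean_y) for x, y in points)
    denominator = sum((x - mean_x) ** 2 for x, _ in points)
    return numerator / denominator


if __name__ == "__main__":
    slope = estimated_exponent([10_000, 100_000, 1_000_000], algorithm_under_test)
    print(f"estimated exponent: {slope:.2f}")
    # Placeholder threshold: for an O(n) workload the slope should stay near 1.
    assert slope < 1.5, "growth looks worse than expected"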

Josh Smeaton


I had a similar problem when I was tuning an algorithm. What I did was run it in a loop and then divide the elapsed time by the number of loops and the big-O value. I called this "q", or "quality value". Interestingly, the value was more or less constant, as I expected, and a bit higher for bad parameters. The point is, if you look at the definition of big-O, that you get a measurement of the "less dominant" calculations beside the dominant one. They are often constant or of lower complexity. In my case it was slightly linear, but still nothing to worry about.

My suggestion is to calculate such quality values. Then you can watch for a sudden rise, which indicates a side effect of your latest changes that breaks your complexity assumption.
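A small Python sketch of that quality-value idea (my loose reading of it; algorithm_under_test, the assumed linear big-O term, and the factor-of-3 check are placeholders): run the algorithm in a loop, divide the total time by the number of loops and the assumed big-O term, and check that q stays roughly flat as n grows:

import time


def algorithm_under_test(n):
    # Placeholder workload, assumed to be O(n).
    sum(i * i for i in range(n))


def quality_value(n, func, big_o_term, loops=5):
    # Run the algorithm in a loop, then divide the total time by the number
    # of loops and the assumed big-O term; q should stay roughly constant.
    start = time.perf_counter()
    for _ in range(loops):
        func(n)
    return (time.perf_counter() - start) / (loops * big_o_term(n))


if __name__ == "__main__":
    sizes = [100_000, 300_000, 1_000_000]
    qs = [quality_value(n, algorithm_under_test, lambda m: m) for n in sizes]
    for n, q in zip(sizes, qs):
        print(f"n={n:>9}: q={q:.3e}")
    # Crude placeholder check: flag a sudden rise in q across sizes.
    assert max(qs) / min(qs) < 3, "q rose sharply; the complexity assumption may be broken"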

Harald Schilly