
How can one test time complexity "experimentally"?

Could it be done by keeping a counter to see how many iterations an algorithm goes through, or does the time duration need to be recorded?

asked Oct 20 '10 by Donald Taylor


People also ask

How can we measure time complexity?

In general, you can determine the time complexity by analyzing the program's statements (going line by line). However, you have to be mindful of how the statements are arranged: whether they sit inside a loop, make function calls, or even recurse. All these factors affect the runtime of your code.

How is complexity analysis measured?

Complexity is measured in two dimensions: time (how long a function takes to complete) and space (how much memory a function consumes while executing). So a simple way to think about a function's complexity is to consider the number of things it does or creates (x) multiplied by the size of the input (n).

How do you measure complexity of an algorithm explain with an example?

The complexity is written as O(<some function>), meaning that the number of operations is proportional to the given function multiplied by some constant factor. For example, if an algorithm takes 2*(n**2) operations, the complexity is written as O(n**2), dropping the constant multiplier of 2.
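To make that concrete, here is an illustrative sketch (the counting function is hypothetical, not from the text): an algorithm that performs exactly 2*(n**2) operations has an operations-to-n^2 ratio that is the constant 2 for every n, and that constant is precisely what gets dropped when we write O(n**2).

```java
public class DropConstant {
    // Illustrative algorithm that performs exactly 2*n*n "operations".
    static long ops(int n) {
        long count = 0;
        for (int pass = 0; pass < 2; pass++)      // two quadratic passes
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    count++;
        return count;
    }

    public static void main(String[] args) {
        for (int n : new int[]{10, 100, 1000})
            System.out.println("n=" + n + "  ops/n^2 = "
                    + (double) ops(n) / ((double) n * n));
        // The ratio is always 2.0: the constant factor dropped in O(n^2).
    }
}
```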

What is time complexity test case?

Time complexity tests verify the order of growth of the time complexity T(n) for various operations, generally checking that it is O(1) or O(n) rather than, say, O(n^2).


4 Answers

The currently accepted answer won't give you a theoretical estimate unless you can somehow fit the experimentally measured times to a function that approximates them. This answer describes a manual technique for doing that, filling the gap.

You start by guessing the theoretical complexity function of the algorithm. You also experimentally measure the actual complexity (number of operations, time, or whatever you find practical), for increasingly larger problems.

For example, say you guess an algorithm is quadratic. Measure (say) the time, and compute the ratio of the measured time to your guessed function (n^2):

for (int n = 5; n <= 10000; n++) {  // n: problem size
  long start = System.nanoTime();
  executeAlgorithm(n);
  long end = System.nanoTime();
  long totalTime = end - start;

  double ratio = (double) totalTime / ((double) n * n);
}

As n moves towards infinity, this ratio...

  • Converges to zero? Then your guess is too low; repeat with something bigger (e.g. n^3).
  • Diverges to infinity? Then your guess is too high; repeat with something smaller (e.g. n log n).
  • Converges to a positive constant? Bingo! Your guess is on the money (or at least approximates the theoretical complexity for n values as large as you tried).

Basically, that uses the definition of big-O notation: f(x) = O(g(x)) iff f(x) <= c * g(x) for all sufficiently large x, where f(x) is the actual cost of your algorithm, g(x) is the guess you made, and c is a constant. So you try to experimentally find the limit of f(x)/g(x); if your guess matches the real complexity, this ratio estimates the constant c.
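The loop above can be fleshed out into a runnable sketch. To keep the numbers deterministic, the workload here is a deliberately quadratic dummy standing in for your real `executeAlgorithm`, and it counts operations rather than wall-clock time; the ratio settles at a positive constant, confirming the quadratic guess.

```java
public class ComplexityGuess {
    // Stand-in for the algorithm under test: deliberately does n*n work.
    static long executeAlgorithm(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                ops++;               // one counted "operation"
        return ops;
    }

    // Ratio of measured cost to the guessed function g(n) = n^2.
    static double ratio(int n) {
        return (double) executeAlgorithm(n) / ((double) n * n);
    }

    public static void main(String[] args) {
        for (int n = 100; n <= 1600; n *= 2)
            System.out.println("n=" + n + "  ratio=" + ratio(n));
        // The ratio stays at a positive constant (here exactly 1.0),
        // so the quadratic guess is on the money.
    }
}
```

With real wall-clock times the ratio will only hover near a constant rather than hit it exactly, but the convergence test is the same.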

answered Nov 09 '22 by Dimitris Andreou


Algorithm complexity is defined as (something like):

the number of operations the algorithm does as a function of its input size.

So you need to try your algorithm with various input sizes (e.g. for a sort, try sorting 10 elements, 100 elements, etc.), and count each operation the algorithm performs (e.g. assignment, increment, mathematical operation, etc.).

This will give you a good "theoretical" estimation.
If, on the other hand, you want real-life numbers, use profiling.
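As a sketch of that counting approach (insertion sort and the counter are illustrative choices, not from the answer), instrument the sort so every comparison goes through a counting helper, then run it at increasing input sizes:

```java
public class OpCounter {
    static long comparisons;                 // operation counter

    static boolean greater(int x, int y) {   // every comparison goes through here
        comparisons++;
        return x > y;
    }

    static void insertionSort(int[] a) {
        comparisons = 0;
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && greater(a[j], key)) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }

    public static void main(String[] args) {
        for (int n : new int[]{10, 100, 1000}) {
            int[] a = new int[n];
            for (int i = 0; i < n; i++) a[i] = n - i;  // worst case: reverse-sorted
            insertionSort(a);
            System.out.println("n=" + n + "  comparisons=" + comparisons);
        }
        // Comparisons grow roughly 100x per 10x increase in n: quadratic.
    }
}
```

On the reverse-sorted worst case the count is exactly n(n-1)/2, so the growth pattern reads straight off the output.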

answered Nov 10 '22 by Oren A


As others have mentioned, the theoretical time complexity is a function of the number of CPU operations done by your algorithm. In general, processor time should be a good approximation of that, modulo a constant factor. But the real run time may vary for a number of reasons, such as:

  1. Processor pipeline flushes
  2. Cache misses
  3. Garbage collection
  4. Other processes on the machine

Unless your code is systematically causing some of these things to happen, then with a large enough number of statistical samples you should get a fairly good idea of the time complexity of your algorithm from the observed runtimes.
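One simple way to gather those samples is to time each run several times and take the median, which damps outliers caused by GC pauses, cache effects, and other processes. This is an illustrative helper (the names are made up for this sketch), not a substitute for a real benchmarking harness such as JMH:

```java
import java.util.Arrays;

public class RobustTiming {
    // Median of a sample of measurements (robust against outliers).
    static long median(long[] xs) {
        long[] s = xs.clone();
        Arrays.sort(s);
        return s[s.length / 2];
    }

    // Time a task repeatedly and report the median duration in nanoseconds.
    static long medianNanos(Runnable task, int repeats) {
        long[] times = new long[repeats];
        for (int r = 0; r < repeats; r++) {
            long start = System.nanoTime();
            task.run();
            times[r] = System.nanoTime() - start;
        }
        return median(times);
    }

    public static void main(String[] args) {
        long t = medianNanos(() -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i;
            // Note: the JIT may optimize away unused results; real harnesses
            // like JMH use "blackholes" to prevent dead-code elimination.
        }, 11);
        System.out.println("median ns: " + t);
    }
}
```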

answered Nov 10 '22 by Amit Prakash


The best way would be to actually count the number of "operations" performed by your algorithm. The definition of "operation" can vary: for an algorithm such as quicksort, it could be the number of comparisons of two numbers.

You could measure the time taken by your program to get a rough estimate, but various factors could cause this value to differ from the actual mathematical complexity.
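Taking the quicksort example literally, comparison counting can be sketched like this (a Lomuto-partition quicksort chosen for brevity; the names are illustrative):

```java
import java.util.Arrays;

public class QuickCount {
    static long comparisons;

    // Quicksort with Lomuto partitioning; counts element-pivot comparisons.
    static void quicksort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[hi], i = lo;           // last element as pivot
        for (int j = lo; j < hi; j++) {
            comparisons++;                   // one counted comparison
            if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;
        quicksort(a, lo, i - 1);
        quicksort(a, i + 1, hi);
    }

    public static void main(String[] args) {
        int[] a = {5, 3, 8, 1, 9, 2, 7};
        comparisons = 0;
        quicksort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a) + "  comparisons=" + comparisons);
    }
}
```

Running this at several input sizes and plotting the counts against n log n (average case) or n^2 (worst case) ties the counting approach back to the ratio test from the first answer.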

answered Nov 09 '22 by casablanca