I'm wondering if it's possible to express the time complexity of an algorithm that relies on convergence using Big O notation.
In most algorithmic analysis I've seen, we evaluate our function's rate of growth based on input size.
In the case of an algorithm that has some convergence criterion (where we repeat an operation until some defined error metric is below a threshold, or the rate at which the error metric is changing is below some threshold), how can we measure the time complexity? The number of iterations required to converge and exit that loop seems difficult to reason about, since the way an algorithm converges tends to depend on the content of the input rather than just its size.
How can we represent the time complexity of an algorithm that relies on convergence in Big O notation?
In order to analyse an algorithm that relies on convergence, it seems that we have to prove something about the rate of convergence.
Convergence usually has a termination condition that checks if our error metric is below some threshold:
do {
    // some operation with time complexity O(N)
} while (errorMetric > 0.01); // if this is false, we've reached convergence
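For concreteness, here is a minimal runnable sketch of that pattern in Python, using bisection on f(x) = x² − 2 as a stand-in for the loop body (the function, bounds, and threshold are illustrative assumptions, not part of the original question):

```python
def bisect_sqrt2(threshold=0.01):
    # Bisection search for sqrt(2): each pass halves the bracketing interval,
    # so the error metric (interval width) is cut in half per iteration.
    lo, hi = 1.0, 2.0
    iterations = 0
    while True:
        mid = (lo + hi) / 2.0
        if mid * mid < 2.0:
            lo = mid
        else:
            hi = mid
        iterations += 1
        error_metric = hi - lo           # bounds the error in mid
        if error_metric <= threshold:    # convergence: error below threshold
            return mid, iterations

approx, iters = bisect_sqrt2()
print(approx, iters)
```

Because the interval width after i iterations is exactly 1 / 2^i, the loop exits after a predictable number of iterations for any given threshold, which is the kind of property the analysis below relies on.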
Generally, we seek to establish something about the algorithm's manner of convergence - usually by identifying that it's a function of something.
For instance, we might be able to show that an algorithm's measure of error is a function of the number of iterations so that the error = 1 / 2^i, where i is the number of iterations.
This can be rewritten in terms of the number of iterations like so: iterations = log2(1 / E), where E is the desired error value.
Therefore, if we have an algorithm that performs some linear operation on each iteration of the convergence loop (as in the example above), we can surmise that our time complexity is O(N * log(1 / E)). Our function's rate of growth is dependent on the amount of error we're willing to tolerate, in addition to the input size.
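To check that arithmetic, a quick sketch under the assumed error model (error = 1 / 2^i, as in the example above):

```python
def iterations_to_converge(target_error):
    # Assumed error model from the text: error after i iterations is 1 / 2**i.
    error, iterations = 1.0, 0
    while error > target_error:
        error /= 2.0
        iterations += 1
    return iterations

# The closed form predicts iterations = ceil(log2(1 / E)).
for E in (0.1, 0.01, 0.001):
    print(E, iterations_to_converge(E))
```

Each tenfold tightening of E adds only a few iterations, which is why the E term enters the complexity as log(1 / E) rather than 1 / E.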
So, if we're able to determine some property about the behaviour of convergence, such as whether it's a function of the error or of the size of the input, then we can perform asymptotic analysis.
Take PageRank, for example: its computation uses power iteration, an algorithm that approximates the dominant eigenvector of a matrix. It seems possible to show that its rate of convergence is a function of the ratio of the first two eigenvalues (shown in the link).
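As a hedged illustration of that last point, here is a small power-iteration sketch in plain Python (the matrix, starting vector, and threshold are made up for the example). For the matrix below the eigenvalues are 3 and 1, so the error shrinks by roughly a factor of 1/3 per iteration:

```python
A = [[2.0, 1.0],
     [1.0, 2.0]]

def power_iteration(A, threshold=1e-6, max_iter=1000):
    v = [1.0, 0.0]                       # arbitrary starting vector
    for i in range(1, max_iter + 1):
        # Multiply: w = A v
        w = [A[0][0] * v[0] + A[0][1] * v[1],
             A[1][0] * v[0] + A[1][1] * v[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        w = [w[0] / norm, w[1] / norm]
        # Convergence criterion from the question: change between iterations
        delta = max(abs(w[0] - v[0]), abs(w[1] - v[1]))
        v = w
        if delta < threshold:
            return v, i
    return v, max_iter

vec, iters = power_iteration(A)
print(vec, iters)
```

The estimate settles on the dominant eigenvector (1, 1)/√2 after a few dozen iterations at most; with a smaller eigenvalue gap the same loop would need many more iterations, which is exactly the content-dependent behaviour the question describes.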
Asymptotic notations don't rely on convergence.
According to CLRS (Introduction to Algorithms, Third Edition, chapter 3, page 43):
When we look at input sizes large enough to make only the order of growth of the running time relevant, we are studying the asymptotic efficiency of algorithms. That is, we are concerned with how the running time of an algorithm increases with the size of the input in the limit, as the size of the input increases without bound. Usually, an algorithm that is asymptotically more efficient will be the best choice for all but very small inputs.
You mentioned that your code loops until a condition is satisfied, and you call satisfying that condition convergence. In that sense, convergence is not directly related to asymptotic notations like big O. However, termination still matters: a necessary condition for a piece of code to be an algorithm is that it terminates. You need to show that the loop finishes before you can call your code an algorithm and perform asymptotic analysis of it.
Another point: it's true that one run may take more time and another less, even for inputs of the same size. That is not a matter of asymptotic analysis by itself; it's the distinction between best case and worst case. We can analyse an algorithm's best case or worst case using big O or other asymptotic notations, and the most reliable approach is to analyse the worst case. Finally, to analyse your code, you should describe the steps of your algorithm exactly.