In algorithm descriptions, I sometimes encounter time complexities that look like O(n^(29/20) + m^(7/3)). I see where the + and the numerators in the exponents come from: + means successive loops, and the numerators come from nested loops. For example, this (useless) algorithm has O(n^2 + m) time complexity:
public int compute(int n, int m) {
    int answer = 0;
    // two nested loops over n: O(n^2)
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            answer += i - j;
        }
    }
    // one successive loop over m: + O(m)
    for (int i = 0; i < m; i++) {
        answer -= i;
    }
    return answer;
}
But I don't understand what could introduce the denominators (20 and 3 in the first example).
They come from a stricter analysis of the complexity function.
One common example is matrix multiplication: while the naive solution takes O(n^3) multiply operations, there are faster algorithms. One of the first improvements, Strassen's algorithm, uses 7 (instead of 8) multiplications to multiply two 2x2 matrices. If you invoke this recursively on all your submatrices, you will see it requires O(n^(log_2 7)) ~= O(n^2.807) multiplications.
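To see where the exponent comes from, here is a minimal sketch (my own, not from any particular library) of Strassen's trick on a single 2x2 block. Applying the same scheme recursively to n/2 x n/2 submatrices yields the recurrence T(n) = 7T(n/2) + O(n^2), which solves to O(n^(log_2 7)):

public int[][] strassen2x2(int[][] a, int[][] b) {
    // Strassen's 7 products (the naive scheme needs 8 multiplications)
    int m1 = (a[0][0] + a[1][1]) * (b[0][0] + b[1][1]);
    int m2 = (a[1][0] + a[1][1]) * b[0][0];
    int m3 = a[0][0] * (b[0][1] - b[1][1]);
    int m4 = a[1][1] * (b[1][0] - b[0][0]);
    int m5 = (a[0][0] + a[0][1]) * b[1][1];
    int m6 = (a[1][0] - a[0][0]) * (b[0][0] + b[0][1]);
    int m7 = (a[0][1] - a[1][1]) * (b[1][0] + b[1][1]);
    // reassemble the four quadrants of the product from the 7 products
    return new int[][] {
        { m1 + m4 - m5 + m7, m3 + m5 },
        { m2 + m4,           m1 - m2 + m3 + m6 }
    };
}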
Another common example is computing the Fibonacci sequence with a naive recursive solution:

F(n) = n > 2 ? F(n-1) + F(n-2) : 1

While you can definitely analyze it with more relaxed bounds and say the above is O(2^n), a closer analysis shows that you only generate F(n) stop clauses (invocations that hit the base case) in order to calculate the value of F(n). Since we know F(n) is in O(Phi^n), where Phi ~= 1.618 is the golden ratio, and since some basic algebra shows that the number of non-stop clauses is a constant factor of the number of stop clauses, we can derive that this solution runs in O(Phi^n) ~= O(1.618^n), which is a tighter bound.
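As a quick sanity check (a sketch of mine, with a hypothetical class name), you can count the recursive calls and watch them track Phi^n rather than 2^n:

public class FibCalls {
    static long calls = 0;

    static long fib(int n) {
        calls++; // count every invocation, stop clauses included
        return n > 2 ? fib(n - 1) + fib(n - 2) : 1;
    }

    public static void main(String[] args) {
        double phi = (1 + Math.sqrt(5)) / 2; // the golden ratio, ~1.618
        for (int n = 10; n <= 30; n += 5) {
            calls = 0;
            fib(n);
            // calls stays within a constant factor of phi^n, far below 2^n
            System.out.printf("n=%d calls=%d phi^n=%.0f 2^n=%d%n",
                    n, calls, Math.pow(phi, n), 1L << n);
        }
    }
}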
As for actual fractions in the exponent, you can get those as well, usually from root functions, the most common being the square root. For example, the following is a naive implementation to check whether a number n is prime:
public boolean isPrime(int n) {
    // trial division, but only up to sqrt(n)
    for (int i = 2; (long) i * i <= n; i++) {
        if (n % i == 0) return false;
    }
    return n >= 2; // 0 and 1 are not prime
}
As you can see, the above runs in O(sqrt(n)) = O(n^(1/2)) time.
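Fractions other than 1/2 arise the same way once such a loop is combined with others. For instance (a sketch of mine building on the isPrime above, not part of the original question), testing every number up to n runs n iterations of an O(sqrt(n)) check:

public int countPrimesUpTo(int n) {
    int count = 0;
    for (int k = 2; k <= n; k++) { // n iterations...
        if (isPrime(k)) {          // ...each costing at most O(n^(1/2))
            count++;
        }
    }
    return count;                  // total: O(n * n^(1/2)) = O(n^(3/2))
}

More exotic exponents like 29/20 or 7/3 typically fall out of the same kind of tight analysis, e.g. solving a recurrence or balancing the costs of two nested phases, rather than from the loop structure alone.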