I'm trying to brush up on my Big-O understanding for a test I have coming up (only a very basic understanding of Big-O is required, obviously) and was doing some practice problems in my book.
They gave me the following snippet:
public static void swap(int[] a)
{
    // Despite the name, this reverses the array in place.
    int i = 0;               // index starting at the front
    int j = a.length - 1;    // index starting at the back
    while (i < j)
    {
        // Exchange a[i] and a[j], then move both indexes toward the middle.
        int temp = a[i];
        a[i] = a[j];
        a[j] = temp;
        i++;
        j--;
    }
}
Pretty easy to understand, I think. It has two iterators, each covering half the array with a fixed amount of work per step (which I think clocks them both in at O(n/2)).
Therefore O(n/2) + O(n/2) = O(2n/2) = O(n)
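To double-check that intuition, here is a minimal counting sketch (my own code, not from the book; the class name and test sizes are just for illustration). It mirrors the loop in the snippet but only counts how many times the body runs:

public class SwapCount
{
    // Mirrors the while loop in swap(), but only counts how many
    // times the loop body runs for an array of length n.
    static int countIterations(int n)
    {
        int count = 0;
        int i = 0;
        int j = n - 1;
        while (i < j)
        {
            count++; // the real method does a constant amount of work here
            i++;
            j--;
        }
        return count;
    }

    public static void main(String[] args)
    {
        // Prints roughly n/2 for each n, i.e. the count grows linearly with n.
        for (int n : new int[] { 10, 100, 1000 })
        {
            System.out.println(n + " -> " + countIterations(n));
        }
    }
}

For n = 10, 100 and 1000 it prints 5, 50 and 500: about n/2 iterations, which is exactly the constant factor of 1/2 that big-O discards.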
Now please forgive me, as this is my current understanding and that was my attempt at a solution to the problem. I have found many examples of Big-O online, but none quite like this, where the iterators both increment and modify the array at basically the same time.
The fact that it has one loop is making me think it's O(n) anyway.
Would anyone mind clearing this up for me?
Thanks
For comparison, a simple example with two nested 'for' loops, where for each of the N elements the inner loop executes N times, would execute N^2 times in total. In big O notation we would say that such an algorithm has a complexity of O(N^2). Your snippet has only a single loop, so that does not apply here.
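As a rough illustration of that contrasting case (my own sketch, not code from the question), two fully nested loops over the same array a would look like this:

// For each of the n elements the inner loop runs n times,
// so the body executes n * n = n^2 times: O(n^2).
for (int x = 0; x < a.length; x++)
{
    for (int y = 0; y < a.length; y++)
    {
        // constant-time work would go here
    }
}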
On each iteration of the while loop, both indexes move one step toward each other, so the loop iterates roughly n/2 times and does a constant amount of work each time. There is no sorting going on, so the time complexity of the entire method is O(n).
Big O notation is the mathematical expression of how long an algorithm takes to run depending on the size of its input, usually talking about the worst-case scenario.
For contrast, a triangular pair of nested loops, where the inner loop runs n, n-1, ..., 1 times, gives f(n) = n(n/2) + n/2 = (n^2)/2 + n/2 = (n^2 + n)/2, which is O(n^2). Nothing like that applies to the snippet in the question, because there is no nesting.
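That formula corresponds to a triangular nested loop like the following (again just an illustrative sketch, not anything from the question):

// The inner loop runs n, n-1, n-2, ..., 1 times, which sums to
// n(n+1)/2 = (n^2 + n)/2 iterations of the body: O(n^2).
for (int x = 0; x < a.length; x++)
{
    for (int y = x; y < a.length; y++)
    {
        // constant-time work would go here
    }
}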
The fact that it has one loop is making me think it's O(n) anyway.
This is correct. Not because it has one loop, but because the number of iterations that loop makes depends on the size of the array only up to a constant factor, and big-O notation ignores constant factors. O(n) means that the running time grows in proportion to the size of the array; that it actually takes about half that many iterations does not matter for big-O.
In other words: if your algorithm takes time n + X, Xn, or Xn + Y (for constants X and Y), they will all come down to big-O O(n).
It gets different if the number of loop iterations changes not by a constant factor, but as, say, a logarithmic or exponential function of n. For instance, if the size is 100 and the loop runs 2 times, the size is 1000 and the loop runs 3 times, and the size is 10000 and the loop runs 4 times, then it would be, for instance, O(log(n)).
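A loop with that behavior could look like this (my own sketch; the factor of 10 is chosen to match the numbers above):

// The counter is multiplied by 10 each pass, so for an array of
// length n the body runs about log10(n) times: O(log n).
int steps = 0;
for (long step = 1; step < a.length; step *= 10)
{
    steps++;
}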
It would also be different if the loop were independent of the size. I.e., if you always looped 100 times, regardless of the array size, your algorithm would be O(1) (i.e., it operates in constant time).
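For instance (again an illustrative sketch):

// Always runs exactly 100 times no matter how large a is: O(1).
for (int k = 0; k < 100; k++)
{
    // constant-time work would go here
}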
I was also wondering if the equation I came up with to get there was somewhere in the ballpark of being correct.
Yes. In fact, if your equation ends up being some form of n * C + Y, where C is some constant and Y is some other value, the result is O(n), regardless of whether C is greater than 1 or smaller than 1. In your case C is 1/2 (the loop body runs about n/2 times) and Y covers the couple of assignments before the loop, so the result is O(n).