We did an exercise in class today dealing with big-O notation. Here is one of the problems:
void modifyArray(int a[], int size)
{
    int max = a[0];
    for (int i = 1; i < size / 2; ++i)
    {
        if (max < a[i])
            max = a[i];
    }
    for (int j = 1; j <= size * size; ++j)
    {
        ++max;
        cout << max;
    }
}
My intuition tells me that f(n) = n/2 + n^2 = O(n^2), but according to my professor the answer is simply O(n). Could anyone explain why, and when we are allowed to change what we consider to be the input size?
I understand that it is not a nested loop -- that is not what is confusing me. What I don't understand is why, for a given input size, the second loop is considered to be only O(n). The only way I can make sense of this is to isolate the second loop and redefine the input size as n = size^2. Am I on the right track?
There are different asymptotic notations in which the time complexities of algorithms are measured. Here, the "O" (Big O) notation is used to express time complexity. Time complexity estimates the time taken to run an algorithm, and it is calculated by counting the elementary operations.
Big O notation is a way to measure an algorithm's efficiency. It measures the time it takes to run your function as the input grows -- in other words, how well the function scales. There are two parts to measuring efficiency: time complexity and space complexity.
In a formal big-O proof, you first choose values for k and c, then show that 0 <= f(x) <= c*g(x) for every x >= k. For example: Claim: 3x^2 + 7x + 2 is O(x^2). Proof: Consider c = 4 and k = 100; then for every x >= 100, 0 <= 3x^2 + 7x + 2 <= 4x^2.
If the code you present is exactly the code your professor is commenting on, then (s)he's wrong. As written, the second loop runs size * size times, printing a value on each iteration, which is definitely O(n^2), as n = size is the sane choice.
Yes, you're right to think you could say something like "O(n) where n is the square of the array size", but that's complication without purpose.
As others have said, if the cout << max is removed, the compiler may optimise the loop down to a single O(1) assignment, in which case the function's remaining O(n) work dictates the overall big-O efficiency. But it may not - who said you're even enabling optimisation? The best way to describe the big-O efficiency is therefore "if optimisation kicks in, then O(n); otherwise O(n^2)" - it's not useful to assert one or the other and then hide your assumptions, and the consequences if they're wrong, in a footnote.