I work as a programmer, but have no computer science background, so recently I've been following along with the excellent MIT OpenCourseWare intro to Computer Science and Programming. In the course of it, this question is asked: "will any program written using only function definitions and calls, the basic arithmetic operators, assignment and conditionals run in constant time?"
I thought the answer was yes, since all of these operations seem pretty simple. But as you smart people probably already knew, the correct answer was no, apparently because of the potential for indefinite recursion.
It seems like I just don't understand the implications of "in constant time". The way I pictured the meaning, it sounded like it just meant that a simple operation would take a known amount of time to complete. I accept that recursion can lead to your program running forever, but aren't the individual operations still taking a finite and measurable amount of time each... even if there are now an infinite number of them? Or does the answer just mean that two infinitely recursive programs cannot validly be said to take the same amount of time to run?
If anyone can give me an authoritative definition of "in constant time", and the implications thereof, I'd be very grateful!
An algorithm is said to be constant time (also written as O(1) time) if the value of T(n), the time it takes on an input of size n, is bounded by a value that does not depend on the size of the input. For example, accessing any single element in an array takes constant time, as only one operation has to be performed to locate it.
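For instance (a minimal Java sketch; the array and index are made up for illustration):

int[] values = {10, 20, 30, 40};
int x = values[2]; // one index computation, no matter how long the array is: O(1)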
Constant Time — O(1): An algorithm is said to run in constant time when its running time is not dependent on the input data (n). No matter the size of the input data, the running time stays bounded by the same constant. For example: if a > b: return True.
Constant time means there is a hard bound on how much time each operation will take to perform. Linear time means that the longer the ArrayList is (the more objects it contains), the longer the operation will take.
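To make that concrete with java.util.ArrayList (a small sketch; the variable names are made up):

import java.util.ArrayList;
import java.util.List;

List<String> names = new ArrayList<>(List.of("a", "b", "c"));
String first = names.get(0);         // constant time: backed by an array, one index lookup
boolean found = names.contains("c"); // linear time: may compare against every element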
An O(n) linear-time operation can absolutely be faster than an O(1) constant-time one. The reason is that constant factors are totally ignored in Big O, which is a measure of how fast an algorithm's running time grows as the input size n increases, and nothing else. It's a measure of growth rate, not running time.
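As a hedged illustration of that point (the method name and numbers are made up, but the shape of the argument holds):

// A hash lookup is O(1), but each call must hash the key and probe a bucket -
// a fixed but non-trivial cost. Scanning a 4-element array is O(n), but each
// step is one cheap comparison. For tiny n the scan can win; Big O only tells
// you which approach wins as n grows large.
static int scan(int[] a, int target)
{
    for (int i = 0; i < a.length; i++)
    {
        if (a[i] == target)
        {
            return i; // at most n cheap comparisons
        }
    }
    return -1; // "not found" sentinel
}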
Constant time effectively means you can give a constant upper bound on how long the program will take to run, one which isn't affected by any of the input parameters.
Compare that with, say, linear time (for some input n, which will often be the size of the input data rather than a direct value), which means that the upper bound of the time taken can be expressed as mn + k for some values of m and k.
Note that it doesn't mean a program will take the same amount of time for any input data just because it runs in constant time. For example, consider this method:
int foo(int n)
{
    if (n == 0)
    {
        return 0;
    }
    int j = n + 1;
    int k = j * 2;
    return k;
}
That's doing more work in the case where n is non-zero than in the case where it's zero. However, it's still constant time: at most, it's going to do one comparison, one addition, and one multiplication.
Now compare that with a recursive function:
public int foo(int n)
{
    if (n <= 1)
    {
        return 1;
    }
    return n * foo(n - 1); // recursive factorial: one call per value of n down to 1
}
This will recurse n times, so it's linear in n. You can get much worse than linear, however. Consider this method for computing a Fibonacci number:
public int fib(int n)
{
    if (n == 0)
    {
        return 0;
    }
    if (n == 1)
    {
        return 1;
    }
    return fib(n - 2) + fib(n - 1);
}
That doesn't look much worse than the previous version, but this is now exponential (the upper bound is most easily expressed as O(2^n)). It's still only using simple comparisons, addition, and function calls, though.
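A quick way to see the blow-up for yourself is a sketch like this, with a hypothetical static counter added to the same method:

static long calls = 0;

public static int fib(int n)
{
    calls++; // count every invocation
    if (n == 0) return 0;
    if (n == 1) return 1;
    return fib(n - 2) + fib(n - 1);
}

// fib(20) makes 21,891 calls and fib(30) makes 2,692,537:
// the count multiplies by roughly 1.6 each time n goes up by one.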
"Constant time" means that the operation will execute in an amount of time (or memory space - that's another thing often measured) independent of the input size. Usually you pick a variable (let's use n
) to indicate the input size.
O(1) - constant time: running time does not depend on n
O(n) - linear time: running time is linearly proportional to n
O(n^2) - quadratic time: running time is proportional to the square of n
These are just a few examples; the possibilities are endless. See the wiki article on complexity.
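The quadratic case isn't shown elsewhere on this page, so here is a minimal sketch (countPairs is a made-up name): two nested loops over the input each contribute a factor of n.

static int countPairs(int[] a)
{
    int count = 0;
    for (int i = 0; i < a.length; i++)
    {
        for (int j = 0; j < a.length; j++)
        {
            count++; // the body runs n * n times, so time grows as n^2
        }
    }
    return count;
}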
Here are a few specific ways that a program composed of only the operations you mention could take various amounts of time:
int n = // some value
doSomething
doSomething
doSomething
Note how it is three somethings in length, independent of what n is: O(1)
int n = // some value
def f(n):
    if n == 0 return
    doSomething
    f(n-1)
f(n)
Now we run a something for each value 1..n (linear time, O(n))
And we can have a bit of fun -
int n = // some value
def f(n):
    if n == 0 return
    doSomething
    f(n-1)
    f(n-1)
f(n)
What's the running time here? (i.e. how many somethings do we execute?) :)
"Constant time" has the same meaning as "O(1)", which doesn't mean that an algorithm runs in a fixed amount of time, it just means that it isn't proportional to the length/size/magnitude of the input. i.e., for any input, it can be computed in the same amount of time (even if that amount of time is really long).
In "constant time" generally means that the time it will take to compute the result is independent of the size of the input.
For example, calculating the length of a list/vector in most managed languages is done in constant time, no matter how large the list is: the size is stored as a separate field and updated as the list grows and shrinks, so calculating the count is as simple as reading that field.
Calculating the size of a doubly linked list, by contrast, is often not constant time. The list can be mutated on both ends, and if there is no central place where a count is stored, determining the length requires visiting the list and counting how many elements are in there. So as the list grows, so does the time it takes to calculate the answer.
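A hedged sketch of both cases (hypothetical class and field names; note that real libraries such as java.util.LinkedList do cache their size, so this models a simpler hand-rolled list):

class CountingList
{
    private int count; // updated on every add and remove

    int size()
    {
        return count; // O(1): a single field read, however long the list is
    }
}

class WalkingList
{
    static class Node { Node next; }
    private Node head;

    int size()
    {
        int n = 0;
        for (Node cur = head; cur != null; cur = cur.next)
        {
            n++; // O(n): every node must be visited
        }
        return n;
    }
}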