I've been applying for jobs, and every time I hear questions about algorithm time/space complexity, I cringe and stumble. No matter how much I read, my brain seems programmed not to get any of it, and I think the reason comes down to the fact that I have a very weak mathematical background from skipping school. This may not be the usual S.O. question, and it may even be removed for being fundamentally about the maths, but at least I'm hoping I'll figure out where to go next with this.
I don't know how deep your interviewers will go into this, so here are just a few examples. The whole "complexity" thing is just a way to indicate how much time (or memory) an algorithm uses as the input grows.
Now, if you have an array with values, accessing the value at a given index is O(1) -- constant. It doesn't matter how many elements are in the array; if you have an index, you can get the element directly.
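A minimal Python sketch of that idea (the list contents here are just made-up sample data):

```python
# Accessing an element by index is O(1): the cost does not depend on
# how many elements the list holds.
values = [7, 42, 3, 19, 8]   # could just as well be millions of elements
x = values[2]                # one step, regardless of len(values)
print(x)                     # 3
```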
If, on the other hand, you are looking for a specific value, you have no choice but to look at the elements one by one (you can stop when you find it, but in the worst case that means looking at all of them, which is what counts for complexity). Thus, searching an unsorted array is O(n): the runtime grows in proportion to the number of elements.
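A small sketch of such a linear search in Python (the function name and sample data are just for illustration):

```python
def linear_search(items, target):
    """Return the index of target, or -1 if it is not present.

    In the worst case every element is inspected, so this is O(n).
    """
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

print(linear_search([7, 42, 3, 19, 8], 19))  # 3
print(linear_search([7, 42, 3, 19, 8], 99))  # -1
```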
If, however, you have a sorted array, then you can do a "binary search", which is O(log n). "Log n" is the base-2 logarithm, which is basically the inverse of 2^n. For example, 2^10 is 2*2*2*...*2 (10 times) = 1024, and log2(1024) is 10. Thus, algorithms with O(log n) are generally considered pretty good: to find an element in a sorted array using binary search, if the array has up to 1024 elements, the binary search only has to look at 10 of them to find any value. For 1025-2048 elements it would be at most 11 peeks, for 2049-4096 it would be 12, and so on. So adding more elements only slowly increases the runtime.
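A sketch of binary search in Python, assuming the input list is already sorted (names and data are illustrative):

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1 if absent.

    Each iteration halves the remaining search range, so at most
    about log2(n) iterations are needed: O(log n).
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1      # target can only be in the upper half
        else:
            hi = mid - 1      # target can only be in the lower half
    return -1

print(binary_search([1, 3, 8, 19, 42, 77], 19))  # 3
print(binary_search([1, 3, 8, 19, 42, 77], 5))   # -1
```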
Of course, things can get a lot worse. A trivial sorting algorithm tends to be O(n^2), meaning it needs roughly 2^2 = 4 "operations" for an array with just 2 elements, 3^2 = 9 if the array has 3, 4^2 = 16 if it has 4 elements, and so on. That's pretty bad, considering that an array with just 1000 elements would already require on the order of 1000*1000 = 1 million comparisons to sort. This is called quadratic growth, and polynomial growth can get even worse: O(n^3), O(n^4), etc. Increasingly bad.
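As one example of such a trivial O(n^2) sort, here is a selection sort sketch in Python (again, names and data are just for illustration):

```python
def selection_sort(items):
    """Sort a list in place with a trivial O(n^2) algorithm.

    For each of the n positions we scan the rest of the list for the
    smallest remaining value, so the number of comparisons grows
    roughly like n^2.
    """
    n = len(items)
    for i in range(n):
        smallest = i
        for j in range(i + 1, n):
            if items[j] < items[smallest]:
                smallest = j
        items[i], items[smallest] = items[smallest], items[i]
    return items

print(selection_sort([8, 3, 42, 19, 7]))  # [3, 7, 8, 19, 42]
```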
A "good" sorting algorithm is O(n*log n). Assuming an array with 1024 elements, this would be 1024*10=10240 compares --- much better than the 1 million we had before.
Just take these O(...) as indicators of runtime behavior (or memory footprint, if applied to memory). I plugged in real numbers so you can see how they change, but the exact numbers are not important, and usually these complexities describe the worst case. Still, just by looking at the numbers, "constant time" is obviously best, and fast-growing classes (quadratic and worse, let alone truly exponential ones like O(2^n)) are bad because the runtime (or memory use) skyrockets really fast.
EDIT: also, you are not really interested in constant factors; you don't usually see "O(2n)". This would still be "O(n)" -- the runtime relates directly to the number of elements.
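A tiny sketch of why the constant factor is dropped (the function and data are just for illustration):

```python
def count_and_sum(items):
    """Two separate passes over the list: about 2n operations total.

    The constant factor 2 is dropped, so this is still O(n),
    the same class as a single pass.
    """
    count = 0
    for _ in items:          # first pass: n steps
        count += 1
    total = 0
    for value in items:      # second pass: another n steps
        total += value
    return count, total

print(count_and_sum([7, 42, 3]))  # (3, 52)
```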
To analyze the time/space complexity of an algorithm, high-school-level math should be fine. I studied this at university during my first semester and I was just fine.
The fields of interest for the basics are:
The above applies to analyzing the complexity of algorithms. Calculating the complexity of problems is a much deeper field that is still under active research -- complexity theory. It requires extensive knowledge of set theory, theory of computation, advanced calculus, linear algebra and much more.