 

Why isn't every algorithm O(1)?

Suppose we have an array of n random integers and we need to compute their sum.

Claim: the best algorithm does this in O(n).

But I claim we can do it in O(1). Why?

We know for sure that n is bounded (since it's an int, and ints are finite), which means I can sum all the elements in fewer than 2,147,483,647 steps.
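For concreteness, here is a minimal sketch (in Python, with made-up values) of the summation being discussed; the loop body runs exactly once per element, which is where the O(n) count comes from:

    # Minimal sketch of the summation in question (illustrative values only).
    # The loop body executes once per element, i.e. n times.
    def array_sum(values):
        total = 0
        for v in values:      # n iterations for an array of size n
            total += v
        return total

    print(array_sum([3, 1, 4, 1, 5, 9, 2, 6]))  # prints 31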

asked Feb 11 '21 by Ariel

People also ask

What is an O(1) algorithm?

O(1) describes algorithms that take the same amount of time to compute regardless of the input size. For instance, if a function takes the same time to process ten elements as it does to process one million elements, then we say it has a constant growth rate, or O(1).

Is O(1/n) possible?

No, this is not possible: as n tends to infinity, 1/n tends to 0, so the running time would have to approach zero. The bound behaves like O(0) for a massive n, and closer to constant time for a small n.

What can be said of an algorithm whose runtime may be described as O(1)?

Constant time, O(1): an algorithm is said to run in constant time if it requires the same amount of time regardless of the input size.

Is an O(1)-time algorithm the fastest?

The fastest possible running time for any algorithm is O(1), commonly referred to as constant running time. In this case, the algorithm always takes the same amount of time to execute, regardless of the input size.

Is it possible to construct an algorithm that is O(1/n)?

It may be possible to construct an algorithm that is O(1/n). One example would be a loop that iterates some multiple of f(n) - n times, where f(n) is some function whose value is guaranteed to be greater than n and the limit of f(n) - n as n approaches infinity is zero. The calculation of f(n) would also need to be constant for all n.

Does finding an algorithm that works in less running time matter?

In conclusion, finding an algorithm that takes less running time and also requires less memory can make a huge difference in how well a program performs.

What is the difference between O(log n) and O(1/n) algorithms?

Many O(log n) algorithms do exist; one example is finding a value in a balanced binary search tree. However, these are still different from O(1/n), since log(n) is an increasing function while 1/n is a decreasing one. By definition, a sublinear-time algorithm is any algorithm whose running time grows more slowly than the input size n. A minimal binary-search sketch follows.
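As a hedged illustration of an O(log n) lookup (binary search on a sorted list, the array analogue of a balanced search-tree lookup, using Python's standard bisect module):

    import bisect

    # Binary search: roughly log2(n) comparisons on a sorted list.
    def contains(sorted_values, target):
        i = bisect.bisect_left(sorted_values, target)
        return i < len(sorted_values) and sorted_values[i] == target

    data = list(range(0, 1_000_000, 2))   # sorted even numbers
    print(contains(data, 123_456))        # True
    print(contains(data, 123_457))        # False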

Is it possible for a function to be O(1/n) at runtime?

In practice you are correct: if you do construct a function with 1/n runtime (which is easy), it will eventually hit some minimum time, effectively making it an O(1) algorithm when implemented. There is nothing stopping the algorithm from being O(1/n) on paper, though.
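A hedged sketch of what such a function could look like (K is an arbitrary constant chosen purely for illustration): the loop body runs about K/n times, which is O(1/n) on paper, but the fixed cost of the call and of computing the range puts a constant floor under the real running time.

    K = 1_000_000  # arbitrary constant, for illustration only

    def shrinking_work(n):
        # The loop body runs about K // n times: O(1/n) iterations on paper,
        # but the call overhead itself is a constant floor in practice.
        steps = 0
        for _ in range(K // n):
            steps += 1
        return steps

    print(shrinking_work(10))          # 100000 iterations
    print(shrinking_work(100_000))     # 10 iterations
    print(shrinking_work(10_000_000))  # 0 iterations, yet the call still costs time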




5 Answers

Why isn't every algorithm O(1)?

Because, when we analyse the complexity of algorithms, we choose to ignore the limitations of concrete computers (i.e. we assume resources are unlimited).

Asymptotic complexity gives us useful information about how the complexity grows. Merely concluding that there is a constant limit because of the hardware and ignoring how the limit is reached doesn't give us valuable information.


Besides, the limit that you perceive is much, much higher in reality. Computers aren't limited to representing at most 2,147,483,647 integer values. Using appropriate data structures, computers can represent arbitrarily large numbers - until memory runs out... but memory can be streamed from disk, and there are data centers that easily provide hundreds of terabytes of storage.

Although, to be fair: if we allow arbitrary-length integers, then the complexity of the sum is worse than linear, because even a single addition has linear complexity (in the number of digits).
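To see that last point concretely, here is a rough timing sketch using Python's built-in arbitrary-precision integers; the absolute numbers are machine-dependent, but the cost of a single addition should grow roughly linearly with the number of bits:

    import timeit

    # Adding arbitrary-length integers costs time proportional to their length.
    for bits in (10_000, 100_000, 1_000_000):
        a = (1 << bits) - 1
        b = (1 << bits) - 3
        t = timeit.timeit(lambda: a + b, number=10_000)
        print(f"{bits:>9} bits: {t:.4f} s for 10,000 additions")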


Once we take concrete hardware into consideration, it makes more sense to use concrete units in the analysis: how many seconds does the program take for some concrete input on this particular hardware? And the way to find the answer is not math, but concrete measurement.
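For example, a minimal measurement sketch (the numbers it prints are specific to the machine it runs on, which is exactly the point):

    import random
    import timeit

    # Measure the actual summation time for concrete input sizes on this machine.
    for n in (1_000, 100_000, 10_000_000):
        data = [random.randint(-1_000, 1_000) for _ in range(n)]
        seconds = timeit.timeit(lambda: sum(data), number=10)
        print(f"n = {n:>10,}: {seconds:.4f} s for 10 runs")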

answered Oct 19 '22 by eerorika

Why isn't every algorithm O(1)?

TL;DR: Because Big O notation is used to quantify an algorithm with regard to how its cost grows as its input grows.

Informally, you can think of it as a framework that humans invented to classify algorithms. If such a framework yielded O(1) for every algorithm, it would defeat its own purpose in the first place, i.e. classifying algorithms.

More detailed answer

Let us start by clarifying what Big O notation means in the current context. From (source) one can read:

Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. (..) In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows.

The following statement is not accurate:

But, I claim we can do this in O(1). why? we know for sure that n is locked in some field (since it's int and int is finite) which means I can sum all elements in less than 2,147,483,647 steps?

One cannot simply bound the work by 2,147,483,647 steps and then claim "O(1)", since Big O notation does not denote a single function but rather a set of functions sharing a certain asymptotic upper bound; as one can read from the source:

Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.

Informally, in computer-science time-complexity and space-complexity theory, one can think of Big O notation as a categorization of algorithms by their worst-case behavior with respect to time and space, respectively. For instance, O(n):

An algorithm is said to take linear time/space, or O(n) time/space, if its time/space complexity is O(n). Informally, this means that the running time/space increases at most linearly with the size of the input (source).

So the complexity is O(n), because as the input grows, the work grows linearly, not constantly.

answered Oct 19 '22 by dreamcrash


A theory in which all algorithms have complexity O(1) would be of little use, you have to acknowledge that.

In complexity theory, N is treated as an unbounded variable, precisely so that you obtain a non-trivial asymptotic bound.

For practical algorithms, the asymptotic complexity is useful in that it does model the actual complexity well for moderate values of N (well below the largest int).


In some pathological cases (such as the most sophisticated matrix multiplication algorithms), N must exceed the capacity of an int before the asymptotic behavior becomes beneficial.


Side remark:

I recall a paper claiming O(1) time for a certain operation. That operation involved a histogram of pixel values, whose range is indeed constant. But this was a misleading trick, because the hidden constant was proportional to 256 for an 8-bit image and 65,536 for a 16-bit one, making the algorithm dead slow. Claiming O(H), where H is the number of bins, would have been more informative and more honest.
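A hedged sketch of that point: the histogram pass itself touches every pixel once, but just setting up (and later scanning) the bins costs O(H), which is 256 bins for 8-bit data and 65,536 bins for 16-bit data, even for a tiny image.

    def histogram(pixels, bit_depth):
        # O(N + H): one pass over the pixels plus work proportional to the
        # number of bins H = 2**bit_depth (256 for 8-bit, 65,536 for 16-bit).
        bins = [0] * (2 ** bit_depth)   # the "hidden constant" H
        for p in pixels:
            bins[p] += 1
        return bins

    tiny_image = [0, 255, 255, 128]          # 4 pixels
    print(len(histogram(tiny_image, 8)))     # 256 bins
    print(len(histogram(tiny_image, 16)))    # 65536 bins allocated for 4 pixels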

answered Oct 19 '22 by Yves Daoust


You choose what N is and which runtime dependency you are interested in. You can say:

"Well computers have limited ressources, so I simply consider those limits as constant factors and now all my algorithms are O(1)"

However, this view is of limited use, because we want to use complexity to classify algorithms, to see which perform better and which worse, and a classification that puts all algorithms in the same bucket does not help with that.

answered Oct 19 '22 by 463035818_is_not_a_number


The integers are countably infinite by definition, so you cannot prove termination on the basis of n alone. If you redefine the integers as a bounded interval of countable numbers, you can claim O(1) if and only if n is such a bounded integer.

IMO, the useful part of O-notation is the information about time complexity in relation to the input. In a scenario where my input is bounded, I just focus on the behavior within those bounds, and that is O(n) in this case. You can claim O(1), but that strips away the information.

answered Oct 19 '22 by lopho