 

Is this technically an O(1) algorithm for "Hello World"?



Big O notation in this context is being used to describe a relationship between the size of the input of a function and the number of operations that need to be performed to compute the result for that input.

Your operation has no input that the output can be related to, so using Big O notation is nonsensical. The time that the operation takes is independent of the inputs of the operation (which is...none). Since there is no relationship between the input and the number of operations performed, you can't use Big O to describe that non-existent relationship.
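
The question's code is not reproduced on this page; judging from the identifiers the answers below mention (Main, DateTime.Now, TwentyYearsLater), it is roughly a busy-wait loop along the lines of the following sketch, in which the exact date and messages are guesses rather than the OP's code:

    using System;

    class Hello
    {
        static void Main()
        {
            // Some fixed date about twenty years in the future (exact value unknown).
            DateTime TwentyYearsLater = new DateTime(2045, 1, 1);

            // Busy-wait: keep polling the clock until that date has passed ...
            while (DateTime.Now < TwentyYearsLater)
            {
                // spin
            }

            // ... and only then print the greeting.
            Console.WriteLine("Hello, World!");
        }
    }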


Big-O notation means roughly 'given an operation on an amount of work, N, how much calculation time, proportional to N, does the algorithm take?'. E.g., sorting an array of size N can take N^2, N log(N), etc.

This has no amount of input data to act on. So it's not O(anything).

Even worse: this isn't technically an algorithm. An algorithm is a method for computing the value of a mathematical function -- maths functions are a mapping from one input to an output. Since this takes no input and returns nothing, it's not a function in the mathematical sense. From Wikipedia:

An algorithm is an effective method that can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state.

What this is, technically, is a control system. From Wikipedia:

A control system is a device, or set of devices, that manages, commands, directs or regulates the behavior of other devices or systems.

For people wanting a more in-depth answer about the difference between mathematical functions and algorithms, and about the more powerful abilities of computers to do side-effecting things like console output, displaying graphics, or controlling robots, have a read of this paper on the Strong Church-Turing Thesis.

Abstract

The classical view of computing positions computation as a closed-box transformation of inputs (rational numbers or finite strings) to outputs. According to the interactive view of computing, computation is an ongoing interactive process rather than a function-based transformation of an input to an output. Specifically, communication with the outside world happens during the computation, not before or after it. This approach radically changes our understanding of what is computation and how it is modeled.

The acceptance of interaction as a new paradigm is hindered by the Strong Church-Turing Thesis (SCT), the widespread belief that Turing Machines (TMs) capture all computation, so models of computation more expressive than TMs are impossible. In this paper, we show that SCT reinterprets the original Church-Turing Thesis (CTT) in a way that Turing never intended; its commonly assumed equivalence to the original is a myth. We identify and analyze the historical reasons for the widespread belief in SCT. Only by accepting that it is false can we begin to adopt interaction as an alternative paradigm of computation.


No, your code has a time complexity of O(2^|<DeltaTime>|), for a proper encoding of the current time.

Please let me first apologize for my English.

What Big O is and how it works in CS

Big O notation is not used to tie the input of a program to its running time.
Big O notation is, leaving rigor aside, a way to express the asymptotic ratio of two quantities.

In the case of algorithm analysis these two quantities are not the input (for which one must first have a "measure" function) and the running time.
They are the length of the encoding of an instance of the problem¹ and a metric of interest.

The commonly used metrics are

  1. The number of steps required to complete the algorithm in a given model of computation.
  2. The space required, if any such concept exists, by the model of computation.

A TM is implicitly assumed as the model, so that the first point translates to the number of applications of the transition² function, i.e. "steps", and the second one translates to the number of different tape cells written at least once.

It is also often implicitly assumed that we can use a polynomially related encoding instead of the original one; for example, a function that searches an array from start to end has O(n) complexity despite the fact that an encoding of an instance of such an array has length n*b + (n-1), where b is the (constant) number of symbols of each element. This is because b is considered a constant of the computation model, and so the expression above and n are asymptotically the same.
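
As a concrete illustration of this point (my example, not part of the original answer): a linear scan performs about n comparisons for an array of n elements, and we call it O(n) even though the written-out instance is n*b + (n-1) symbols long.

    static class Examples
    {
        // Linear search: about n comparisons for n elements, hence O(n),
        // even though an encoding of the instance has length n*b + (n-1)
        // (b symbols per element, n-1 separators); b is a constant of the
        // model, so the two measures are asymptotically the same.
        public static int LinearSearch(int[] a, int target)
        {
            for (int i = 0; i < a.Length; i++)
            {
                if (a[i] == target)
                    return i;
            }
            return -1; // not found
        }
    }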

This also explains why an algorithm like Trial Division is an exponential algorithm despite essentially being a for(i=2; i<=sqrt(N); i++)-like algorithm³.

See this.
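
A sketch of what is meant (mine; the long type and method name are just illustrative choices): the loop below runs about sqrt(N) times, which is polynomial in the value N but exponential in the length |<N>| of N's binary encoding, since b bits can encode values up to about 2^b.

    static class Primality
    {
        // Trial division: roughly sqrt(N) iterations. An input of b = |<N>|
        // bits can hold values up to about 2^b, so sqrt(N) can be about
        // 2^(b/2): exponential in the length of the input's encoding.
        public static bool IsPrime(long n)
        {
            if (n < 2) return false;
            for (long i = 2; i * i <= n; i++) // i <= sqrt(n)
            {
                if (n % i == 0)
                    return false;
            }
            return true;
        }
    }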

This also means that big O notation may use as many parameters as one needs to describe the problem; it is not unusual to have a k parameter for some algorithms.

So this is not about the "input" or that "there is no input".

Now, the case study

Big O notation doesn't question your algorithm; it just assumes that you know what you are doing. It is essentially a tool applicable everywhere, even to algorithms which may be deliberately tricky (like yours).

To solve your problem you used the current date and a future date, so they must be part of the problem somehow; simply put: they are part of the instance of the problem.

Specifically the instance is:

<DeltaTime>

Where <> means any non-pathological encoding of choice.

See below for very important clarifications.

So your big O time complexity is just O(2^|<DeltaTime>|), because you do a number of iterations that depends on the value of the current time. There is no point in putting in other numeric constants, as the asymptotic notation is useful precisely because it eliminates constants (so, for example, the use of O(10^|<DeltaTime>|*any_time_unit) is pointless).
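
To make the counting explicit (my arithmetic, under the assumption that each poll of the clock is one "step" of some fixed physical duration): the loop performs about DeltaTime / step-duration iterations, i.e. a number of steps proportional to the value of DeltaTime, and a value written with b = |<DeltaTime>| binary-like symbols can be as large as about 2^b.

    using System;

    static class StepCount
    {
        // Rough step count of the busy-wait, assuming one clock poll = one
        // "step" of fixed duration. Steps grow linearly with the VALUE of
        // DeltaTime, and a value encoded in b = |<DeltaTime>| symbols can
        // be about 2^b, hence O(2^|<DeltaTime>|) steps.
        public static long EstimatedSteps(TimeSpan deltaTime, TimeSpan stepDuration)
        {
            return deltaTime.Ticks / stepDuration.Ticks;
        }
    }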

Where the tricky part is

We made one important assumption above: that the model of computation reifies⁵ time, and by time I mean the (real?) physical time. There is no such concept in the standard computational model; a TM does not know time, and we link time with the number of steps because this is how our reality works⁴.

In your model, however, time is part of the computation. You may use the terminology of functional people by saying that Main is not pure, but the concept is the same.

To understand this, one should note that nothing prevents the Framework from using a fake time that runs twice, five, ten times faster than physical time. This way your code will run in "half", "one fifth", "one tenth" of the "time".
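
One way to picture this (a hypothetical IClock abstraction of my own, not something the answer or the Framework provides): if the loop polls an injected clock instead of DateTime.Now directly, a clock that reports time running k times faster makes the program finish in 1/k of the physical time without changing a single step of the computation.

    using System;

    // Hypothetical abstraction: the loop only ever asks "what time is it now?".
    interface IClock
    {
        DateTime Now { get; }
    }

    // A clock that pretends time runs 'factor' times faster than physical time.
    class FastClock : IClock
    {
        private readonly DateTime _start = DateTime.Now;
        private readonly double _factor;
        public FastClock(double factor) { _factor = factor; }

        public DateTime Now =>
            _start + TimeSpan.FromTicks((long)((DateTime.Now - _start).Ticks * _factor));
    }

    static class Waiter
    {
        // The same busy-wait, parameterized by the clock it observes: the
        // number of iterations depends on what the clock reports, not on
        // physical time itself.
        public static void WaitUntil(IClock clock, DateTime deadline)
        {
            while (clock.Now < deadline)
            {
                // spin
            }
        }
    }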

This reflection is important for choosing the encoding of <DeltaTime>, which is essentially a condensed way of writing <(CurrentTime, TimeInFuture)>. Since time does not exist a priori, the encoding of CurrentTime could very well be the word Now (or any other choice), and the day before could be encoded as Yesterday, thereby breaking the assumption that the length of the encoding increases as the physical time goes forward (and the one of DeltaTime decreases).

We have to properly model time in our computational model in order to do something useful.

The only safe choice we can make is to encode timestamps with increasing lengths (but still not using unary) as physical time steps forward. This is the only true property of time we need, and the one the encoding needs to capture. It is only with this type of encoding that your algorithm may be given a time complexity.

Your confusion, if any, arises from the fact that the word time in the phrases 'What is its time complexity?' and 'How much time will it take?' refers to two very different things.

Alas, the terminology uses the same words, but you can try using "step complexity" in your head and re-ask yourself your question. I hope that will help you understand what the answer really is ^_^


¹ This also explains the need for an asymptotic approach, as each instance has a different, yet not arbitrary, length.
² I hope I'm using the correct English term here.
³ This is also why we often find log(log(n)) terms in the math.
⁴ Id est, a step must occupy some finite, non-null, connected interval of time.
⁵ This means that the computational model has a knowledge of physical time in it, that is, it can express it in its own terms. An analogy is how generics work in the .NET framework.


Although there are a bunch of great answers here, let me rephrase all of them a bit.

Big-O notation exists to describe functions. When applied to the analysis of algorithms, this requires us to first define some characteristic of the algorithm in terms of a function. The common choice is considering the number of steps as a function of input size. As noted in other answers, coming up with such a function in your case seems strange, because there is no clearly defined "input". We can still try to do it, though:

  • We can regard your algorithm as a constant function which takes any input of any size, ignores it, waits a fixed amount of time, and finishes. In this case its runtime is f(n) = const, and it is an O(1)-time algorithm. This is what you expected to hear, right? Yes, it is, technically, an O(1)-algorithm.
  • We can regard the TwentyYearsLater as the "input-size"-like parameter of interest. In this case the runtime is f(n) = (n-x) where x is the "now time" at the moment of invocation. When seen this way, it is an O(n)-time algorithm. Expect this counter-argument whenever you go showing your technically-O(1) algorithm to other people.
  • Oh, but wait, if k=TwentyYearsLater is the input, then its size n is, actually, the number of bits needed to represent it, i.e. n = log(k). The dependency between the size of the input n and runtime is therefore f(n) = 2^n - x. Seems like your algorithm has just become exponentially slow! Ugh.
  • Another input to the program is in fact the stream of answers given by the OS to the sequence of DateTime.Now invocations in the loop. We can actually imagine that this whole sequence is provided as input at the moment we run the program. The runtime can then be considered to depend on a property of this sequence - namely its length until the first TwentyYearsLater element. In this case the runtime is again f(n) = n and the algorithm is O(n) (see the sketch right after this list).
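
A sketch of that last reading (the method name and signature are invented for illustration): if the successive values returned by DateTime.Now are treated as an input sequence, the loop does one step per element until it meets the first value at or past TwentyYearsLater, so it is O(n) in the length of that prefix.

    using System;
    using System.Collections.Generic;

    static class StreamView
    {
        // One step per clock answer until the first one >= twentyYearsLater:
        // O(n) in the length of that prefix of the input sequence.
        public static int StepsUntil(IEnumerable<DateTime> clockAnswers, DateTime twentyYearsLater)
        {
            int steps = 0;
            foreach (var now in clockAnswers)
            {
                if (now >= twentyYearsLater)
                    break;
                steps++;
            }
            return steps;
        }
    }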

But then again, in your question you did not even say you were interested in runtime. What if you meant memory use? Depending on how you model the situation, you can say the algorithm is O(1)-memory or, perhaps, O(n)-memory (if the implementation of DateTime.Now needs to keep track of the whole invocation sequence for some reason).

And if your goal was to come up with something absurd, why don't you go all in and say that you are interested in how the size of the algorithm's code in pixels on the screen depends on the chosen zoom level? This might be something like f(zoom) = 1/zoom, and you can proudly declare your algorithm to be O(1/n)-pixel size!