What is the Big-O efficiency of a stack, queue, set, and deque as it pertains to insertion, search, indexing, space, and deletion complexities?
Big O notation (the "O" stands for "order") is the language we use in computer science to describe the performance of an algorithm. Now that we are familiar with time complexity, we can dive into how stacks and queues work, the differences between them, and their Big O notation.
Complexity: in a doubly-linked-list implementation, and assuming no allocation/deallocation overhead, the time complexity of all deque operations is O(1).
In a nutshell, stacks follow the last-in-first-out (LIFO) principle and queues follow the first-in-first-out (FIFO) principle. With out-of-the-box JavaScript array methods, a stack built on push/pop runs in O(1) per operation, while a queue built on push/shift costs O(n) per dequeue, because shift re-indexes every remaining element.
Also, a C++ std::deque supports range-based for loop iteration, which is unavailable for std::stack and std::queue (both are container adaptors that expose no iterators, so iterating over them requires creating a temporary copy). That does not make deque faster at every operation, but it does make it the most flexible of the three.
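To make the deque point concrete, here is a small Java sketch using java.util.ArrayDeque (the array-backed deque in the JDK; java.util.LinkedList would give the doubly-linked-list variant). Insertions and removals at either end are (amortized) O(1), and unlike the C++ adaptors, the deque can be iterated directly:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DequeDemo {
    public static void main(String[] args) {
        Deque<Integer> deque = new ArrayDeque<>();
        deque.addFirst(1);   // O(1): insert at the front
        deque.addLast(2);    // O(1): insert at the back

        // A deque can be iterated directly, front to back.
        for (int v : deque) {
            System.out.println(v);           // prints 1 then 2
        }

        System.out.println(deque.pollFirst()); // O(1): removes the front -> 1
        System.out.println(deque.pollLast());  // O(1): removes the back  -> 2
    }
}
```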
This really depends on the implementation of each data structure. I'll go over a few, so you can get the idea of how to determine this.
Let's assume the Stack class is implemented using a Node (it could be implemented with a LinkedList, an ArrayList, an array, etc. as well).
Node top, bottom;
public Stack(Node n){
top = bottom = n;
}
Stack has three primary methods: peek, push, and pop.
public int peek(){
return top.value; //only return value
}
There wasn't much processing involved. It just returned a primitive value. This is O(1) for both time and space.
public void push(Node n){
n.next = top;
top = n;
}
Still no real processing. This is O(1) for both time and space. Let's skip pop() and try something more interesting: a method called contains(int v). It will search the stack to see if it contains a Node whose value equals v.
public boolean contains(int v){
Node current = top;
while(current != null){
if(current.value == v){
return true;
}
current = current.next;
}
return false;
}
Basically, we move through the Node references until we find the value or we reach the end. In some cases you'll find the value early, and in other cases further down the road. However, we care about the worst case! The worst possible case is that we have to check every single Node. If there are n Nodes, that gives us O(n).
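Putting the snippets together, a minimal runnable version of this stack (with the pop() we skipped filled in under the same assumptions) might look like this:

```java
public class Stack {
    private static class Node {
        int value;
        Node next;
        Node(int value) { this.value = value; }
    }

    private Node top;

    public void push(int v) {        // O(1): link a new node in front of top
        Node n = new Node(v);
        n.next = top;
        top = n;
    }

    public int peek() {              // O(1): only return the top value
        return top.value;
    }

    public int pop() {               // O(1): unlink and return the top node's value
        int v = top.value;
        top = top.next;
        return v;
    }

    public boolean contains(int v) { // O(n): may have to visit every node
        for (Node current = top; current != null; current = current.next) {
            if (current.value == v) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        Stack s = new Stack();
        s.push(1);
        s.push(2);
        System.out.println(s.peek());      // 2
        System.out.println(s.contains(1)); // true
        System.out.println(s.pop());       // 2
        System.out.println(s.contains(2)); // false
    }
}
```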
You can apply these analysis skills to the other data structures, so you can solve the rest yourself. It's not too bad. Good luck. :)
I've been using this: http://bigocheatsheet.com/ , but there is virtually no information on WHY those Big O values are what they are. You have to dig in and research it for yourself.
I'm surprised you couldn't find this information online.
There's a distinction to be made between the data structures you listed in the question.
I'll start with the queue and stack data structures. Both the stack and the queue offer specialized access to their data: there is no random access, only sequential access, so you can't talk about random-access performance. Any decent implementation of a stack or a queue will offer O(1) insert and remove operations (push/pop and enqueue/dequeue, respectively).
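As a quick illustration, a java.util.ArrayDeque can play both roles: used as a stack, insert and remove happen at the same end; used as a queue, they happen at opposite ends. Either way, each operation is (amortized) O(1):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StackQueueDemo {
    public static void main(String[] args) {
        // Used as a stack (LIFO): push and pop at the same end.
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(1);
        stack.push(2);
        System.out.println(stack.pop()); // 2 (last in, first out)

        // Used as a queue (FIFO): offer at one end, poll at the other.
        Deque<Integer> queue = new ArrayDeque<>();
        queue.offer(1);
        queue.offer(2);
        System.out.println(queue.poll()); // 1 (first in, first out)
    }
}
```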
A set is a very different structure, and its performance will depend heavily on the underlying implementation. For instance, you can implement a set using an underlying hash table for near-constant-time insert, remove, and find operations, or you can implement it using a balanced search tree for O(log n) operations.
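In Java, those two choices correspond to java.util.HashSet (hash table, O(1) average add/remove/contains) and java.util.TreeSet (red-black tree, O(log n)). A small sketch of the trade-off:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class SetDemo {
    public static void main(String[] args) {
        Set<Integer> hashSet = new HashSet<>(); // hash table: O(1) average per operation
        Set<Integer> treeSet = new TreeSet<>(); // red-black tree: O(log n) per operation

        for (int i = 0; i < 5; i++) {
            hashSet.add(i);
            treeSet.add(i);
        }

        System.out.println(hashSet.contains(3)); // true
        System.out.println(treeSet.contains(3)); // true

        // What the tree buys you for the extra log factor: sorted order.
        System.out.println(treeSet); // [0, 1, 2, 3, 4]
    }
}
```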