 

Time complexity and insertion into std::list

Insertion into std::list is claimed to be constant time, regardless of whether it is made at the front, middle or back of the container.

On the other hand, acquisition of memory for the newly inserted item is handled by the standard allocator, which uses operator new. AFAIK operator new is not guaranteed to run in constant time.

When operator new looks for available space on the heap, it must be sure that it will not overwrite previously allocated memory, so it has to keep track of what has already been allocated on the heap. From this I conclude that insertion must be at least linear in the number of elements already in the list.

What is wrong with this reasoning?


My question reads:

  • How is it possible to say that insertion into a list is constant time, when acquiring memory for each new node is not guaranteed to be constant time?
asked Mar 16 '14 by mzimbres


1 Answer

Note: It is important to distinguish "real, wall-clock time" from the "time" discussed when diving into time complexity. When time complexity is the topic, "time" must not be confused with "milliseconds spent doing something".


What is the definition of constant time?

Wikipedia is often said to be a bad reference in many contexts, but in this case (and a lot of others) the available definition is correct and helps describe how things work.

The article about time complexity says the following about constant time:

Wikipedia - Constant Time

An algorithm is said to be constant time (also written as O(1) time) if the value of T(n) is bounded by a value that does not depend on the size of the input. For example, accessing any single element in an array takes constant time as only one operation has to be performed to locate it.
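To illustrate that definition, a minimal sketch (assuming a std::vector as the example container): indexing performs a fixed amount of work no matter how large the container is, while a linear search does not.

```cpp
#include <cstddef>
#include <vector>

// O(1): a single indexing operation, independent of v.size().
int at_index(const std::vector<int>& v, std::size_t i) {
    return v[i];
}

// O(n): the number of comparisons grows with v.size().
bool contains(const std::vector<int>& v, int value) {
    for (int x : v)
        if (x == value)
            return true;
    return false;
}
```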


Since the insertion into a std::list does not depend on the number of elements in the list, we say that insertion is constant time: each insertion, no matter where or when, consists of the same number of elementary operations, none of which is related to the size of the list.
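For example, inserting at the front, in the middle or at the back all do the same bounded amount of work once you have the position. A usage sketch (not the library's internals): note that locating a middle position with std::advance is linear, but the insert itself, given the iterator, is constant time.

```cpp
#include <iterator>
#include <list>

int main() {
    std::list<int> l{1, 2, 3, 4, 5};

    l.push_front(0);                  // constant time
    l.push_back(6);                   // constant time

    // std::advance walks the list (linear), but the insert call
    // itself, given the iterator, is constant time.
    auto mid = l.begin();
    std::advance(mid, l.size() / 2);
    l.insert(mid, 42);                // constant time
}
```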



But what if operator new is not O(1)?

Honestly, it doesn't matter: even if the complexity of new implicitly depended on how many entities we had previously allocated, the complexity of our list insertion would be unchanged. The allocation is indifferent to the size of the list.

O(1), constant time, means that the time to execute something is unrelated to the size of the input of the algorithm in question. Even if new isn't O(1), our insertion is O(1), since the statement describes the insertion itself, not the allocator.

The paths taken inside our list insertion all include a call to operator new. The path does not change because of the size of our list, so the path's complexity is constant time.
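One way to see this: every insertion performs exactly one node allocation, no matter how large the list already is. A sketch using a hypothetical counting allocator (CountingAlloc and AllocCounter are made-up names for this illustration):

```cpp
#include <cstddef>
#include <iostream>
#include <list>

// Shared counter so every rebound specialization of the allocator
// increments the same number.
struct AllocCounter {
    static inline std::size_t calls = 0;   // C++17 inline variable
};

// Minimal allocator that counts calls to allocate().
template <typename T>
struct CountingAlloc : AllocCounter {
    using value_type = T;

    CountingAlloc() = default;
    template <typename U>
    CountingAlloc(const CountingAlloc<U>&) {}

    T* allocate(std::size_t n) {
        ++calls;
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }
};

template <typename T, typename U>
bool operator==(const CountingAlloc<T>&, const CountingAlloc<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const CountingAlloc<T>&, const CountingAlloc<U>&) { return false; }

int main() {
    std::list<int, CountingAlloc<int>> l;
    for (int i = 0; i < 1000; ++i) {
        std::size_t before = AllocCounter::calls;
        l.push_back(i);
        // Typically prints 1: one node allocation per insertion,
        // regardless of how many elements l already holds.
        std::cout << (AllocCounter::calls - before) << '\n';
    }
}
```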



So what are we dealing with?

Hannibal_Smith in ##c++ at freenode said something clever, and I liked it so much that I will include it in this post: The cost model is a pointer machine.

Even though the sentence might be a little misleading, it serves the purpose of explaining how the insertion is O(1) even though parts of the algorithm aren't constant time.

The insertion into a std::list is described from the perspective of a machine that only deals with pointers, and from this perspective it is nothing other than O(1). The allocation done inside this algorithm is not related to the complexity of the algorithm itself.
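Under that cost model, inserting a node is just a fixed number of pointer assignments. A hand-rolled sketch (not libstdc++'s actual code; it assumes a circular list with a sentinel node, which mirrors common implementations): one allocation plus four pointer writes, whether the list holds ten nodes or ten million.

```cpp
#include <cassert>

// Minimal doubly linked node; a circular list with a sentinel node is
// assumed, so prev/next are never null.
struct Node {
    int   value;
    Node* prev;
    Node* next;
};

// Insert a new node immediately before `pos`.
// Pointer-machine accounting: one allocation + four pointer writes,
// independent of how many nodes the list already contains.
Node* insert_before(Node* pos, int value) {
    Node* n = new Node{value, pos->prev, pos};  // writes n->prev and n->next
    pos->prev->next = n;                        // third pointer write
    pos->prev = n;                              // fourth pointer write
    return n;
}

int main() {
    // An empty circular list: the sentinel points at itself.
    Node sentinel{0, nullptr, nullptr};
    sentinel.prev = sentinel.next = &sentinel;

    Node* a = insert_before(&sentinel, 1);   // list: 1
    Node* b = insert_before(&sentinel, 2);   // list: 1 2
    assert(sentinel.next == a && a->next == b && b->next == &sentinel);

    delete a;
    delete b;
}
```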

answered Sep 28 '22 by Filip Roséen - refp