I have read that the time complexity of adding items to the end of a std::vector is amortized constant, and that inserting items at the front and back of a std::deque is constant. Since both of these containers have random access iterators, accessing an element at any index is also constant. Please let me know if I have any of these facts wrong.

My question is: if accessing an element in a std::vector or std::deque is constant, then why is the time complexity of removing an element via erase O(n)? One of the answers here states that removing elements via erase is O(n). I know that erase removes the elements between the starting iterator and the ending one, so does that answer basically mean it is O(n) in the number of elements between the two iterators, and that removing a single element from a vector/deque at any index will be constant?
Erasing an element in a vector is O(n) because, after destroying the element, all successive elements must be shifted left to fill the gap it leaves. If a vector has n elements, then in the worst case we need to shift n-1 elements, hence the complexity is O(n).
Things are a bit different for std::vector and std::deque, and they also differ between C++98 and C++11.

The complexity of std::vector::erase() is linear both in the length of the erased range and in the number of elements between the end of the range and the end of the container (so erasing an element from the end takes constant time).
C++2003 [lib.vector.modifiers] reads:

iterator erase(iterator position);
iterator erase(iterator first, iterator last);

...

Complexity: The destructor of T is called the number of times equal to the number of the elements erased, but the assignment operator of T is called the number of times equal to the number of elements in the vector after the erased elements.
C++14 draft N4140 [vector.modifiers] reads:

Complexity: The destructor of T is called the number of times equal to the number of the elements erased, but the move assignment operator of T is called the number of times equal to the number of elements in the vector after the erased elements.
So you can see that the C++11/14 implementation is more efficient in general, since it performs move assignment instead of copy assignment, but the complexity remains the same.
The complexity of std::deque::erase() is linear both in the length of the erased range and in the minimum of two numbers: the number of remaining elements before the start of the range, and the number of remaining elements after the end of the range. So erasing an element from either the beginning or the end takes constant time.
C++2003 [lib.deque.modifiers]:
iterator erase(iterator position);
iterator erase(iterator first, iterator last);
Complexity: The number of calls to the destructor is the same as the number of elements erased, but the number of the calls to the assignment operator is at most equal to the minimum of the number of elements before the erased elements and the number of elements after the erased elements.
C++14 draft N4140 [deque.modifiers]/5:
Complexity: The number of calls to the destructor is the same as the number of elements erased, but the number of calls to the assignment operator is no more than the lesser of the number of elements before the erased elements and the number of elements after the erased elements.
So it's the same in C++98 and C++11/14, again except that C++11 can choose between move assignment and copy assignment (here I see some inconsistency in the standard, because the wording doesn't mention move assignment as it does for std::vector - that might be a reason for another question).
Note also the "at most" and "no more" in the wordings. This allows implementations to be more efficient than linear, though in practice they are linear (DEMO).