The Standard says:
A deque is a sequence container that supports random access iterators (27.2.7). In addition, it supports constant time insert and erase operations at the beginning or the end; insert and erase in the middle take linear time.
However, it also says in the same Clause:
All of the complexity requirements in this Clause are stated solely in terms of the number of operations on the contained objects. [ Example: The copy constructor of type vector<vector<int>> has linear complexity, even though the complexity of copying each contained vector<int> is itself linear. — end example ]
Doesn't this mean that insertion at the beginning of, say, deque<int> is allowed to take linear time, as long as it doesn't perform more than a constant number of operations on the ints that are already in the deque and the new int object being inserted?
For example, suppose that we implement a deque using a "vector of size-K vectors". It seems that once every K times we insert at the beginning, a new size-K vector must be added at the beginning, so all the other size-K vectors must be moved. This would mean the time complexity of insertion at the beginning is amortized O(N/K²), where N is the total number of elements (the O(N/K) moves of block vectors happen once every K insertions); but K is constant, so this is just O(N). Yet it seems that this is allowed by the Standard, because moving a size-K vector doesn't move any of its elements, and the "complexity requirements" are "stated solely in terms of the number of operations" on the contained int objects.
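As a concrete sketch of that hypothetical structure (the class name and the default K are mine, and this illustrates only the counting argument, not a conforming deque):

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Hypothetical "vector of size-K vectors" from the question above.
template <typename T, std::size_t K = 4>
class BlockDeque {
    // Blocks live on the heap, so growing the outer vector moves only the
    // unique_ptr handles -- zero operations on the contained T objects.
    std::vector<std::unique_ptr<std::vector<T>>> blocks_;
public:
    void push_front(const T& value) {
        if (blocks_.empty() || blocks_.front()->size() == K) {
            // Happens once every K front insertions; shifts O(N/K) block
            // handles, but touches no T object.
            blocks_.insert(blocks_.begin(),
                           std::make_unique<std::vector<T>>());
            blocks_.front()->reserve(K);
        }
        // At most K-1 moves of T objects -- a constant for fixed K.
        blocks_.front()->insert(blocks_.front()->begin(), value);
    }

    std::size_t size() const {
        std::size_t n = 0;
        for (const auto& b : blocks_) n += b->size();
        return n;
    }

    // Only the first block can be partially filled, so indexing is simple.
    const T& operator[](std::size_t i) const {
        std::size_t front_size = blocks_.front()->size();
        if (i < front_size) return (*blocks_.front())[i];
        i -= front_size;
        return (*blocks_[1 + i / K])[i % K];
    }
};
```

Every `push_front` performs at most K element moves plus, amortized, O(N/K²) moves of block handles, which is exactly the situation the question asks about.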
Does the Standard really allow this? Or should we interpret it as having a stricter requirement, i.e. constant number of operations on the contained objects plus constant additional time?
For example, suppose that we implement a deque using a "vector of size-K vectors".
That wouldn't be a valid implementation. Insertion at the front of a vector invalidates all of the pointers/references into the container; deque is required not to invalidate any pointers/references on front insertion.
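That guarantee is easy to observe directly; a minimal check using nothing beyond standard std::deque:

```cpp
#include <deque>
#include <string>

// Front insertion on a std::deque invalidates iterators, but references
// and pointers to existing elements must remain valid.
bool references_survive_push_front() {
    std::deque<std::string> d{"b", "c"};
    std::string& ref = d.front();   // reference to "b"
    std::string* ptr = &d[1];       // pointer to "c"

    for (int i = 0; i < 1000; ++i)
        d.push_front("x");          // would invalidate these in a vector

    return ref == "b" && *ptr == "c";
}
```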
But let's ignore that for now.
But it seems that this is allowed by the Standard, because moving a size-K vector doesn't move any of its elements, and the "complexity requirements" are "stated solely in terms of the number of operations" on the contained int objects.
Yes, that would be allowed. Indeed, real implementations of deque are not so dissimilar from that (though they don't use std::vector itself, for obvious reasons). The broad outline of a deque implementation is an array of pointers to blocks (with space for growth at both the front and back), with each block containing up to X items as well as pointers to the next/previous blocks (to make single-element iteration fast).
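A toy sketch of that layout, with assumed names and a deliberately tiny block size (real library deques differ in block size and bookkeeping details):

```cpp
#include <cstddef>

constexpr std::size_t kBlockSize = 4;   // tiny, for illustration only

struct IntDequeSketch {
    int*        map[8] = {};     // array of pointers to blocks, with
                                 // slack at both ends for front/back growth
    std::size_t first_block = 3; // element 0 lives in map[first_block]...
    std::size_t first_index = 0; // ...at this offset within that block
    std::size_t count = 0;

    // Random access is two dereferences regardless of count: pick the
    // block from the map, then index within the block.
    int& at(std::size_t i) {
        std::size_t off = first_index + i;
        return map[first_block + off / kBlockSize][off % kBlockSize];
    }
};
```

The map with slack at both ends is what makes both push_front and push_back cheap: usually they only claim a slot in an existing or freshly allocated block, and only occasionally does the map itself need to be regrown.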
If you insert enough elements at the front or back, then the array of block pointers has to grow. That requires an operation that is linear in the number of items in the deque, but it doesn't actually operate on the items themselves, so it doesn't count; the specification has nothing to say about the complexity of that operation.
Without this provision, I'm not sure the unique feature set of deque (fast front/back insertion plus random access) would be implementable.