According to the C++ standard, std::deque is something like
std::vector<std::array<T, M> *>
If so, how is it possible that insertion or removal of elements at the end or beginning is constant O(1)? If the capacity of the vector is exceeded and we insert something at the end or beginning, there is no guarantee that the entire vector is not reallocated, so we have O(N/M), which is actually O(N), isn't it? (N is the size of the deque.)
A deque is generally implemented as a collection of memory blocks. These memory blocks contain the elements at contiguous locations. When we create a deque object, it internally allocates a memory block to store the elements at contiguous locations.
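Mapped to rough code, the bookkeeping might look like the sketch below. This is only an illustration of the idea, not any real library's layout; the names DequeSkeleton, map, first, and the block size M are invented here.

#include <cstddef>

// Hypothetical skeleton of a deque: a dynamic array of pointers ("the
// map"), each pointing to a fixed-size block of M contiguous elements.
template <typename T, std::size_t M = 8>
struct DequeSkeleton {
    T**         map      = nullptr; // array of pointers to element blocks
    std::size_t map_size = 0;       // number of slots in the map
    std::size_t first    = 0;       // flattened index of the first element
    std::size_t count    = 0;       // number of stored elements

    // Element i lives in block (first + i) / M at offset (first + i) % M,
    // so random access costs two indexations: O(1).
    T& operator[](std::size_t i) {
        std::size_t pos = first + i;
        return map[pos / M][pos % M];
    }
};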
The math behind them notwithstanding, std::deque has random access iterators, and as such any in-place sorting algorithm can sort the data within using that functionality, without requiring O(N) additional storage. std::list doesn't provide random access iterators, and thus is not std::sort-able.
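For instance (a minimal sketch):

#include <algorithm>
#include <deque>
#include <list>

int main() {
    std::deque<int> d{3, 1, 2};
    std::sort(d.begin(), d.end());    // OK: deque iterators are random access

    std::list<int> l{3, 1, 2};
    // std::sort(l.begin(), l.end()); // won't compile: bidirectional iterators
    l.sort();                         // list provides a member sort() instead
}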
A deque is a sequence container that, like a vector (23.3.6), supports random access iterators. In addition, it supports constant time insert and erase operations at the beginning or the end; insert and erase in the middle take linear time.
Complexity. In a doubly-linked list implementation and assuming no allocation/deallocation overhead, the time complexity of all deque operations is O(1).
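That claim is about the abstract deque data structure; it is easy to see with std::list, a doubly-linked list, where every end operation touches only one node (a minimal sketch):

#include <list>

int main() {
    std::list<int> dq;   // doubly-linked list used as a deque
    dq.push_back(1);     // O(1): link a node at the tail
    dq.push_front(0);    // O(1): link a node at the head
    dq.pop_back();       // O(1): unlink the tail node
    dq.pop_front();      // O(1): unlink the head node
    // Unlike std::deque, this gives no O(1) random access.
}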
If the capacity of the vector is exceeded and we insert something at the end or beginning, there is no guarantee that the entire vector is not reallocated, so we have O(N/M), which is actually O(N), isn't it? (N is the size of the deque.)
Yes, but no elements need to be copied/moved to reallocate that vector; only the pointers to the arrays need to be moved to the new storage.
The standard states:
[container.requirements.general]
-2- All of the complexity requirements in this Clause are stated solely in terms of the number of operations on the contained objects.
So the fact that there are N/M pointer copies isn't counted against the O(1) complexity requirement. And since operations on pointers are very cheap, the performance of those operations is not a problem in practice. The deque needs to allocate a new page for every M elements inserted, but it never needs to reallocate the existing pages and move every existing element, so it avoids the most expensive operation a vector performs; exponential growth of the element storage is therefore not needed to keep deque operations cheap.
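This is even observable through the interface: the standard guarantees that insertion at either end of a deque invalidates iterators but never references or pointers to existing elements, which is only possible because the elements themselves are never moved:

#include <cassert>
#include <deque>

int main() {
    std::deque<int> d;
    d.push_back(42);
    int* p = &d.front();        // remember the first element's address

    for (int i = 0; i < 100000; ++i)
        d.push_back(i);         // may grow/reallocate the map of pointers...

    assert(p == &d.front());    // ...but the elements never move
}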
Amortized constant complexity for insertion at the front is (typically) achieved by assigning from the middle of the vector of pointers outwards.
For example, if we have a deque with 3 nodes, each holding up to 4 elements, we might assign its vector of 8 pointers thus:
[ nil, nil, nil, N1*, N2*, N3*, nil, nil ]
Here N1 and N3 might themselves be partially filled:
N1: [ nil, nil, nil, 1 ]
N2: [ 2, 3, 4, 5 ]
N3: [ 6, 7, nil, nil ]
As we push_front onto the deque, first N1 is filled towards the front, then additional nodes are allocated and added to the vector of pointers. Once the vector of pointers is filled at the front, any additional push_front results in an exponential expansion, with the additional space assigned at the front:
[ N1*, N2*, N3*, N4*, N5*, N6*, nil, nil ]
  |    `---------------------------------------\
  `---------------------------------------v    v
[ nil, nil, nil, nil, nil, nil, nil, N0*, N1*, N2*, N3*, N4*, N5*, N6*, nil, nil ]
This exponential growth policy achieves O(1) amortized complexity for both deque::push_front and deque::push_back, for the same reason that vector::push_back has O(1) amortized complexity.
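A toy version of that policy is sketched below. It is an illustration only, not a real implementation: the block size, growth factor, and all names (ToyDeque, front_pos, and so on) are invented, it holds plain ints, and it supports only push_front and indexing.

#include <cstddef>
#include <vector>

// Toy deque of ints demonstrating the map-growth policy described above:
// blocks of M elements, plus a vector of block pointers ("the map") that
// is doubled and recentred when push_front runs out of room at the front.
struct ToyDeque {
    static constexpr std::size_t M = 4;
    std::vector<int*> map;   // block pointers (nullptr = not yet allocated)
    std::size_t front_pos;   // global index of the first element

    ToyDeque() : map(8, nullptr), front_pos(4 * M) {} // start in the middle
    ~ToyDeque() { for (int* b : map) delete[] b; }

    void push_front(int value) {
        if (front_pos == 0) {                 // map full at the front:
            std::vector<int*> bigger(2 * map.size(), nullptr);
            std::size_t shift = map.size();   // leave free slots in front
            for (std::size_t i = 0; i < map.size(); ++i)
                bigger[shift + i] = map[i];   // copy pointers only, no elements
            map.swap(bigger);
            front_pos += shift * M;
        }
        --front_pos;
        std::size_t block = front_pos / M;
        if (!map[block]) map[block] = new int[M]; // allocate a fresh block
        map[block][front_pos % M] = value;
    }

    int& operator[](std::size_t i) {          // random access stays O(1)
        std::size_t pos = front_pos + i;
        return map[pos / M][pos % M];
    }
};

Each doubling copies map.size() pointers, but it can only occur after roughly map.size() * M / 2 intervening push_fronts, so the pointer copies amortize to O(1) per insertion, just as a vector's element copies do.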