In a typical dynamic array implementation, we double the capacity when there is no room for a new element. In that case, a push that triggers the resize takes O(n) time.
What is the complexity of a push if, instead of doubling, we grow the stack from n to n + k, i.e., by a constant k?
My approach is as follows:
Assuming the stack starts empty and k = 10, we grow the stack to 10 elements. Once those 10 are full, we grow it to 20 elements, and so on.
The total time spent copying elements around is proportional to 10 + 20 + 30 + ...
So the average (amortized) time for a push must be on the order of O(n)?
Is my approach correct?
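For concreteness, here is a small sketch that tallies those copies, assuming k = 10 and that every resize copies all existing elements into the new storage:

```c
#include <stdio.h>

/* Count element copies when the capacity grows by a fixed increment k.
 * (Illustrative sketch; assumes every resize copies all existing elements.) */
int main(void) {
    const long k = 10;
    long capacity = 0, copies = 0;

    for (long pushes = 1; pushes <= 1000; pushes++) {
        if (pushes > capacity) {   /* no room: grow by k and copy everything */
            copies += capacity;
            capacity += k;
        }
        if (pushes % 250 == 0)
            printf("pushes=%ld  total copies=%ld  copies per push=%.1f\n",
                   pushes, copies, (double)copies / pushes);
    }
    return 0;
}
```

The copies-per-push figure keeps growing with the number of pushes, which is what suggests O(n) to me.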
Your computation is not correct. There is a reason why a typical implementation of a dynamic array doubles its size (or, more generally, increases its size by some percentage x).
If you grow the size from 1 to n = 2^i, the total number of elements you copy is 1 + 2 + 4 + 8 + 16 + ... + 2^i. This sum is smaller than 2^(i+1), so the total time is O(n) and the amortized time of one insertion is O(1). If n is not a power of two, the computation is slightly more complicated, but the result is the same.
On the other hand, if you increase the size by a constant k, from 0 to n = i * k, the total number of elements you copy is k + 2k + 3k + ... + ik. This sum is (k + ik) * i / 2 = O(n^2), so the amortized complexity of one insertion is O(n).
Because of this, your solution is much worse than doubling the size.
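To see the difference numerically, here is a rough sketch (my own illustration, with k = 10 assumed) that counts the element copies performed by each growth policy for the same number of pushes:

```c
#include <stdio.h>

/* Illustrative comparison: total element copies for doubling vs. growing
 * by a fixed increment k, for the same number of pushes. */
static long long copies_doubling(long long pushes) {
    long long capacity = 1, copies = 0;
    for (long long i = 1; i <= pushes; i++)
        if (i > capacity) { copies += capacity; capacity *= 2; }
    return copies;
}

static long long copies_fixed_increment(long long pushes, long long k) {
    long long capacity = 0, copies = 0;
    for (long long i = 1; i <= pushes; i++)
        if (i > capacity) { copies += capacity; capacity += k; }
    return copies;
}

int main(void) {
    const long long k = 10;
    for (long long n = 1000; n <= 1000000; n *= 10)
        printf("n=%8lld  doubling: %8lld copies (< 2n)   +k: %14lld copies (~ n^2 / 2k)\n",
               n, copies_doubling(n), copies_fixed_increment(n, k));
    return 0;
}
```

The doubling column stays proportional to n, while the fixed-increment column grows roughly quadratically, matching the analysis above.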
Depending on how you increase the storage (and assuming k is small enough), it could be O(1) in all cases, or something else.
By 'how' I mean the following. In managed languages, one would create a new array of size n + k, copy the old array (of size n) into it, and finally replace the reference to the old array with the new one. That is O(1) for the array creation (assuming allocation is constant time) + O(n) for the data copy + O(1) for the reference change; we ignore the garbage collector because its cost is very implementation-dependent.
In non-managed languages, one could use something like realloc, so that the old elements are preserved without any copying because the new storage occupies the same memory region, only extended to the required number of items. In that case it's O(1) in all cases. Note, however, that extending the storage in place is sometimes impossible due to memory fragmentation, in which case an approach like the managed-language one is taken (done implicitly by the realloc implementation). For that path, the complexity is the same as in managed languages: O(n).
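As a rough sketch of that non-managed approach (the struct and the GROW_BY constant are just illustrative names, with k = 10 assumed):

```c
#include <stdlib.h>

/* Rough sketch: a push that grows the storage by a fixed increment k using
 * realloc. realloc may extend the block in place (no copying) or, if that is
 * impossible, allocate a new block and copy the old contents, as described above. */
enum { GROW_BY = 10 };   /* the constant k */

struct int_stack {
    int    *data;
    size_t  size;
    size_t  capacity;
};

int stack_push(struct int_stack *s, int value) {
    if (s->size == s->capacity) {
        size_t new_capacity = s->capacity + GROW_BY;
        int *p = realloc(s->data, new_capacity * sizeof *p);
        if (p == NULL)
            return -1;                 /* out of memory; old data still valid */
        s->data = p;
        s->capacity = new_capacity;
    }
    s->data[s->size++] = value;        /* the push itself is O(1) */
    return 0;
}
```

Whether a given resizing push ends up O(1) or O(n) therefore depends on whether the allocator could extend the block in place.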
That's the answer from a practical point of view; I hope you can work out the analytical answer from it. Good luck.