 

Efficiency of growing a dynamic array by a fixed constant each time?

So when a dynamic array is doubled in size each time it fills up, I understand how the time complexity for expanding is O(n), n being the number of elements. What about if the array is instead copied and moved to a new array that is only 1 size bigger when it is full (instead of doubling)? When we resize by some fixed constant C, is the time complexity still O(n)?

asked Oct 02 '13 by Cassus

People also ask

What is the time complexity of appending to a dynamic array?

When we increase the size of the array by 1, the elements are copied from the previous array to the new one and then the new element is appended, which takes O(n).

How can you increase the size of a dynamically allocated array?

Allocate a new[] array and store it in a temporary pointer. Copy over the previous values that you want to keep. Delete[] the old array. Change the member variables, ptr and size to point to the new array and hold the new size.
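The steps above can be sketched in C++. This is a minimal illustration, not code from the answer; the struct and its member names (`ptr`, `size`) are hypothetical, chosen to match the answer's wording:

```cpp
#include <algorithm>
#include <cstddef>

// Hypothetical wrapper around a dynamically allocated int array.
struct IntArray {
    int* ptr = nullptr;
    std::size_t size = 0;

    void grow(std::size_t newSize) {
        int* tmp = new int[newSize];      // allocate a new[] array via a temporary pointer
        std::copy(ptr, ptr + size, tmp);  // copy over the previous values to keep
        delete[] ptr;                     // delete[] the old array
        ptr = tmp;                        // point the member at the new array
        size = newSize;                   // hold the new size
    }
};
```

Using a temporary pointer matters: if `new[]` throws, `ptr` still points at the old, intact array.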

What is the advantage of using dynamic arrays?

Dynamic arrays benefit from many of the advantages of arrays, including good locality of reference and data cache utilization, compactness (low memory use), and random access. They usually have only a small fixed additional overhead for storing information about the size and capacity.

What is an array how we can declare a fixed length array and dynamic array?

A fixed array is an array for which the size or length is determined when the array is created and/or allocated. A dynamic array is a random access, variable-size list data structure that allows elements to be added or removed. It is supplied with standard libraries in many modern programming languages.


1 Answer

If you grow by some fixed constant C, then no, the runtime will not be O(n). Instead, it will be Θ(n^2).

To see this, think about what happens if you do a sequence of C consecutive operations. Of those operations, C - 1 of them will take time O(1) because space already exists. The last operation will take time O(n) because it needs to allocate a new, larger array and copy everything over. Therefore, any sequence of C operations will take time O(n + C).

So now consider what happens if you perform a sequence of n operations. Break those operations up into blocks of size C; there will be n / C of them. The total work required to perform those operations will be

(C + C) + (2C + C) + (3C + C) + ... + (n + C)

= C(n / C) + (C + 2C + 3C + ... + (n / C)C)

= n + C(1 + 2 + 3 + ... + n / C)

= n + C(n / C)(n / C + 1) / 2

= n + n(n / C + 1) / 2

= n + n^2 / (2C) + n / 2

= Θ(n^2)

Contrast this with the math for when you double the array size whenever you need more space: the total work done is

1 + 2 + 4 + 8 + 16 + 32 + ... + n

= 1 + 2 + 4 + 8 + ... + 2^(log n)

= 2^(log n + 1) - 1

= 2n - 1

= Θ(n)
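The gap between the two growth strategies can be checked empirically by counting element copies. This is a sketch, not part of the original answer; both counting functions are hypothetical helpers that simulate appends without actually allocating:

```cpp
#include <cstddef>

// Total element copies when appending n items, growing capacity by a
// fixed `step` each time it runs out. Each reallocation copies `size`
// existing elements.
std::size_t copiesFixedGrowth(std::size_t n, std::size_t step) {
    std::size_t cap = 0, size = 0, copies = 0;
    for (std::size_t i = 0; i < n; ++i) {
        if (size == cap) { copies += size; cap += step; }
        ++size;
    }
    return copies;
}

// Total element copies when appending n items, doubling capacity
// each time it runs out.
std::size_t copiesDoubling(std::size_t n) {
    std::size_t cap = 0, size = 0, copies = 0;
    for (std::size_t i = 0; i < n; ++i) {
        if (size == cap) { copies += size; cap = cap ? cap * 2 : 1; }
        ++size;
    }
    return copies;
}
```

For n = 1024, doubling performs 1023 copies in total, while growing by 1 performs over 500,000 — matching the Θ(n) versus Θ(n^2) analysis above.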


Transplanted from SO Documentation.

Sums of powers of 2 — 1 + 2 + 4 + 8 + 16 + …

The sum

2^0 + 2^1 + 2^2 + ... + 2^(n-1)

simplifies to 2^n - 1. This explains why the maximum value that can be stored in an unsigned 32-bit integer is 2^32 - 1.
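A quick way to sanity-check the identity is to add the powers directly and compare against 2^n - 1 (a small hypothetical helper, not from the original text):

```cpp
#include <cstdint>

// Sum 2^0 + 2^1 + ... + 2^(n-1) by direct addition.
// For n <= 63 this fits in a 64-bit unsigned integer and
// equals (1 << n) - 1.
std::uint64_t sumPowersOfTwo(unsigned n) {
    std::uint64_t sum = 0;
    for (unsigned i = 0; i < n; ++i)
        sum += std::uint64_t{1} << i;
    return sum;
}
```

For n = 32 the result is 4294967295, the maximum value of an unsigned 32-bit integer.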

answered Sep 22 '22 by templatetypedef