Big O of JavaScript arrays

NOTE: While this answer was correct in 2012, engines now use very different internal representations for both objects and arrays, so it may or may not still hold.

In contrast to most languages, which implement arrays with, well, arrays, in JavaScript arrays are objects, and values are stored in a hashtable, just like regular object values. As such:

  • Access - O(1)
  • Appending - Amortized O(1) (sometimes resizing the hashtable is required; usually only insertion is required)
  • Prepending - O(n) via unshift, since it requires reassigning all the indexes
  • Insertion - Amortized O(1) if no value exists at that index yet; O(n) if existing values have to be shifted (e.g. using splice).
  • Deletion - Amortized O(1) to remove a value, O(n) if you want to reassign indices via splice.
  • Swapping - O(1)

In general, setting or unsetting any key in a hashtable is amortized O(1), and the same goes for arrays, regardless of the index. Any operation that requires renumbering existing values is O(n), simply because you have to update all the affected values (see the sketch below).
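
A minimal sketch of those operations in plain JavaScript; the complexity comments restate the list above, and the actual cost always depends on the engine's internal representation:

```js
const arr = ['a', 'b', 'c', 'd'];

arr[2];                              // access by index - O(1)
arr.push('e');                       // append - amortized O(1)
arr.unshift('z');                    // prepend - O(n), every existing index is renumbered
arr.splice(2, 0, 'q');               // insert in the middle - O(n), later indices are shifted
arr.splice(2, 1);                    // delete with renumbering - O(n)
[arr[0], arr[1]] = [arr[1], arr[0]]; // swap two elements - O(1)
```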


Guarantee

There is no specified time-complexity guarantee for any array operation. How arrays perform depends on the underlying data structure the engine chooses. Engines may also use different representations and switch between them based on certain heuristics; the initial array size may or may not be one such heuristic.

Reality

For example, V8 currently uses both hashtables and array lists to represent arrays. It also has several different representations for objects, so arrays and objects cannot be compared directly. Therefore array access is always better than O(n), and might even be as fast as a C++ array access. Appending is O(1), unless you reach the size of the backing data structure and it has to be grown (which is O(n)). Prepending is worse. Deletion can be even worse if you do something like delete array[index] (don't!), as that may force the engine to change its representation.
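
To make the delete pitfall concrete, here is a small illustration; whether the engine actually switches to a slower, dictionary-like representation is an assumption about typical engine behaviour, but the hole left in the array is observable either way:

```js
const a = [1, 2, 3, 4];
delete a[1];             // leaves a hole; may force a slower representation
console.log(1 in a);     // false - index 1 no longer exists
console.log(a.length);   // 4  - the length is unchanged

const b = [1, 2, 3, 4];
b.splice(1, 1);          // actually removes the element and renumbers the rest
console.log(b.length);   // 3
```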

Advice

Use arrays for numeric data structures. That's what they are meant for, and that's what engines will optimize them for. Avoid sparse arrays (or, if you have to use them, expect worse performance). Avoid arrays with mixed data types (as that makes the internal representation more complex).
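
A short sketch of what that advice rules out; treating the last two cases as slower is an assumption about typical engine heuristics, not a specified guarantee:

```js
// Dense, single-type array: the case engines optimize for.
const dense = [1, 2, 3, 4, 5];

// Sparse array: writing far past the end leaves thousands of holes.
const sparse = [];
sparse[10000] = 1;

// Mixed-type array: forces a more general internal representation.
const mixed = [1, 'two', { three: 3 }, null];
```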

If you really want to optimize for a certain engine (and version), check its source code for the definitive answer.