While updating for loops to for-each loops in our application, I came across a lot of these "patterns":
for (int i = 0, n = a.length; i < n; i++) { ... }
instead of
for (int i = 0; i < a.length; i++) { ... }
I can see that you gain something for collections because you don't need to call the size() method on each iteration (see the sketch below). But what about arrays?
So the question arose: is array.length more expensive than a regular variable?
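For reference, the collection case mentioned in the question looks roughly like this (a minimal sketch; the class name, list contents, and loop bodies are made up for illustration):

import java.util.Arrays;
import java.util.List;

public class SizeCachingDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("alpha", "beta", "gamma");

        // size() is a method call, so the cached variant stores it in a local once:
        for (int i = 0, n = names.size(); i < n; i++) {
            System.out.println(names.get(i));
        }

        // The for-each form sidesteps the question entirely:
        for (String name : names) {
            System.out.println(name);
        }
    }
}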
No, a call to array.length is O(1), i.e. a constant-time operation. Since .length is (or acts like) a public final member of the array, it is no slower to access than a local variable. (It is very different from a call to a method like size().)
A modern JIT compiler is likely to optimize the call to .length right out anyway.
You can confirm this either by looking at the source code of the JIT compiler in OpenJDK, or by getting the JVM to dump the JIT-compiled native code and examining it.
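For example, on a HotSpot JVM (the class name here is hypothetical, and -XX:+PrintAssembly additionally requires the hsdis disassembler plugin to be installed), diagnostic flags along these lines show what the JIT produces:

java -XX:+PrintCompilation LoopBenchmark
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly LoopBenchmark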
Note that there may be cases where the JIT compiler can't do this; e.g. if the array is reached through a field or variable that code called inside the loop might reassign, the length read cannot safely be hoisted out of the loop.
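As a hedged illustration of such a case (the class, field, and callback below are hypothetical), a length read generally cannot be hoisted when code called inside the loop might replace the array:

import java.util.Arrays;

class Accumulator {
    private int[] buffer = new int[1024];

    // May be invoked (indirectly) from inside the loop below.
    void grow() {
        buffer = Arrays.copyOf(buffer, buffer.length * 2);
    }

    long sum(Runnable callback) {
        long total = 0;
        // callback.run() is an opaque call that might end up calling grow() and
        // replacing 'buffer', so the JIT generally cannot treat buffer.length as
        // loop-invariant and may re-read the field and its length on each iteration.
        for (int i = 0; i < buffer.length; i++) {
            total += buffer[i];
            callback.run();
        }
        return total;
    }
}

Copying the array reference into a local first (int[] a = buffer;) restores the easy case, since a local cannot be changed by the callback.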
I had a bit of time over lunch:
public static void main(String[] args) {
    final int[] a = new int[250000000];
    long t;
    for (int j = 0; j < 10; j++) {
        // Loop with the length cached in a local variable.
        t = System.currentTimeMillis();
        for (int i = 0, n = a.length; i < n; i++) {
            int x = a[i];
        }
        System.out.println("n = a.length: " + (System.currentTimeMillis() - t));

        // Loop that reads a.length on every iteration.
        t = System.currentTimeMillis();
        for (int i = 0; i < a.length; i++) {
            int x = a[i];
        }
        System.out.println("i < a.length: " + (System.currentTimeMillis() - t));
    }
}
The results:
n = a.length: 672
i < a.length: 516
n = a.length: 640
i < a.length: 516
n = a.length: 656
i < a.length: 516
n = a.length: 656
i < a.length: 516
n = a.length: 640
i < a.length: 532
n = a.length: 640
i < a.length: 531
n = a.length: 641
i < a.length: 516
n = a.length: 656
i < a.length: 531
n = a.length: 656
i < a.length: 516
n = a.length: 656
i < a.length: 516
Notes:
- n = a.length shows as being consistently slower than i < a.length (roughly 650 ms vs. 520 ms), probably due to garbage collection(?).
- The array size of 250000000 could not be made much larger because I got an OutOfMemoryError at 270000000.
The point is, and it is the one everyone else has been making: even with an array large enough to nearly exhaust memory, you still don't see a significant difference in speed between the two alternatives. Spend your development time on things that actually matter.