I ran two test cases (multiple times), and it seems that iteratively adding values to my lists is faster than using addAll
String[] rawArgs = new String[]{"one", "two", "three", "four", "five"};
// More efficient - 894 ns
List<String> list = new ArrayList<>();
for (String s : rawArgs) {
list.add(s);
}
// Less efficient - 1340 ns
List<String> list = new ArrayList<>();
list.addAll(Arrays.asList(rawArgs));
I get notes through my IDE, as well as other people, that the latter way is the "proper" way to convert an array to that data structure. But if it's actually slower than the first, what advantage is there (some obscure type safety?), and for what reason should I be using the second?
Edit - Code benchmarking:
JVM warm-up: recreate the main class object first:
public static void main(String[] args) {
Internet test;
for (int i = 0; i < 15; i++) {
test = new Internet(); // JVM warmup
}
test = new Internet();
test.printOutput();
}
I simply take the system nanotime at both ends of the operation:
start = System.nanoTime();
/* function */
end = System.nanoTime();
result = end - start;
Within the test case, there are individual fields for each start/end, and results are calculated after the operation (the JVM is also preemptively warmed up by cycling instances before running the tests).
Edit 2 - Benchmarking with larger collections
After some testing (using Integer instead, since I'm not going to hand-write all the numbers), it appears that with larger collections the manual loop is indeed slower:
With 100 numbers:
First operation: 18759ns
Second operation: 2680ns
Total operation: 21439ns
In summary, Collections.addAll could be faster when repeatedly adding only a few elements to a collection, but I doubt that this case would ever be a performance bottleneck.
The addAll() method of the java.util.Collections class is used to add all of the specified elements to the specified collection. Elements to be added may be specified individually or as an array. The behavior of this convenience method is identical to that of c.addAll(Arrays.asList(elements)).
Some Collection implementations, for example LinkedList, convert the passed collection back to an array before adding the elements, causing additional overhead.
The addAll(Collection collection) method of the java.util.Collection interface adds all elements of the given collection to this collection. It returns a boolean indicating whether this collection changed as a result of the call.
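The two forms described above can be seen side by side; a minimal sketch (class and variable names are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class AddAllDemo {
    public static void main(String[] args) {
        // Collections.addAll: elements specified individually via varargs
        List<String> first = new ArrayList<>();
        Collections.addAll(first, "one", "two", "three");

        // Collection.addAll: elements passed as another collection
        List<String> second = new ArrayList<>();
        second.addAll(Arrays.asList("one", "two", "three"));

        System.out.println(first.equals(second)); // prints true
    }
}
```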
The for-each loop resolves to the equivalent of
for (int i = 0; i < rawArgs.length; i++) {
list.add(rawArgs[i]);
}
...whereas the implementation of ArrayList.addAll actually calls toArray(), so it ends up calling Arrays.asList(rawArgs).toArray(), which makes a redundant copy. That said, it also does a System.arraycopy, which may end up making it faster than the for loop -- it could go either way, and according to some of the other benchmarks, it may actually go differently in different contexts.
The Collections.addAll(Collection<E>, E...) static method is actually intended to address this specific issue and be faster than addAll(Arrays.asList(...)), as explicitly stated in its Javadoc.
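Because the static method takes E..., an array can be passed to it directly, skipping the Arrays.asList wrapper (and its redundant toArray() copy) entirely; a minimal sketch:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class VarargsArray {
    public static void main(String[] args) {
        String[] rawArgs = {"one", "two", "three", "four", "five"};

        // The E... parameter accepts the array directly --
        // no intermediate List wrapper is created
        List<String> direct = new ArrayList<>();
        Collections.addAll(direct, rawArgs);

        List<String> viaAsList = new ArrayList<>(Arrays.asList(rawArgs));
        System.out.println(direct.equals(viaAsList)); // prints true
    }
}
```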
I think Collection.addAll() is faster for at least two reasons:
1. For ArrayList, it calls ensureCapacityInternal(size + numNew) once, with the necessary capacity up front. If the added list is big, a manual add would trigger that call (and potential resizes) many times.
2. It uses System.arraycopy(a, 0, elementData, size, numNew) for copying, which is a native, high-performance method.
Here's a benchmark, with slightly more elements in the array. The results clearly show that the addAll approach wins by a margin:
public static void main(String[] args) {
String[] rawArgs = new String[]{"one", "two", "three", "four", "five",
"one", "two", "three", "four", "five",
"one", "two", "three", "four", "five",
"one", "two", "three", "four", "five",
"one", "two", "three", "four", "five",
"one", "two", "three", "four", "five",
"one", "two", "three", "four", "five",
"one", "two", "three", "four", "five"};
/******** WARM UP JVM *********/
for (int i = 0; i < 1000; ++i) {
arrayToListLoop(rawArgs);
}
for (int i = 0; i < 1000; ++i) {
arrayToListAddAll(rawArgs);
}
/** Actual measurement **/
long start = System.nanoTime();
for (int i = 0; i < 1000; ++i) {
arrayToListLoop(rawArgs);
}
long end = System.nanoTime();
System.out.println((end - start) / 1000);
start = System.nanoTime();
for (int i = 0; i < 1000; ++i) {
arrayToListAddAll(rawArgs);
}
end = System.nanoTime();
System.out.println((end - start) / 1000);
}
public static void arrayToListLoop(String[] arr) {
List<String> list = new ArrayList<>();
for (String s : arr) {
list.add(s);
}
}
public static void arrayToListAddAll(String[] arr) {
List<String> list = new ArrayList<>();
list.addAll(Arrays.asList(arr));
}
Results:
First Run:
2280
812
Second Run:
1336
613
Third Run:
2088
751
I tried repeating this experiment with a large number of iterations and got different results from yours:
public static void main(String[] args) {
String[] rawArgs = new String[]{"one", "two", "three", "four", "five"};
long start = System.nanoTime();
for (int i = 0; i < 10000000; i++) {
List<String> list = new ArrayList<>();
for (String s : rawArgs) {
list.add(s);
}
}
long end = System.nanoTime();
System.out.println("add(): " + (end - start));
start = System.nanoTime();
for (int i = 0; i < 10000000; i++) {
List<String> list = new ArrayList<>();
list.addAll(Arrays.asList(rawArgs));
}
end = System.nanoTime();
System.out.println("addAll(): " + (end - start));
}
The results:
add(): 310726674
addAll(): 233785566
The exact numbers vary, but addAll
is always faster on my JVM (Sun JDK 1.7 running on Windows). The order with which the two are run makes no difference either (I tried both ways) so it's not to do with warming up. If you increase the number of elements, the results are even more dramatic, possibly because the array behind the ArrayList has to be resized (hence an additional arraycopy).