The following code fragment is part of a method that gets a directory listing, calls an extract method on each file, and serializes the resulting drug object to XML.
try (Stream<Path> paths = Files.list(infoDir)) {
    paths
        .parallel()
        .map(this::extract)
        .forEachOrdered(drug -> {
            try {
                marshaller.write(drug);
            } catch (JAXBException ex) {
                ex.printStackTrace();
            }
        });
}
Here is the exact same code doing the exact same thing, but using a plain .list() call to get the directory listing and calling .parallelStream() on the resulting list.
Arrays.asList(infoDir.toFile().list())
    .parallelStream()
    .map(f -> infoDir.resolve(f))
    .map(this::extract)
    .forEachOrdered(drug -> {
        try {
            marshaller.write(drug);
        } catch (JAXBException ex) {
            ex.printStackTrace();
        }
    });
My machine is a quad-core MacBook Pro running Java 1.8.0_60 (build 1.8.0_60-b27).
I am processing ~7000 files. The averages of 3 runs:
First version:
With .parallel(): 20 seconds. Without .parallel(): 41 seconds.
Second version:
With .parallelStream(): 12 seconds. With .stream(): 41 seconds.
Those 8 seconds in parallel mode seem like an enormous difference, given that the extract method (which reads from the stream and does all the heavy work) and the write call (which does the final writes) are unchanged.
The problem is that the current implementation of the Stream API, along with the current implementation of IteratorSpliterator for sources of unknown size, splits such sources into parallel tasks very badly. You were lucky to have more than 1024 files, otherwise you would have had no parallelization benefit at all. The current Stream API implementation takes into account the estimateSize() value returned from the Spliterator. The IteratorSpliterator of unknown size returns Long.MAX_VALUE before splitting, and its suffix always returns Long.MAX_VALUE as well. Its splitting strategy is the following:
- Read the next batch of elements from the underlying iterator into an array. The batch size starts at 1024 and grows by 1024 on every split, until the MAX_BATCH size is reached (which is 33554432 elements).
- Return an ArraySpliterator iterating over the created array as the prefix, leaving itself as the suffix.

Suppose you have 7000 files. The Stream API asks for the estimated size; IteratorSpliterator returns Long.MAX_VALUE. Ok, the Stream API asks the IteratorSpliterator to split; it collects 1024 elements from the underlying DirectoryStream into an array and splits into an ArraySpliterator (with estimated size 1024) and itself (with an estimated size that is still Long.MAX_VALUE). As Long.MAX_VALUE is much, much more than 1024, the Stream API decides to keep splitting the bigger part without even trying to split the smaller part. So the overall splitting tree goes like this:
                       IteratorSpliterator (est. MAX_VALUE elements)
                            |                     |
ArraySpliterator (est. 1024 elements)    IteratorSpliterator (est. MAX_VALUE elements)
                                              |            |
                            /-----------------/            |
                            |                               |
ArraySpliterator (est. 2048 elements)    IteratorSpliterator (est. MAX_VALUE elements)
                                              |            |
                            /-----------------/            |
                            |                               |
ArraySpliterator (est. 3072 elements)    IteratorSpliterator (est. MAX_VALUE elements)
                                              |            |
                            /-----------------/            |
                            |                               |
ArraySpliterator (est. 856 elements)     IteratorSpliterator (est. MAX_VALUE elements)
                                                            |
                                         (split returns null: refuses to split anymore)
So after that you have five parallel tasks to be executed, actually containing 1024, 2048, 3072, 856 and 0 elements. Note that even though the last chunk has 0 elements, it still reports an estimated Long.MAX_VALUE elements, so the Stream API will send it to the ForkJoinPool as well. The bad thing is that the Stream API considers further splitting of the first four tasks useless, as their estimated sizes are much smaller. So what you get is a very uneven splitting of the input which utilizes at most four CPU cores (even if you have many more). If your per-element processing takes roughly the same time for every element, then the whole process waits for the biggest part (3072 elements) to complete. So the maximum speedup you can get is 7000/3072 = 2.28x. Thus if sequential processing takes 41 seconds, the parallel stream will take around 41/2.28 = 18 seconds (which is close to your actual numbers).
Your work-around solution is completely fine. Note that when using Files.list().parallel(), all the input Path elements end up stored in memory anyway (inside ArraySpliterator objects), so you will not waste more memory by manually dumping them into a List. Array-backed list implementations like ArrayList (which is what Collectors.toList() currently creates) can split evenly without any problems, which results in additional speed-up.
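For completeness, a minimal sketch of that kind of work-around, assuming the same infoDir, extract and marshaller as in your code (collect the listing first, then run the parallel stream over the array-backed list):

try (Stream<Path> paths = Files.list(infoDir)) {
    // Materialize the listing: Collectors.toList() currently yields an ArrayList,
    // whose spliterator knows its exact size and splits into even halves.
    List<Path> files = paths.collect(Collectors.toList());
    files.parallelStream()
         .map(this::extract)
         .forEachOrdered(drug -> {
             try {
                 marshaller.write(drug);
             } catch (JAXBException ex) {
                 ex.printStackTrace();
             }
         });
}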
Why is such a case not optimized? Of course it's not an impossible problem (though the implementation could be quite tricky). It seems that it's not a high-priority problem for the JDK developers. There have been several discussions on this topic on the mailing lists. You may read Paul Sandoz's message here where he comments on my optimization effort.