Given some maps, is there a one-line way to put all their entries into one map?
Ignoring issues of nulls, overwriting entries, etc., what I would like to code is:
public static <K, V> Map<K, V> reduce(Map<K, V>... maps) {
    return Arrays.stream(maps)
            .reduce(new HashMap<K, V>(), (a, b) -> a.putAll(b));
}
but this gives a compile error, because a.putAll(b) is void. If it returned this, it would work.
To work around this, I coded:
public static <K, V> Map<K, V> reduce(Map<K, V>... maps) {
    return Arrays.stream(maps)
            .reduce(new HashMap<K, V>(), (a, b) -> { a.putAll(b); return a; });
}
which compiles and works, but it's an ugly lambda; coding return a; feels redundant.
One approach is to refactor out a utility method:
public static <K, V> Map<K, V> reduce(Map<K, V> a, Map<K, V> b) {
    a.putAll(b);
    return a;
}
which cleans up the lambda:
public static <K, V> Map<K, V> reduce(Map<K, V>... maps) {
    return Arrays.stream(maps)
            .reduce(new HashMap<K, V>(), (a, b) -> reduce(a, b));
}
but now I have a somewhat useless, albeit reusable, utility method.
Is there a more elegant way to call a method on the accumulator and return it within a lambda?
Alternatively, we can use the Stream#concat() function to merge maps together. This function combines two streams into one: we pass it the entry streams of map1 and map2, then collect the stream of their combined entries, as sketched below.
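The original snippet is not shown above, so here is a minimal sketch of the idea; the method name mergeWithConcat and the last-one-wins merge function are my assumptions, not code from the original answer:

import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Hypothetical helper illustrating the Stream.concat() approach.
public static <K, V> Map<K, V> mergeWithConcat(Map<K, V> map1, Map<K, V> map2) {
    return Stream.concat(map1.entrySet().stream(), map2.entrySet().stream())
            .collect(Collectors.toMap(
                    Map.Entry::getKey,
                    Map.Entry::getValue,
                    (v1, v2) -> v2)); // on duplicate keys, keep map2's value, like putAll would
}

Note that the merge function is required here: without it, Collectors.toMap throws an IllegalStateException on duplicate keys, whereas putAll silently overwrites.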
flatMap, as its name suggests, is the combination of a map and a flatten operation: it first applies a function to each element and then flattens the resulting streams into one. Stream.map only applies a function to the stream, without flattening it. Applied to this question, each map can be flat-mapped to its entry stream and all entries collected at once; see the sketch below.
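A minimal sketch of that idea, reusing the question's varargs signature (the name mergeWithFlatMap and the last-one-wins merge function are illustrative assumptions):

import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical helper illustrating the flatMap approach.
@SafeVarargs
public static <K, V> Map<K, V> mergeWithFlatMap(Map<K, V>... maps) {
    return Arrays.stream(maps)
            .flatMap(m -> m.entrySet().stream()) // one stream of all entries, in map order
            .collect(Collectors.toMap(
                    Map.Entry::getKey,
                    Map.Entry::getValue,
                    (v1, v2) -> v2)); // later maps overwrite earlier ones, like putAll
}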
reduce works similarly to
U result = identity;
for (T element : this stream)
    result = accumulator.apply(result, element)
return result;
which means that the lambda representing accumulator.apply needs to return the result (final or intermediate one).
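For example, a reduce-based merge that honours this contract can copy on each step instead of mutating the identity (a sketch; the fresh-copy accumulator and the name mergeWithReduce are my illustration, not code from the question):

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

@SafeVarargs
public static <K, V> Map<K, V> mergeWithReduce(Map<K, V>... maps) {
    return Arrays.stream(maps)
            .reduce(new HashMap<>(), (a, b) -> {
                Map<K, V> result = new HashMap<>(a); // never mutate the shared identity
                result.putAll(b);
                return result;                       // reduce's accumulator must return the result
            });
}

This keeps the identity untouched, which matters for parallel streams, at the cost of copying on every step; the collect approach below avoids the copies.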
If you want to avoid this behaviour, use collect, which works more like
R result = supplier.get();
for (T element : this stream)
    accumulator.accept(result, element);
return result;
so the lambda representing accumulator.accept doesn't need to return any value; it only has to modify result based on element.
Example:
public static <K, V> Map<K, V> reduce(Map<K, V>... maps) {
    return Arrays.stream(maps)
            .collect(HashMap::new, Map::putAll, Map::putAll);
            //                     ^            ^
            //                     |            collects results from parallel streams (combiner)
            //                     collects results in a single thread (accumulator)
}
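A quick usage sketch (the sample data is hypothetical):

Map<String, Integer> first = new HashMap<>();
first.put("a", 1);
Map<String, Integer> second = new HashMap<>();
second.put("a", 10);
second.put("b", 2);
System.out.println(reduce(first, second)); // prints {a=10, b=2}: later maps overwrite earlier entries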