I am currently working with a List<Map<String, Object>>
where I am trying to group across various keys within the map.
This seems to work nicely using Java 8 Streams:
Map<Object, Map<Object, List<Map<String, Object>>>> collect =
    list.stream()
        .collect(Collectors.groupingBy(
            item -> item.get("key1"),
            Collectors.groupingBy(item -> item.get("key2"))
        ));
As expected, this gives me a Map<Object, Map<Object, List<Map<String, Object>>>>, which works well when a group can contain more than one item.
I have various cases where the grouping will always result in a single item in the lowest-level list, for example:
List of Rows
{
[reference="PersonX", firstname="Person", dob="test", lastname="x"],
[reference="JohnBartlett", firstname="John", dob="test", lastname="Bartlett"]
}
Grouped by reference
Currently - grouped into a List containing a single Map<String,Object>:
[PersonX, { [reference="PersonX", firstname="Person", dob="test", lastname="x"]}],
[JohnBartlett, { [reference="JohnBartlett", firstname="John", dob="test", lastname="Bartlett"]}]
Preference - no List, just a single Map<String,Object>:
[PersonX, [reference="PersonX", firstname="Person", dob="test", lastname="x"]],
[JohnBartlett, [reference="JohnBartlett", firstname="John", dob="test", lastname="Bartlett"]]
Is there a way within the streams to force the output for these instances to be Map<Object, Map<Object, Map<String, Object>>> - that is, a single Map<String,Object> rather than a List of them?
Any help would be greatly appreciated.
If I understood correctly, then for the cases where you are sure there is a single item per group, you should just replace the inner Collectors.groupingBy with Collectors.toMap:
.collect(Collectors.groupingBy(
        item -> item.get("key1"),
        Collectors.toMap(item -> item.get("key2"), Function.identity())
));
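For context, a minimal self-contained sketch of the whole pipeline with toMap as the downstream collector; the sample rows, the row(...) helper, and the choice of "reference" and "firstname" as grouping keys are just stand-ins for the data in the question:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class GroupToSingleMap {

    public static void main(String[] args) {
        List<Map<String, Object>> list = new ArrayList<>();
        list.add(row("PersonX", "Person", "x"));
        list.add(row("JohnBartlett", "John", "Bartlett"));

        // Using toMap as the downstream collector yields one Map<String, Object>
        // per (reference, firstname) pair instead of a one-element List.
        Map<Object, Map<Object, Map<String, Object>>> collect =
                list.stream()
                    .collect(Collectors.groupingBy(
                            item -> item.get("reference"),
                            Collectors.toMap(item -> item.get("firstname"), Function.identity())));

        System.out.println(collect);
    }

    // Hypothetical helper that builds a row like the ones in the question.
    private static Map<String, Object> row(String reference, String firstname, String lastname) {
        Map<String, Object> m = new HashMap<>();
        m.put("reference", reference);
        m.put("firstname", firstname);
        m.put("lastname", lastname);
        m.put("dob", "test");
        return m;
    }
}

Note that Collectors.toMap throws an IllegalStateException if two rows produce the same key, which is exactly the safeguard you want when you expect a single item per group.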
You can even provide a third argument, a BinaryOperator, to merge entries that map to the same key (in case you need to), as sketched below.
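For instance, a sketch of the three-argument toMap, assuming you simply want to keep the first row whenever two rows share the same "key2" value:

.collect(Collectors.groupingBy(
        item -> item.get("key1"),
        Collectors.toMap(
                item -> item.get("key2"),
                Function.identity(),
                (first, second) -> first)   // merge function: keep the first row on a key collision
));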