We're trying to utilise Ramda to avoid some brute-force programming. We have an array of objects that can look like this:
[
  {id: "001", failedReason: [1000]},
  {id: "001", failedReason: [1001]},
  {id: "001", failedReason: [1002]},
  {id: "001", failedReason: [1000]},
  {id: "001", failedReason: [1000, 1003]},
  {id: "002", failedReason: [1000]}
]
and we'd like to transform it so that it looks like this:
[
  {id: "001", failedReason: [1000, 1001, 1002, 1003]},
  {id: "002", failedReason: [1000]}
]
Essentially, it reduces the array based on the id and accumulates a single failedReason array containing all of the failedReasons for that id, with duplicates removed. We were hoping some Ramda magic might do this, but so far we haven't found a nice way. Any ideas would be appreciated.
I can't easily test it on my phone, but something like this should work:
pipe(
  groupBy(prop('id')),
  map(pluck('failedReason')),
  map(flatten),
  map(uniq)
)
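For the sample input this yields an object keyed by id (a sketch of the intermediate value, not yet the array of records you asked for):

// {
//   "001": [1000, 1001, 1002, 1003],
//   "002": [1000]
// }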
Update
I just got around to looking at this on a computer and noticed that the output wasn't quite what you were looking for: groupBy returns an object keyed by id, not an array. Adding two more steps fixes it:
pipe(
  groupBy(prop('id')),
  map(pluck('failedReason')),
  map(flatten),
  map(uniq),
  toPairs,
  map(zipObj(['id', 'failedReason']))
)
You can see this in action on the Ramda REPL.
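For completeness, here's how the pipeline might be called outside the REPL, with everything qualified on the R namespace and assuming the input array is bound to data (the name consolidate is just for illustration):

const consolidate = R.pipe(
  R.groupBy(R.prop('id')),
  R.map(R.pluck('failedReason')),
  R.map(R.flatten),
  R.map(R.uniq),
  R.toPairs,
  R.map(R.zipObj(['id', 'failedReason']))
);

consolidate(data);
// => [{"id": "001", "failedReason": [1000, 1001, 1002, 1003]},
//     {"id": "002", "failedReason": [1000]}]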
You could define a wrapper type which satisfies the requirements of Monoid. You could then simply use R.concat to combine values of the type:
// Thing :: { id :: String, failedReason :: Array Number } -> Thing
function Thing(record) {
  if (!(this instanceof Thing)) return new Thing(record);
  // uniq here means duplicates are removed every time a Thing is built
  this.value = {id: record.id, failedReason: R.uniq(record.failedReason)};
}

// Thing.id :: Thing -> String
Thing.id = function(thing) {
  return thing.value.id;
};

// Thing.failedReason :: Thing -> Array Number
Thing.failedReason = function(thing) {
  return thing.value.failedReason;
};

// Thing.empty :: () -> Thing
Thing.empty = function() {
  return Thing({id: '', failedReason: []});
};

// Thing#concat :: Thing ~> Thing -> Thing
Thing.prototype.concat = function(other) {
  return Thing({
    id: Thing.id(this) || Thing.id(other),
    failedReason: R.concat(Thing.failedReason(this), Thing.failedReason(other))
  });
};
// f :: Array { id :: String, failedReason :: Array Number }
//   -> Array { id :: String, failedReason :: Array Number }
var f =
  R.pipe(R.map(Thing),
         R.groupBy(Thing.id),
         R.map(R.reduce(R.concat, Thing.empty())),
         R.map(R.prop('value')),
         R.values);
f([
  {id: '001', failedReason: [1000]},
  {id: '001', failedReason: [1001]},
  {id: '001', failedReason: [1002]},
  {id: '001', failedReason: [1000]},
  {id: '001', failedReason: [1000, 1003]},
  {id: '002', failedReason: [1000]}
]);
// => [{"id": "001", "failedReason": [1000, 1001, 1002, 1003]},
//     {"id": "002", "failedReason": [1000]}]
I'm sure you could give the type a better name than Thing. ;)
For fun, and mainly to explore the advantages of Ramda, I tried to come up with a "one-liner" to do the same data conversion in plain ES6... I now fully appreciate the simplicity of Scott's answer :D

I thought I'd share my result because it nicely illustrates what a clear API can do in terms of readability. The chain of piped maps, flatten and uniq is so much easier to grasp...

I'm using Map for grouping and Set for filtering duplicate failedReasons.
const data = [
  {id: "001", failedReason: [1000]},
  {id: "001", failedReason: [1001]},
  {id: "001", failedReason: [1002]},
  {id: "001", failedReason: [1000]},
  {id: "001", failedReason: [1000, 1003]},
  {id: "002", failedReason: [1000]}
];
const converted = Array.from(data
  // group by id, concatenating the failedReason arrays per id
  .reduce((map, d) => map.set(
    d.id, (map.get(d.id) || []).concat(d.failedReason)
  ), new Map())
  .entries())
  // turn each [id, reasons] entry into a record, deduplicating via Set
  .map(e => ({ id: e[0], failedReason: Array.from(new Set(e[1])) }));

console.log(converted);
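For reference, this is what the grouping Map holds after the reduce step, before the Set removes duplicates (sketched for the sample data):

// Map {
//   "001" => [1000, 1001, 1002, 1000, 1000, 1003],
//   "002" => [1000]
// }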
If at least the MapIterator and SetIterators had a .map or even a .toArray method, the code would've been a bit cleaner.
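Spread syntax does help a little, since both the Map and the Set can be expanded inline (a sketch of the same approach; converted2 is my name for it):

const converted2 = [...data
  .reduce((map, d) => map.set(
    d.id, (map.get(d.id) || []).concat(d.failedReason)
  ), new Map())]
  .map(([id, reasons]) => ({ id, failedReason: [...new Set(reasons)] }));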