I've got a large JSON array of objects that I need to filter down based on multiple user select inputs. I'm chaining filter functions together, but I have a feeling this is probably not the most performant way to do it.
Currently I'm doing this:
var filtered = data
    .filter(function(data) { return Conditional1 })
    .filter(function(data) { return Conditional2 })
    .filter(function(data) { return Conditional3 })
    // etc...
Although (I think) each successive filter operates on a smaller 'data' array, I'm wondering if it would be better practice to do something like this instead:
var condition1 = Conditional1;
var condition2 = Conditional2;
var condition3 = Conditional3;
// etc...

var filtered = data.filter(function(data) {
    return condition1 && condition2 && condition3; // && etc...
});
I've looked into chaining multiple higher-order functions, specifically filter, but I haven't seen anything on best (or bad) practice, nor have I timed and compared the two approaches I've suggested.
In a use case with a large data set and many conditionals, which would be preferred? (I reckon both are fairly readable.)
Or maybe there is a more performant way that I'm missing (but still using higher-order functions).
Filtering works hand-in-hand with two other functional Array methods from ES5, map and reduce. Thanks to the ability to chain methods in JavaScript, you can use this combination to craft very clean code that performs some fairly complex transformations.
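For instance, a minimal sketch of such a chain (the orders array and its fields are invented purely for illustration): filter keeps the completed orders, map projects their totals, and reduce folds them into a single sum.

const orders = [
  { total: 20, completed: true },
  { total: 45, completed: false },
  { total: 10, completed: true }
];

const completedRevenue = orders
  .filter(order => order.completed)        // keep only completed orders
  .map(order => order.total)               // project each order to its total
  .reduce((sum, total) => sum + total, 0); // fold the totals into one number

console.log(completedRevenue); // 30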
If you want to put multiple conditions in filter, you can combine them with the && and || operators.
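For example, combining conditions with && and || inside a single callback might look like this (the products data and its fields are invented for illustration):

const products = [
  { name: "pen",    price: 2,   category: "office" },
  { name: "laptop", price: 900, category: "tech" },
  { name: "mouse",  price: 25,  category: "tech" }
];

// One filter pass, combining conditions with && and ||.
const affordableTech = products.filter(p =>
  p.category === "tech" && (p.price < 100 || p.name === "laptop")
);

console.log(affordableTech.map(p => p.name)); // ["laptop", "mouse"]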
To our surprise, for-loops can be much faster than the Array.filter method; in one measurement, filter was about 77% slower than an equivalent for loop.
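The exact figure will depend on the engine and the data, but the single-pass for loop such comparisons refer to looks roughly like this (using the same placeholder conditions as the benchmark further down):

// Single-pass for loop: every condition is checked once per element,
// and only one result array is allocated.
function filterWithForLoop(data) {
  const result = [];
  for (let i = 0; i < data.length; i++) {
    const item = data[i];
    if (item % 2 && item % 5 && item > 347) { // replace with your own conditions
      result.push(item);
    }
  }
  return result;
}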
Because both map and filter return Arrays, we can chain these functions together to build complex array transformations with very little code. Finally, we can consume the newly created array using forEach.
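For instance (again with invented data), a chain that transforms with map, narrows with filter, and is finally consumed with forEach:

const temperaturesC = [12, 31, 25, 8, 28];

temperaturesC
  .map(c => c * 9 / 5 + 32)  // map returns a new array of Fahrenheit values
  .filter(f => f > 77)       // filter returns another new array
  .forEach(f => console.log(f + "°F is a warm day")); // forEach consumes it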
Store your filter functions in an array and have array.reduce() run through each filter, applying it to the data. This comes at the cost of running every filter even when there's no data left to filter.
const data = [...];
const filters = [f1, f2, f3, ...];

const filteredData = filters.reduce((d, f) => d.filter(f), data);
Another way to do it is to use array.every(). This takes the inverse approach: it runs through the data once and checks whether all filters apply to each item. every() returns false as soon as one filter returns false.
const data = [...];
const filters = [f1, f2, f3, ...];

const filteredData = data.filter(v => filters.every(f => f(v)));
Both are similar to your first and second samples, respectively. The only difference is that they don't hardcode the filters or conditions.
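Since the conditions here come from user select inputs, the filters array can also be built dynamically so that an empty selection simply contributes no predicate. This is only a sketch; buildFilters, the selections object, and the item fields (category, maxPrice, stock) are hypothetical names, not from the question:

const data = [
  { name: "pen",    category: "office", price: 2,   stock: 10 },
  { name: "laptop", category: "tech",   price: 900, stock: 3 },
  { name: "mouse",  category: "tech",   price: 25,  stock: 0 }
];

// Build the predicate list from whatever the user actually selected.
function buildFilters(selections) {
  const filters = [];
  if (selections.category) {
    filters.push(item => item.category === selections.category);
  }
  if (selections.maxPrice != null) {
    filters.push(item => item.price <= selections.maxPrice);
  }
  if (selections.inStockOnly) {
    filters.push(item => item.stock > 0);
  }
  return filters;
}

const filters = buildFilters({ category: "tech", maxPrice: 100 });
const filteredData = data.filter(item => filters.every(f => f(item)));
console.log(filteredData); // only the "mouse" item passes both predicates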
Interesting question. Here's a quick benchmark of the four approaches:
data = new Array(111111).fill().map((a, n) => n);

const f1 = (a) => a % 2;
const f2 = (a) => a % 5;
const f3 = (a) => a > 347;
const filters = [f1, f2, f3];

// 1: chained filters
t1 = performance.now();
res = data.filter(a => a % 2).filter(a => a % 5).filter(a => a > 347);
t2 = performance.now();
console.log("1) took " + (t2 - t1) + " milliseconds.");

// 2: single filter with combined conditions
t1 = performance.now();
res = data.filter(a => a % 2 && a % 5 && a > 347);
t2 = performance.now();
console.log("2) took " + (t2 - t1) + " milliseconds.");

// 3: reduce over an array of filters
t1 = performance.now();
res = filters.reduce((d, f) => d.filter(f), data);
t2 = performance.now();
console.log("3) took " + (t2 - t1) + " milliseconds.");

// 4: single filter with every()
t1 = performance.now();
res = data.filter(v => filters.every(f => f(v)));
t2 = performance.now();
console.log("4) took " + (t2 - t1) + " milliseconds.");
Also remember that with nested for-loops the ordering matters when you measure: an outer loop of 3000 iterations containing an inner loop of 7 tends to take longer than an outer loop of 7 containing an inner loop of 3000, even though both perform the same total number of iterations.
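If you want to see that effect yourself, here is a rough sketch using performance.now(); whether a measurable difference actually shows up depends on the engine and the workload.

// Same total work (3000 * 7 iterations), different nesting order.
let sink = 0;

let t1 = performance.now();
for (let i = 0; i < 3000; i++) {
  for (let j = 0; j < 7; j++) sink += i * j;
}
let t2 = performance.now();
console.log("outer 3000, inner 7: " + (t2 - t1) + " ms");

t1 = performance.now();
for (let i = 0; i < 7; i++) {
  for (let j = 0; j < 3000; j++) sink += i * j;
}
t2 = performance.now();
console.log("outer 7, inner 3000: " + (t2 - t1) + " ms");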