Several algorithms in std::ranges (such as count_if, find_if, sort, etc.) accept both a predicate and a projection.
What are the advantages of using both, as opposed to passing just a predicate that does the projection part internally?
If you already have a unary predicate and a projection lying around, then using them separately could be more convenient:
#include <algorithm>
#include <span>

struct Point { int x, y; };

// assume is_even is some existing unary predicate on int, e.g.:
constexpr auto is_even = [](int i) { return i % 2 == 0; };

// each return below is one equivalent alternative
auto any_even_xs(std::span<Point const> points) -> bool {
    // using projection
    return std::ranges::any_of(points, is_even, &Point::x);
    // using lambda
    return std::ranges::any_of(points, [](Point p) { return is_even(p.x); });
    // using adaptor (proj() is a hypothetical helper, sketched below)
    return std::ranges::any_of(points, proj(&Point::x, is_even));
}
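The proj adaptor used above is not a standard facility. A minimal sketch of such a helper, composing a projection with a unary predicate into a single unary predicate, could look like this:
#include <functional>

// Hypothetical helper (not part of the standard library): composes a
// projection with a unary predicate into one unary callable.
inline constexpr auto proj = [](auto projection, auto predicate) {
    return [=](auto const& element) {
        return std::invoke(predicate, std::invoke(projection, element));
    };
};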
Really, the whole point of splitting the projection from the predicate is precisely for when you already have the two functions separately. This would read a lot better if C++ had named function arguments, so that the call site carried some annotation of what is_even and &Point::x mean.
The motivating example was wanting to reuse the same binary predicate (e.g. std::less{}) across multiple algorithms that apply it in slightly different ways (sort, lower_bound, upper_bound), with a projection selecting what to compare:
ranges::sort(a, std::less<>(), &employee::last);
auto p = ranges::lower_bound(a, "Parent", std::less<>(), &employee::last);
Whereas with lambdas this would look like:
ranges::sort(a, [](const employee& x, const employee& y)
{ return x.last < y.last; });
auto p = ranges::lower_bound(a, "Parent", [](const employee& x, const string& y)
{ return x.last < y; });
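For reference, here is a self-contained version of the two projection-based calls; the employee layout and the sample data are assumptions for illustration:
#include <algorithm>
#include <functional>
#include <string>
#include <vector>

struct employee { std::string first, last; };

int main() {
    std::vector<employee> a{{"Grace", "Hopper"}, {"Alan", "Parent"}, {"Ada", "Lovelace"}};

    // One ordinary comparator, reused as-is; the projection tells each
    // algorithm which part of the element to compare.
    std::ranges::sort(a, std::less<>(), &employee::last);
    auto p = std::ranges::lower_bound(a, std::string("Parent"),
                                      std::less<>(), &employee::last);

    return p != a.end() ? 0 : 1;  // found?
}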
Projections are transformations of the content before you reason about it. Algebraically, they can occur for any kind of element test, be it a unary or binary or n-ary test.
Predicates return true or false. They can be unary or binary. How you combine Projections and Predicates with the C++ algorithms is largely, but not entirely, determined by the arity of the Predicate.
Someone combining them by hand has to do the right thing (tm) every time, or they get subtle problems. Making users write that glue lambda at every call site wastes a lot of developer time and attention - it violates DRY.
Compare how lower_bound uses predicates and projections with how sort does. In lower_bound, the projection is applied only to the elements of the range; the predicate is then applied to the pair of "external argument" and "projected element".
In sort, the projection is likewise applied to the elements of the range - but now both sides of the predicate see projected values, because both arguments come from the range.
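A rough sketch of that difference - these are illustrative functions, not the actual library implementations:
#include <algorithm>
#include <functional>
#include <iterator>

// lower_bound-style: project only the range element; the external value
// is compared as-is.
template <class It, class T, class Comp, class Proj>
It lower_bound_sketch(It first, It last, const T& value, Comp comp, Proj proj) {
    auto count = std::distance(first, last);
    while (count > 0) {
        auto half = count / 2;
        It mid = std::next(first, half);
        if (std::invoke(comp, std::invoke(proj, *mid), value)) {
            first = std::next(mid);
            count -= half + 1;
        } else {
            count = half;
        }
    }
    return first;
}

// sort-style: both arguments of the comparison come from the range,
// so the projection is applied to both sides.
template <class It, class Comp, class Proj>
void sort_sketch(It first, It last, Comp comp, Proj proj) {
    std::sort(first, last, [&](const auto& a, const auto& b) {
        return std::invoke(comp, std::invoke(proj, a), std::invoke(proj, b));
    });
}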
We could instead stick the projection on top of the range itself and adapt it, but then we get a new range, and translating information or operations on the adapted range back into operations on the original range is tricky.
We uniformly want to inspect the range through the projection. This isn't purely a modification of a binary predicate, because an argument to the predicate that doesn't come from the range (an external value) is not projected. And it isn't an adaptation of the range either: when we aren't inspecting an element, we want the actual element, whether to mutate it or to return information about it (like an iterator to it).
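To see that concretely, compare adapting the range with std::views::transform against passing a projection; the result of the adapted search refers into the view, while the projected search stays anchored in the original range (the employee type and data are assumptions for illustration):
#include <algorithm>
#include <ranges>
#include <string>
#include <vector>

struct employee { std::string first, last; };

int main() {
    std::vector<employee> a{{"Ada", "Lovelace"}, {"Grace", "Hopper"}};

    // Adapting the range: it1 is an iterator into the transform view,
    // *it1 is a std::string, and mapping it back to an employee in 'a'
    // takes extra work.
    auto lasts = a | std::views::transform(&employee::last);
    auto it1 = std::ranges::find(lasts, std::string("Hopper"));
    (void)it1;

    // Using a projection: it2 is an iterator into 'a' itself, so the whole
    // element is still at hand, e.g. for mutation.
    auto it2 = std::ranges::find(a, std::string("Hopper"), &employee::last);
    if (it2 != a.end())
        it2->first = "Grace Brewster";
}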
When we instead pass the projection, the predicate, and sometimes an "external value" separately, the algorithms can combine them correctly themselves - turning binary operations into unary ones where needed - and generate the most natural and correct code.
There is possibly some algebraic approach that could do this without all of these special cases, but I don't see an obvious one.