Consider this simple generalization of std::transform I wrote for N input iterators:
#include <iostream>
#include <vector>
#include <string>
template <typename InputIterator, typename OutputIterator, typename NaryOperator, typename... InputIterators>
OutputIterator transform (InputIterator first, InputIterator last, OutputIterator result,
        NaryOperator op, InputIterators... iterators) {
    while (first != last) {
        *result = op(*first, *iterators++...);
        ++result; ++first;
    }
    return result;
}
int main() {
    const std::vector<int> a = {1,2,3,4,5};
    const std::vector<double> b = {1.2, 4.5, 0.6, 2.8, 3.1};
    const std::vector<std::string> c = {"hi", "howdy", "hello", "bye", "farewell"};
    std::vector<double> result(5);
    transform (a.begin(), a.end(), result.begin(),
               [](int i, double d, const std::string& s)->double {return i + d + s.length();},
               b.begin(), c.begin());
    for (double x : result) std::cout << x << ' ';  // 4.2 11.5 8.6 9.8 16.1
}
What I want to do now is allow the vectors a, b, c to have different lengths (so the argument InputIterator last can be removed), in which case transform should keep transforming until the longest vector is used up, substituting default-constructed values for the vectors that run out first. I thought this would just be a matter of resizing all the short containers within the transform function, but the arguments of transform give no information about how long the containers are. Is there a way to compute within transform how long each container is, thereby getting the maximum length and filling in default values for the shorter containers? Ideally using just the syntax:
transform (OutputIterator result, NaryOperator op, InputIterators... iterators);
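For example, with inputs of different lengths I would like to be able to write something like this (a hypothetical call, since this is exactly the overload I cannot implement yet), and have the exhausted vectors contribute 0, 0.0 and "":

const std::vector<int> a = {1, 2, 3};                                // length 3
const std::vector<double> b = {1.2, 4.5};                            // length 2
const std::vector<std::string> c = {"hi", "howdy", "hello", "bye"};  // length 4
std::vector<double> result(4);                                       // longest length
transform (result.begin(),
           [](int i, double d, const std::string& s)->double {return i + d + s.length();},
           a.begin(), b.begin(), c.begin());   // hypothetical: no such overload yet
// desired result: 4.2 11.5 8 3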
Update: Following Ramana's idea, I am thinking of using something like:
template <typename OutputIterator, typename NaryOperator, typename... InputIterators>
OutputIterator transform (OutputIterator result, NaryOperator op, InputIterators... first, InputIterators... last) {
    while (true) {
        *result = op((first == last ?
            typename std::iterator_traits<InputIterators>::value_type() : *first++)...);
        ++result;
    }
    return result;
}
But

transform (result.begin(),
           [](int i, double d, const std::string& s)->double {return i + d + s.length();},
           a.begin(), b.begin(), c.begin(), a.end(), b.end(), c.end());

does not compile, I think because the compiler cannot tell where first... ends and last... begins.
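The underlying problem can be reduced to a tiny sketch (the names here are mine, not from the code above): a function parameter pack that is not the last parameter is a non-deduced context, so the call site gives the compiler no way to split the arguments between the two packs, and this deliberately does not compile:

template <typename... Ts>
void f(Ts... first, Ts... last) {}   // first... is not the trailing pack,
                                     // so it sits in a non-deduced context

int main() {
    f(1, 2, 3, 4);   // error: the compiler cannot tell where first... ends
}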
So I tried this next:
template <typename OutputIterator, typename NaryOperator, typename... InputIteratorsPairs>
OutputIterator transform (OutputIterator result, NaryOperator op, InputIteratorsPairs... pairs) {
    while (true) {
        *result = op((pairs.first == pairs.second ?
            typename InputIteratorsPairs::first_type() : *pairs.first++)...);
        ++result;
    }
    return result;
}
But

transform (result.begin(),
           [](int i, double d, const std::string& s)->double {return i + d + s.length();},
           std::make_pair(a.begin(), a.end()), std::make_pair(b.begin(), b.end()), std::make_pair(c.begin(), c.end()));

does not compile either (and I don't like the syntax anyway).
You can pass the iterators as begin/end pairs and pick each pair out of a tuple with an index sequence (C++14):

#include <cstddef>
#include <utility>
#include <tuple>
#include <iterator>
#include <vector>
#include <string>
#include <iostream>
// true only if every argument is true (manual recursion, since C++14 has no fold expressions)
bool all(bool a)
{
    return a;
}

template <typename... B>
bool all(bool a, B... b)
{
    return a && all(b...);
}
template <typename OutputIterator, typename NaryOperator, typename... InputIterators, std::size_t... Is>
OutputIterator transform(OutputIterator result, NaryOperator op, std::index_sequence<Is...>, InputIterators... iterators)
{
    auto tuple = std::make_tuple(iterators...);
    // For range Is, index 2*Is is its "begin" iterator and 2*Is + 1 its "end".
    // Loop until every range is exhausted.
    while (!all(std::get<2*Is>(tuple) == std::get<2*Is + 1>(tuple)...))
    {
        // Dereference and advance ranges that still have elements;
        // exhausted ranges contribute a value-initialized element.
        *result = op((std::get<2*Is>(tuple) != std::get<2*Is + 1>(tuple)
                          ? *std::get<2*Is>(tuple)++
                          : typename std::iterator_traits<typename std::tuple_element<2*Is, decltype(tuple)>::type>::value_type{})...);
        ++result;
    }
    return result;
}
template <typename OutputIterator, typename NaryOperator, typename... InputIterators>
OutputIterator transform(OutputIterator result, NaryOperator op, InputIterators... iterators)
{
    return transform(result, op, std::make_index_sequence<sizeof...(InputIterators)/2>{}, iterators...);
}
Tests:
int main()
{
    const std::vector<int> a = {1,2,3,4,5};
    const std::vector<double> b = {1.2, 4.5, 0.6, 2.8, 3.1};
    const std::vector<std::string> c = {"hi", "howdy", "hello", "bye", "farewell"};
    std::vector<double> result(5);
    transform(result.begin(),
              [] (int i, double d, const std::string& s) -> double
              {
                  return i + d + s.length();
              },
              a.begin(), a.end(),
              b.begin(), b.end(),
              c.begin(), c.end());
    for (double x : result) std::cout << x << ' ';
}
Output:
4.2 11.5 8.6 9.8 16.1
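Since the whole point is to support ranges of different lengths, here is a variation of the test (my own addition, placed inside the same main) that truncates the inputs and shows the padding with default values:

std::vector<double> result2(4);
transform(result2.begin(),
          [] (int i, double d, const std::string& s) -> double { return i + d + s.length(); },
          a.begin(), a.begin() + 3,    // only the first three ints
          b.begin(), b.begin() + 2,    // only the first two doubles
          c.begin(), c.begin() + 4);   // only the first four strings
for (double x : result2) std::cout << x << ' ';   // 4.2 11.5 8 3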
Regarding the second attempt:

pairs.first == pairs.second ?
    typename InputIteratorsPairs::first_type() : *pairs.first++

You are value-initializing an iterator on the left side of the : instead of the type the iterator points to. Also, you have an infinite loop and undefined behavior because you keep incrementing result forever. Here's a version that fixes these issues (it requires <algorithm> and is not necessarily the most efficient):
#include <algorithm>
#include <initializer_list>

bool any(std::initializer_list<bool> vs)
{
    return std::any_of(begin(vs), end(vs), [](bool b) { return b; });
}

template<typename OutputIterator, typename NaryOperator, typename... InputIteratorsPairs>
OutputIterator transform(OutputIterator result, NaryOperator op, InputIteratorsPairs... pairs) {
    // Loop while at least one range still has elements; exhausted ranges
    // contribute a value-initialized element of their value_type.
    while (any({(pairs.first != pairs.second)...})) {
        *result = op((pairs.first == pairs.second ?
            typename InputIteratorsPairs::first_type::value_type() : *pairs.first++)...);
        ++result;
    }
    return result;
}
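A quick usage sketch with containers of different lengths (my own example data, not from the answer), with result sized to the longest input:

const std::vector<int> a = {1, 2, 3};
const std::vector<double> b = {1.2, 4.5};
const std::vector<std::string> c = {"hi", "howdy", "hello", "bye"};
std::vector<double> result(4);   // sized to the longest input
transform(result.begin(),
          [](int i, double d, const std::string& s) -> double { return i + d + s.length(); },
          std::make_pair(a.begin(), a.end()),
          std::make_pair(b.begin(), b.end()),
          std::make_pair(c.begin(), c.end()));
// result: 4.2 11.5 8 3   (exhausted ranges contribute 0, 0.0 and "")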