I'm trying to implement a 1-dimensional convolution in "valid" mode (Matlab definition) in C++.
It seems pretty simple, but I haven't been able to find code that does it in C++ (or in any other language that I could adapt, for that matter). If my vector size is a power of two, I can use a 2D convolution, but I would like to find something that works for any input and kernel.
So how do I perform a 1-dimensional convolution in "valid" mode, given an input vector of size I and a kernel of size K (the output should normally be a vector of size I - K + 1)?
Pseudocode is also accepted.
You could use one of the following implementations:
#include <vector>

template<typename T>
std::vector<T>
conv(std::vector<T> const &f, std::vector<T> const &g) {
    int const nf = f.size();
    int const ng = g.size();
    int const n  = nf + ng - 1;  // full convolution length
    std::vector<T> out(n, T());
    for (auto i(0); i < n; ++i) {
        // Clamp j so that both f[j] and g[i - j] stay in range.
        int const jmn = (i >= ng - 1) ? i - (ng - 1) : 0;
        int const jmx = (i < nf - 1) ? i : nf - 1;
        for (auto j(jmn); j <= jmx; ++j) {
            out[i] += (f[j] * g[i - j]);
        }
    }
    return out;
}
f: first sequence (1D signal).
g: second sequence (1D signal).
Returns a std::vector of size f.size() + g.size() - 1, which is the result of the full discrete convolution, a.k.a. Cauchy product ((f * g) = (g * f)).
#include <algorithm>
#include <vector>

template<typename T>
std::vector<T>
conv_valid(std::vector<T> const &f, std::vector<T> const &g) {
    int const nf = f.size();
    int const ng = g.size();
    std::vector<T> const &min_v = (nf < ng) ? f : g;  // shorter sequence
    std::vector<T> const &max_v = (nf < ng) ? g : f;  // longer sequence
    int const n = std::max(nf, ng) - std::min(nf, ng) + 1;  // valid length
    std::vector<T> out(n, T());
    for (auto i(0); i < n; ++i) {
        // Slide the reversed shorter sequence across the longer one.
        for (int j(min_v.size() - 1), k(i); j >= 0; --j) {
            out[i] += min_v[j] * max_v[k];
            ++k;
        }
    }
    return out;
}
f: first sequence (1D signal).
g: second sequence (1D signal).
Returns a std::vector of size std::max(f.size(), g.size()) - std::min(f.size(), g.size()) + 1, which is the result of the valid (i.e., without the paddings) discrete convolution, a.k.a. Cauchy product ((f * g) = (g * f)).
In order to perform a 1-D valid convolution on an std::vector (let's call it invec for the sake of the example, with the output vector outvec) of size l, it is enough to create the right boundaries by setting the loop parameters correctly, and then perform the convolution as usual, i.e.:
for (size_t i = K/2; i < l - K/2; ++i)
{
    outvec[i - K/2] = 0.0;
    for (size_t j = 0; j < K + 1; j++)
    {
        outvec[i - K/2] += invec[i - K/2 + j] * kernel[j];
    }
}
Note the starting and final values of i.
This works for any odd-length 1-D kernel, provided that the kernel is not bigger than the vector ;)
Note that I've used the variable K as you've described it, but personally I would have understood the 'size' differently; a matter of taste, I guess. In this example, the total length of the kernel vector is K + 1. I've also assumed that outvec already has l - K elements (by the way, with this definition of K the output vector has l - K elements; this agrees with your I - K + 1, since your K is the kernel length, which here is K + 1), so no push_back() is needed.