I have this example code:
#include <Eigen/Eigen>
#include <iostream>

int main() {
    Eigen::MatrixXf M = Eigen::MatrixXf::Random(1000, 1000);
    std::cout.precision(17);
    std::cout << M.colwise().sum().sum() << std::endl;
    std::cout << M.rowwise().sum().sum() << std::endl;
    std::cout << M.sum() << std::endl;
}
I compile with the following command (g++ version 7.3, but I have seen this with other compilers too):
g++ -O0 -o test -Ieigen-3.3.7 test.cc
And the output is:
13.219823837280273
13.220325469970703
13.217720031738281
Shouldn't all three values be the same? I am not using any optimizations, after all.
Your additions are basically a random walk, and the error you make is a different random walk, because you have roundoff error at almost every step. (Note that Eigen::MatrixXf::Random fills the matrix with random values in [-1, 1].)
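To see the order dependence in isolation, independent of Eigen, here is a minimal sketch (not from your post) that sums one million random floats forward and then backward; the two results typically differ in the last few significant digits:

// Summing the same floats in a different order generally gives a slightly
// different result, because each addition rounds to the nearest float.
#include <iostream>
#include <random>
#include <vector>

int main() {
    std::mt19937 gen(42);
    std::uniform_real_distribution<float> dist(-1.0f, 1.0f);

    std::vector<float> values(1000000);
    for (float& v : values) v = dist(gen);

    float forward = 0.0f;
    for (float v : values) forward += v;

    float backward = 0.0f;
    for (auto it = values.rbegin(); it != values.rend(); ++it) backward += *it;

    std::cout.precision(17);
    std::cout << "forward:  " << forward << '\n'
              << "backward: " << backward << '\n';
}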
Let's assume that you are, on average, at a float value of 10.0 (estimated only from that single data point you provided). Your epsilon (how much absolute rounding error you will probably make with any single addition) is thus around 10.0 * 6e-8, or about 6e-7 (the float epsilon is 2^-23, about 1.2e-7, and the rounding error of one addition is at most half of that, about 6e-8).
If you do N = 1000000 random error-accumulation steps of step size +6e-7 (or -6e-7), you have a good chance of ending up around sqrt(N) * stepSize = 1000 * 6e-7 = 6e-4 away from the true value (this is the expected scaling of a random walk), which is not-too-coincidentally close to the differences you observe (on the order of 0.01%).
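Written out as a quick sanity check, the same back-of-the-envelope estimate looks like this (a sketch; the 10.0 is just the assumed running-sum magnitude from above):

// Back-of-the-envelope estimate of the expected random-walk error.
#include <cmath>
#include <iostream>
#include <limits>

int main() {
    const double n = 1000000.0;                                    // number of additions
    const double runningSum = 10.0;                                // assumed magnitude of the running sum
    const double u = std::numeric_limits<float>::epsilon() / 2.0;  // 2^-24, about 6e-8
    const double stepError = runningSum * u;                       // about 6e-7 per addition
    std::cout << "expected error ~ " << std::sqrt(n) * stepError << '\n';  // about 6e-4
}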
I would similarly estimate an absolute error of around 1000 * 10 * 1e-16 = 1e-12 for the addition of one million random doubles between -1 and 1, due to floating point precision.
This is obviously not a rigorous mathematical treatment. It just shows that the error is certainly in the right ballpark.
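For comparison, here is a minimal variation of your program in double precision (just MatrixXf swapped for MatrixXd); the three sums still differ in principle, but the differences typically shrink to around the 1e-12 estimated above:

// Same experiment in double precision: the discrepancy drops by roughly
// the ratio of the float and double rounding units.
#include <Eigen/Eigen>
#include <iostream>

int main() {
    Eigen::MatrixXd M = Eigen::MatrixXd::Random(1000, 1000);
    std::cout.precision(17);
    std::cout << M.colwise().sum().sum() << std::endl;
    std::cout << M.rowwise().sum().sum() << std::endl;
    std::cout << M.sum() << std::endl;
}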
The common way to reduce this issue is to sort the floats in order of ascending magnitude before adding them, but you can still be arbitrarily imprecise when doing so. (Example: keep adding the number 1.0f to itself - the sum will stop increasing at 2^24, where the epsilon becomes larger than 1.0f. See the small sketch below.)
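That last example is easy to verify:

// A float running sum of 1.0f stops growing at 2^24: beyond that point,
// adding 1.0f no longer changes the stored value, because the gap between
// adjacent floats exceeds 1.0.
#include <iostream>

int main() {
    float sum = 0.0f;
    for (long i = 0; i < 20000000; ++i) {  // try to count to 20 million
        sum += 1.0f;
    }
    std::cout << std::fixed << sum << '\n';  // prints 16777216.000000 (2^24), not 20000000
}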