We have been struggling with one of my unit tests for quite a while now. During the investigation we found the root cause, which seems to be the comparison of floats (see the following code snippet, where I have simplified the computation, but it still fails).
TEST_F( MyFloatTest, thisOneDoesFail)
{
const float toCompare = 0.2f - 1.0f + 0.9f;
EXPECT_FLOAT_EQ( toCompare, 0.1f );
}
The result is:
  Actual: 0.1
Expected: toCompare
Which is: 0.099999964
Having some background in numerical mathematics, we still can't figure out why this test fails, while a custom float comparison using std::numeric_limits<float>::epsilon() passed. So at some point we began to think that GTest is wrong, and we debugged into it. It uses strange expressions that we do not fully grasp. What is even stranger: the following test passes, even though I just add a 1:
TEST_F( MyFloatTest, thisOnePasses)
{
const float toCompare = 1.2f - 1.0f + 0.9f;
EXPECT_FLOAT_EQ( toCompare, 1.1f );
}
We thought it might be a problem with negative float values being involved, but the next test also passes:
TEST_F( MyFloatTest, thisOnePassesAlso)
{
const float toCompare = 0.2f - 1.0f + 1.9f;
EXPECT_FLOAT_EQ( toCompare, 1.1f );
}
So to us it seems as if the EXPECT_FLOAT_EQ macro of GTest simply has a problem around zero. Does anyone know of this behaviour? Have you ever seen something similar in your environment? (By the way, we use MSVC 2015.) Does it just fail by accident due to the 4 ULP precision mentioned in GTest? (Which is also not completely clear to us.)
The problem is that a floating-point sum whose intermediate values are large compared to the small final result will tend to have a large relative error. You can reduce the error by writing
const float toCompare = 0.2f - (1.0f - 0.9f);
In your original code, the largest intermediate value was 0.2 - 1.0 = -0.8, eight times larger in magnitude than the final result. With the changed code, the largest intermediate value is 0.1, equal to the final result. And if you check your example tests that passed, in each case there are no intermediate results that are large compared to the final result.
The problem is not with the EXPECT_FLOAT_EQ macro, but with the calculation.
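For illustration, here is a sketch of the same test with the reassociated expression (same fixture as in the question; assuming ordinary IEEE-754 single-precision arithmetic, the result should end up only 3 ULPs away from 0.1f and therefore inside GTest's 4 ULP tolerance):
TEST_F( MyFloatTest, passesWhenReassociated )
{
    // Group the cancelling terms first so that no intermediate value is
    // much larger in magnitude than the final result.
    const float toCompare = 0.2f - (1.0f - 0.9f);
    EXPECT_FLOAT_EQ( toCompare, 0.1f );
}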
Does it just fail by accident due to the 4 ULP precision mentioned in GTest?
That seems to me to be the case.
Try the following (very crude, not portable!) test code:
#include <iostream>

int main()
{
    float toCompare = 0.2f - 1.0f + 0.9f;
    int i = *reinterpret_cast<int*>(&toCompare); // view the float's bit pattern as an int
    std::cout << i << '\n';
    float expected = 0.1f;
    i = *reinterpret_cast<int*>(&expected);
    std::cout << i << '\n';
}
On my system the output is:
1036831944
1036831949
The two representations are exactly 5 ULPs apart, so the 4 ULP comparison is not sufficient for the error of this calculation.
0.2f - 1.0f is fine; there is no accuracy error at all on my system. What you're left with is -0.8f + 0.9f, and this is where the error comes from (on my system). I'm not enough of an expert to tell you why exactly this calculation ends up with a 5 ULP error.
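If you want to reproduce these ULP counts without the undefined behaviour of the cast above, here is a sketch using std::memcpy; the UlpDistance helper is my own name, not part of GTest (GTest's internal FloatingPoint<T> class does essentially this, but via a biased representation that also behaves sensibly across the sign boundary):
#include <cstdint>
#include <cstring>
#include <iostream>

// Distance in ULPs between two finite floats of the same sign.
// (Simplified sketch; GTest additionally maps the bit patterns to a
// biased representation so the comparison also works across zero.)
std::int32_t UlpDistance( float a, float b )
{
    std::int32_t ia, ib;
    std::memcpy( &ia, &a, sizeof a );  // well-defined, unlike the reinterpret_cast above
    std::memcpy( &ib, &b, sizeof b );
    return ia > ib ? ia - ib : ib - ia;
}

int main()
{
    const float toCompare = 0.2f - 1.0f + 0.9f;
    std::cout << UlpDistance( toCompare, 0.1f ) << '\n';  // prints 5 on the system above
}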
In cases where a certain degree of error is expected, use EXPECT_NEAR instead.
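For example, a sketch for the failing case with an explicit absolute tolerance (the 1e-6f bound is my own illustrative choice, not a recommendation; pick a tolerance that matches the error you actually expect from your calculation):
TEST_F( MyFloatTest, passesWithExplicitTolerance )
{
    const float toCompare = 0.2f - 1.0f + 0.9f;
    // 1e-6f is far above the roughly 4e-8 error observed here, but still
    // tight enough to catch real mistakes in this computation.
    EXPECT_NEAR( toCompare, 0.1f, 1e-6f );
}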