I am testing basic math functions that return the mean/variance/standard deviation of a data set. The problem I am facing is that I cannot get the precision of the expected value to match what is returned by the function. For example, if the variance function returns 50.5555555555566, even if I set the expected value explicitly to 50.5555555555566, the test says they are two different doubles and fails.
Below is the actual output from the unit test:
Assert.AreEqual failed. Expected:<50.5555555555556>. Actual:<50.5555555555566>.
Can anyone advise on a way around this? I am using the built-in Visual Studio unit testing suite. Thanks.
Floating-point (Single/Double) numbers need to be tested with a tolerance value. In other words, if the two numbers are within some tolerance (say 0.0001) of each other, consider them equal.
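Conceptually, the check is just an absolute-difference comparison. Here is a minimal C# sketch; the helper name is mine, not part of any test framework:

using System;

static class DoubleComparison
{
    // Returns true when the two doubles differ by no more than the tolerance.
    public static bool NearlyEqual(double expected, double actual, double tolerance)
    {
        return Math.Abs(expected - actual) <= tolerance;
    }
}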
NUnit has comparison asserts for this, e.g. the following overload of AreEqual; find the equivalent one for MSTest:
Assert.AreEqual(double expected, double actual, double tolerance, string message);
Update: MSTest has an equivalent overload, Assert.AreEqual(double expected, double actual, double delta). Try it and see if it resolves your issue.
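For example, a minimal MSTest sketch; the test data and the inline population-variance calculation are only placeholders, so swap in a call to your own function under test:

using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class MathFunctionTests
{
    [TestMethod]
    public void Variance_MatchesExpectedWithinTolerance()
    {
        // Placeholder data and inline population-variance calculation;
        // replace this with a call to your own variance function.
        double[] data = { 2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0 };
        double mean = data.Average();
        double actual = data.Select(x => (x - mean) * (x - mean)).Average();

        // The third argument is the delta: the assert passes when the
        // absolute difference between expected and actual is within 0.0001.
        Assert.AreEqual(4.0, actual, 0.0001);
    }
}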