I have some C# methods that perform float/double operations and I would like to unit test them. Assert.AreEqual is insufficient because of rounding errors.
Take unit conversion as an example. Converting 10.5 meters to feet with a conversion factor of 3.281 gives me 34.4505; a more accurate conversion factor gives me 34.4488189. I want to test this to within, say, 0.1 (so anything from 34.3488 to 34.5488 would pass).
I could certainly test the value against a tolerance manually in each unit test, but that's highly repetitive, and the failure message wouldn't be very descriptive unless I also wrote my own assertion message every time:
Assert.IsTrue(Math.Abs(34.4488189 - value) < 0.1);
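To make the problem concrete, here is that pattern inside a complete MSTest test; MetersToFeet is a hypothetical stand-in for the code under test:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ConversionTests
{
    // Hypothetical conversion method standing in for the code under test.
    private static double MetersToFeet(double meters) => meters * 3.2808399;

    [TestMethod]
    public void MetersToFeet_ManualToleranceCheck()
    {
        double value = MetersToFeet(10.5);

        // On failure this reports only "Assert.IsTrue failed."
        // with no expected or actual value in the message.
        Assert.IsTrue(Math.Abs(34.4488189 - value) < 0.1);
    }
}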
How can I unit test my float operations to within a certain error tolerance? I cannot find any Assert classes that ship with Visual Studio that do this. Am I missing something, or do I have to roll my own?
Are there standard practices to keep in mind when testing floats/doubles?
Assert.AreEqual in MSTest has overloads that accept a delta (error tolerance) parameter:
public static void AreEqual (double expected, double actual, double delta)
for example:
Assert.AreEqual(34.4488189, value, 0.1);
or, for the smallest possible tolerance (double.Epsilon is the smallest positive double value, so this effectively demands an exact match):
Assert.AreEqual(34.4488189, value, double.Epsilon);
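In a full test this might look like the following sketch, where MetersToFeet is a hypothetical stand-in for the code under test. On failure, the delta overload reports the expected value, the actual value, and the allowed difference:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ConversionTests
{
    // Hypothetical conversion method standing in for the code under test.
    private static double MetersToFeet(double meters) => meters * 3.2808399;

    [TestMethod]
    public void MetersToFeet_IsAccurateToWithinATenth()
    {
        // Passes when the difference is at most 0.1; on failure MSTest
        // reports the expected value, the actual value, and the delta.
        Assert.AreEqual(34.4488189, MetersToFeet(10.5), 0.1);
    }
}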
You could take a look at the NUnit framework:
// Compare float values
Assert.AreEqual(float expected, float actual, float tolerance);
Assert.AreEqual(float expected, float actual, float tolerance, string message);

// Compare double values
Assert.AreEqual(double expected, double actual, double tolerance);
Assert.AreEqual(double expected, double actual, double tolerance, string message);
(Above taken from this article)
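In NUnit 3 and later, the same check can also be written with the constraint model via Is.EqualTo(...).Within(...). A minimal sketch, again using a hypothetical MetersToFeet method:

using NUnit.Framework;

[TestFixture]
public class ConversionTests
{
    // Hypothetical conversion method standing in for the code under test.
    private static double MetersToFeet(double meters) => meters * 3.2808399;

    [Test]
    public void MetersToFeet_IsWithinTolerance()
    {
        double value = MetersToFeet(10.5);

        // Constraint form: passes when the actual value is within
        // +/- 0.1 of the expected value.
        Assert.That(value, Is.EqualTo(34.4488189).Within(0.1));

        // A relative tolerance is also available:
        Assert.That(value, Is.EqualTo(34.4488189).Within(0.5).Percent);
    }
}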