I have a class Vector that represents a point in 3-dimensional space. This vector has a method normalize(self, length=1) which scales the vector up/down so that vec.normalize(length).length == length.

The unit test for this method sometimes fails because of floating-point imprecision. My question is: how can I make sure this test does not fail when the method is implemented correctly? Is it possible to do so without testing for an approximate value?
Additional information:

def testNormalize(self):
    vec = Vector(random.random(), random.random(), random.random())
    self.assertEqual(vec.normalize(5).length, 5)
This sometimes results in either AssertionError: 4.999999999999999 != 5 or AssertionError: 5.000000000000001 != 5.
Note: I am aware that the floating-point issue may be in the Vector.length property or in Vector.normalize().
Use assertAlmostEqual and assertNotAlmostEqual.
From the official documentation:
assertAlmostEqual(first, second, places=7, msg=None, delta=None)
Test that first and second are approximately equal by computing the difference, rounding to the given number of decimal places (default 7), and comparing to zero.
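As a sketch of how this applies to the test in the question (the Vector class here is a minimal hypothetical stand-in, since the original implementation isn't shown), replacing assertEqual with assertAlmostEqual makes the test tolerate the 1-ulp differences from the error messages:

```python
import math
import random
import unittest

class Vector:
    # Minimal stand-in for the Vector class described in the question.
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    @property
    def length(self):
        return math.sqrt(self.x**2 + self.y**2 + self.z**2)

    def normalize(self, length=1):
        # Scale each component so the resulting vector has the given length.
        factor = length / self.length
        return Vector(self.x * factor, self.y * factor, self.z * factor)

class TestNormalize(unittest.TestCase):
    def testNormalize(self):
        vec = Vector(random.random(), random.random(), random.random())
        # places=7 (the default) accepts differences below 0.5e-7, which
        # easily covers 4.999999999999999 vs 5 from the question.
        self.assertAlmostEqual(vec.normalize(5).length, 5, places=7)
```

You can also pass delta= instead of places= when you want to state the tolerance as an absolute value rather than a number of decimal places.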
Essentially, no.

The floating-point issue can't be bypassed, so you either have to round the result given by vec.normalize or accept an almost-equal result (either way you are working with an approximation).
By using a floating-point value, you accept a small possible imprecision. Therefore, your tests should check that the computed value falls within an acceptable range, such as:

theoreticalValue - epsilon < normalizedValue < theoreticalValue + epsilon

where epsilon is a very small value that you define as acceptable for variation due to floating-point imprecision.
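A minimal sketch of that range check (the helper name and the epsilon default are illustrative choices, not part of the question's code):

```python
def is_within_epsilon(normalized_value, theoretical_value, epsilon=1e-9):
    # Accept the value if it falls inside the open interval
    # (theoretical - epsilon, theoretical + epsilon).
    return theoretical_value - epsilon < normalized_value < theoretical_value + epsilon

# The two failing values from the question both pass this check:
print(is_within_epsilon(4.999999999999999, 5))   # True
print(is_within_epsilon(5.000000000000001, 5))   # True
```

Note that the standard library offers math.isclose for exactly this kind of comparison, with both relative and absolute tolerance parameters.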