Let's say I am writing a unit test for a function that returns a floating point number. I can test it at full precision, as produced on my machine:
>>> import unittest
>>> def div(x,y): return x/float(y)
...
>>> class Testdiv(unittest.TestCase):
...     def testdiv(self):
...         assert div(1,9) == 0.1111111111111111
...
>>> unittest.main()
.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK
Will the full floating point result be the same across OS/distro/machine?
I could round off and do the unit test like this:
>>> class Testdiv(unittest.TestCase):
...     def testdiv(self):
...         assert round(div(1,9),4) == 0.1111
...
>>>
I could also do an assert with log(output), but to keep a fixed decimal precision I would still need to do rounding or truncating.
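For illustration, a minimal sketch of that log idea (the constant 0.1111 is just the rounded quotient from above; the choice of 3 decimal places is arbitrary):

import math

# Comparing log(output) still produces a full-precision float,
# so an exact == comparison still needs rounding first.
assert round(math.log(1 / 9), 3) == round(math.log(0.1111), 3)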
But what other ways are there to deal Pythonically with unit testing floating point output?
The precision of float in Python is dependent on the underlying C representation. From the Python tutorial, section 15.1, Floating Point Arithmetic: Issues and Limitations:
Almost all machines today (November 2000) use IEEE-754 floating point arithmetic, and almost all platforms map Python floats to IEEE-754 “double precision”.
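So, on any platform where floats are IEEE-754 doubles (i.e. virtually everywhere), a correctly-rounded operation such as div(1, 9) produces the same full-precision result. A quick sanity check using the standard sys module:

import sys

# On IEEE-754 platforms a Python float is a C double: a 53-bit mantissa,
# i.e. roughly 15-17 significant decimal digits, on every such machine.
print(sys.float_info.mant_dig)  # 53
print(sys.float_info.epsilon)   # 2.220446049250313e-16
print(1 / 9)                    # 0.1111111111111111 on any such platform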
As for testing, a better idea is to use existing functionality, e.g. TestCase.assertAlmostEqual:
assertAlmostEqual(first, second, places=7, msg=None, delta=None)
Test that first and second are approximately (or not approximately) equal by computing the difference, rounding to the given number of decimal places (default 7), and comparing to zero. If delta is supplied instead of places then the difference between first and second must be less or equal to (or greater than) delta.
Example:
import unittest

def div(x, y):
    return x / float(y)

class Testdiv(unittest.TestCase):
    def testdiv(self):
        self.assertAlmostEqual(div(1, 9), 0.1111111111111111)
        self.assertAlmostEqual(div(1, 9), 0.1111, places=4)

unittest.main()  # OK
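The delta alternative mentioned in the documentation above works the same way; a minimal sketch (the class and test names are illustrative):

import unittest

class TestDelta(unittest.TestCase):
    def test_delta(self):
        # delta is mutually exclusive with places:
        # asserts that abs(first - second) <= delta
        self.assertAlmostEqual(1 / 9, 0.1111, delta=1e-4)

unittest.main()  # OK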
If you prefer to stick to the assert statement, you could use math.isclose (Python 3.5+):
import math
import unittest

def div(x, y):
    return x / float(y)

class Testdiv(unittest.TestCase):
    def testdiv(self):
        assert math.isclose(div(1, 9), 0.1111111111111111)

unittest.main()  # OK
The default relative tolerance of math.isclose is 1e-09, "which assures that the two values are the same within about 9 decimal digits". For more information about math.isclose, see PEP 485.
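For illustration, a short sketch of the two tolerance parameters defined in PEP 485 (the sample values are arbitrary):

import math

# rel_tol scales with the magnitude of the inputs (default 1e-09).
print(math.isclose(1 / 9, 0.1111, rel_tol=1e-3))  # True
print(math.isclose(1 / 9, 0.1111))                # False at the default tolerance

# abs_tol is needed when comparing against 0.0, where a purely
# relative tolerance can never be satisfied.
print(math.isclose(1e-12, 0.0))                   # False
print(math.isclose(1e-12, 0.0, abs_tol=1e-9))     # True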