As part of some unit-testing code I'm writing, I wrote the following function. Its purpose is to determine whether 'a' could be rounded to 'b', regardless of how precise 'a' or 'b' are.
def couldRoundTo(a, b):
    """Can you round a to some number of digits, such that it equals b?"""
    roundEnd = len(str(b))
    if a == b:
        return True
    for x in range(0, roundEnd):
        if round(a, x) == b:
            return True
    return False
Here's some output from the function:
>>> couldRoundTo(3.934567892987, 3.9)
True
>>> couldRoundTo(3.934567892987, 3.3)
False
>>> couldRoundTo(3.934567892987, 3.93)
True
>>> couldRoundTo(3.934567892987, 3.94)
False
As far as I can tell, it works. However, I'm scared of relying on it, since I don't have a perfect grasp of the issues surrounding floating-point accuracy. Could someone tell me if this is an appropriate way to implement this function? If not, how could I improve it?
Could someone tell me if this is an appropriate way to implement this function?
It depends. The given function will behave surprisingly if b isn't precisely equal to a value that would normally be obtained directly from decimal-to-binary-float conversion.
For example:
>>> print(0.1, 0.2/2, 0.3/3)
0.1 0.1 0.09999999999999999
>>> couldRoundTo(0.123, 0.1)
True
>>> couldRoundTo(0.123, 0.2/2)
True
>>> couldRoundTo(0.123, 0.3/3)
False
This fails because the calculation of 0.3/3 results in a slightly different representation than 0.1 and 0.2/2 (and round(0.123, 1)).
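As an illustrative check (plain Python, nothing beyond the built-in round()), the mismatch is visible directly:
>>> round(0.123, 1)
0.1
>>> round(0.123, 1) == 0.2/2
True
>>> round(0.123, 1) == 0.3/3
False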
If not, how could I improve it?
Rule of thumb: if your calculation specifically involves decimal digits in any way, just use Decimal, to avoid all the lossy base-2 round-tripping. In particular, Decimal includes a helper called quantize that makes this problem trivially easy:
from decimal import Decimal

def roundable(a, b):
    # Convert via str() so we keep the decimal digits as written,
    # not the underlying binary float value.
    a = Decimal(str(a))
    b = Decimal(str(b))
    # quantize() rounds a to the same exponent (number of decimal places) as b.
    return a.quantize(b) == b
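For reference, feeding the question's examples through this roundable helper should give the same answers as the original function (using only the Decimal defaults, no special rounding mode):
>>> roundable(3.934567892987, 3.9)
True
>>> roundable(3.934567892987, 3.93)
True
>>> roundable(3.934567892987, 3.94)
False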
One way to do it:
def could_round_to(a, b):
    (x, y) = map(len, str(b).split('.'))
    round_format = "%" + "%d.%df" % (x, y)
    return round_format % a == str(b)
First, we take the number of digits before and after the decimal point in b, as x and y. Then, we construct a format string such as %x.yf. Finally, we apply that format to a.
>>> "%2.2f"%123.1234
'123.12'
>>> "%2.2f"%123.1264
'123.13'
>>> "%3.2f"%000.001
'0.00'
Now, all that's left is comparing the strings.
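As a quick sanity check against the question's examples (this assumes str(b) always contains a decimal point, which the split on '.' relies on):
>>> could_round_to(3.934567892987, 3.9)
True
>>> could_round_to(3.934567892987, 3.94)
False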