I'm having an issue that I believe has to do with floating-point precision, but I'm not well versed in the intricacies involved. I'm a math person, and in my mind I might as well still be working with decimals on a chalkboard. I'll start studying up on this, but in the meantime I'm wondering whether there are any general techniques for working with floats that might address the problem outlined below.
I have a numpy array of decimals that I would like to round to the nearest 0.02. I originally did this by dividing every element of the array by 0.02, rounding the result, and multiplying by 0.02 again. The actual data is generated by code that processes an input, but this demonstrates the problem:
import numpy as np

x = np.array([.45632, .69722, .40692])
xx = np.round(x / .02) * .02
It seems to round everything correctly, as I can check:
xx
array([0.46, 0.7, 0.4])
However, if I inspect the first and second elements, I get:
xx[0]
0.46000000000000002
xx[1]
0.70000000000000007
Each element in the array is of type numpy.float64. The problem shows up later, when I use these numbers in comparisons to select subsets of the data, and what happens then is a little unpredictable:
xx[0] == .46
True
But,
xx[1] == .70
False
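For what it's worth, the inconsistency comes down to which representable double each product lands on. Continuing in the same session (the exact printed values are what standard IEEE 754 double arithmetic typically gives, so treat them as illustrative):

print(xx[0] - 0.46)   # 0.0: 23 * 0.02 happens to produce exactly the same double as the literal 0.46
print(xx[1] - 0.70)   # ~1.1e-16: 35 * 0.02 lands one representable double above the literal 0.70

So when the error is present at all, it is on the order of 1e-16, one unit in the last place of the result.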
As I said, I have a workaround for this particular application, but I'm wondering whether anyone knows how to make my first approach work, or whether there are more general techniques for dealing with these kinds of numbers that I should be aware of.
Rather than using == to select subsets of data, try numpy.isclose(). It lets you specify a relative/absolute tolerance for the comparison: two values compare equal when absolute(a - b) <= (atol + rtol * absolute(b)).
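A minimal sketch of how that applies to the array in the question (the 0.70 target and the default tolerances here are just for illustration):

import numpy as np

x = np.array([.45632, .69722, .40692])
xx = np.round(x / .02) * .02

# Tolerant comparison instead of ==: the default tolerances
# (rtol=1e-05, atol=1e-08) dwarf the ~1e-16 rounding error,
# so 0.70000000000000007 matches 0.70.
mask = np.isclose(xx, 0.70)
print(mask)       # [False  True False]
print(xx[mask])   # [0.7]

Since numpy.isclose() returns a boolean array, it drops straight into the usual boolean-indexing idiom. For comparing a single pair of scalars there is also math.isclose(), though note that its defaults differ (rel_tol=1e-09, abs_tol=0.0).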