Given:
>>> a,b=2,3
>>> c,d=3,2
>>> def f(x,y): print(x,y)
I have an existing (as in: it cannot be changed) two-positional-parameter function, and I want its arguments to always be passed in ascending order; i.e., the call should effectively be f(2,3) no matter which two arguments I use (f(a,b) is the same as f(c,d) in the example).
I know that I could do:
>>> f(*sorted([c,d]))
2 3
Or I could do:
>>> f(*((a,b) if a<b else (b,a)))
2 3
(Note the need for tuple parentheses in this form, because , has lower precedence than the ternary expression.)
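A quick REPL illustration of that precedence:
>>> 3, 2 if 3 < 2 else (2, 3)
(3, (2, 3))
The ternary binds to only one element; the comma builds the outer tuple around it.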
Or,
def my_f(a, b):
    return f(a, b) if a < b else f(b, a)
All these seem kinda kludgy. Is there another syntax that I am missing?
Edit
I missed an 'old school' Python two-member-tuple method: index a two-member tuple using the fact that True == 1 and False == 0:
>>> f(*((a,b),(b,a))[a>b])
2 3
Also:
>>> f(*{True:(a,b), False:(b,a)}[a<b])
2 3
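The indexing form works because bool is a subclass of int, so the comparison result can be used directly as a tuple index:
>>> isinstance(True, int), int(a > b)
(True, 0)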
Edit 2
The reason for this silly exercise: numpy.isclose has the following usage note:
For finite values, isclose uses the following equation to test whether two floating point values are equivalent.
absolute(a - b) <= (atol + rtol * absolute(b))
The above equation is not symmetric in a and b, so that isclose(a, b) might be different from isclose(b, a) in some rare cases.
I would prefer that not happen.
I am looking for the fastest way to make sure that the arguments to numpy.isclose are in a consistent order. That is why I am shying away from f(*sorted([c,d])).
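(An aside: instead of reordering the arguments, the test itself can be made order-independent by scaling the tolerance to the larger of the two magnitudes - the symmetric test that one of the answers below discusses via PEP 485. A minimal sketch, where isclose_sym is a hypothetical helper and not a NumPy function:)
import numpy as np

def isclose_sym(a, b, rtol=1e-05, atol=1e-08):
    # Symmetric variant of the isclose equation: scaling rtol by the
    # larger magnitude means swapping a and b cannot change the result.
    return np.abs(a - b) <= atol + rtol * np.maximum(np.abs(a), np.abs(b))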
Implemented my solution in case anyone else is looking.
def sort(f):
    def wrapper(*args):
        return f(*sorted(args))
    return wrapper

@sort
def f(x, y):
    print(x, y)
>>> f(3, 2)
2 3
Also, since @Tadhg McDonald-Jensen mentioned that you may not be able to change the function yourself, you can wrap the function like so:
my_func = sort(f)
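For the np.isclose use case specifically, a variant that also forwards keyword arguments (so tolerances can still be passed through) might look like this; sort_args and isclose_ordered are just illustrative names:
import numpy as np

def sort_args(f):
    # Sort only the positional arguments; keywords pass through untouched.
    def wrapper(*args, **kwargs):
        return f(*sorted(args), **kwargs)
    return wrapper

isclose_ordered = sort_args(np.isclose)

print(isclose_ordered(10.51, 10, rtol=0.05, atol=0.0))  # True
print(isclose_ordered(10, 10.51, rtol=0.05, atol=0.0))  # True - same order either way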
You mention that your use case is np.isclose. However, your approach isn't a good way to solve the real issue - but that's understandable given the poor argument naming of that function: it sort of implies that both arguments are interchangeable. If it were numpy.isclose(measured, expected, ...) (or something like it), it would be much clearer.
For example, if you expect the value 10, measure 10.51, and allow for 5% deviation, then in order to get a useful result you must use np.isclose(10.51, 10, ...); otherwise you would get wrong results:
>>> import numpy as np
>>> measured = 10.51
>>> expected = 10
>>> err_rel = 0.05
>>> err_abs = 0.0
>>> np.isclose(measured, expected, err_rel, err_abs)
False
>>> np.isclose(expected, measured, err_rel, err_abs)
True
It's clear to see that the first one gives the correct result because the actually measured value is not within the tolerance of the expected value. That's because the relative uncertainty is an "attribute" of the expected value, not of the value you compare it with!
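Plugging the numbers into the documented equation absolute(a - b) <= (atol + rtol * absolute(b)) makes this concrete: the difference is |10.51 - 10| = 0.51. With b = expected = 10 the allowed bound is 0.05 * 10 = 0.5, so the test fails; with b = measured = 10.51 the bound is 0.05 * 10.51 = 0.5255, so it passes.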
So solving this issue by "sorting" the parameters is just wrong. That's a bit like swapping the numerator and denominator of a division because the denominator contains zeros and dividing by zero could give NaN, Inf, a Warning, or an Exception... it definitely avoids the problem, but only by giving an incorrect result (the comparison isn't perfect: with division it would almost always give a wrong result; with isclose it's rare).
This was a somewhat artificial example designed to trigger that behaviour, and most of the time it's not important whether you use measured, expected or expected, measured. But in the few cases where it does matter, you can't solve it by swapping the arguments (except when you have no "expected" result, but that rarely happens - at least it shouldn't).
There was some discussion about this topic when math.isclose was added to the Python standard library:
Symmetry (PEP 485)
[...]
Which approach is most appropriate depends on what question is being asked. If the question is: "are these two numbers close to each other?", there is no obvious ordering, and a symmetric test is most appropriate.
However, if the question is: "Is the computed value within x% of this known value?", then it is appropriate to scale the tolerance to the known value, and an asymmetric test is most appropriate.
[...]
This proposal [for math.isclose] uses a symmetric test.
So if your test falls into the first category and you like a symmetric test, then math.isclose could be a viable alternative (at least if you're dealing with scalars):
math.isclose(a, b, *, rel_tol=1e-09, abs_tol=0.0)
[...]
rel_tol is the relative tolerance – it is the maximum allowed difference between a and b, relative to the larger absolute value of a or b. For example, to set a tolerance of 5%, pass rel_tol=0.05. The default tolerance is 1e-09, which assures that the two values are the same within about 9 decimal digits. rel_tol must be greater than zero. [...]
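For instance, with the example values from above, math.isclose gives the same result regardless of argument order:
>>> import math
>>> math.isclose(10.51, 10, rel_tol=0.05)
True
>>> math.isclose(10, 10.51, rel_tol=0.05)
True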
Just in case this answer couldn't convince you and you still want to use a sorted approach, then you should order by the absolute value of your values (i.e. *sorted([a, b], key=abs)). Otherwise you might get surprising results when comparing negative numbers:
>>> np.isclose(-10.51, -10, err_rel, err_abs) # -10.51 is smaller than -10!
False
>>> np.isclose(-10, -10.51, err_rel, err_abs)
True
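Sorting by magnitude makes the result independent of the input order (continuing the session above):
>>> np.isclose(*sorted([-10.51, -10], key=abs), err_rel, err_abs)
True
>>> np.isclose(*sorted([-10, -10.51], key=abs), err_rel, err_abs)
True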
For only two elements in the tuple, the second one is the preferred idiom -- in my experience. It's fast, readable, etc.
No, there isn't really another syntax. There's also (min(a,b), max(a,b)), but this isn't particularly superior to the other methods; it's merely another way of expressing it.
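Using the values from the question:
>>> f(min(a, b), max(a, b))
2 3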
Note after comment by dawg: a class with custom comparison operators could return the same object for both min and max.