I have an (n, 2) numpy array containing the coordinates of n points. I want to sort them by their proximity to a specific point (x, y) and pick the closest one. How can I achieve this?
Right now I have:
import numpy as np

def find_nearest(array, value):
    xlist = np.abs(array[:, 0] - value[0])
    ylist = np.abs(array[:, 1] - value[1])
    newList = np.vstack((xlist, ylist))
    # SORT newList and return the 0th element
In my solution I need to sort newList based on proximity to (0, 0), and I don't know how. Is there a way to do this, or a better approach altogether?
My array of points looks like:
array([[ 0.1648, 0.227 ],
[ 0.2116, 0.2472],
[ 0.78 , 0.546 ],
[ 0.9752, 1. ],
[ 0.384 , 0.4862],
[ 0.4428, 0.2204],
[ 0.4448, 0.4146],
[ 0.1046, 0.2658],
[ 0.5668, 0.7792],
[ 0.1664, 0.0746],
[ 0.5636, 0.6372],
[ 0.7822, 0.5536],
[ 0.7718, 0.8276],
[ 0.9916, 1. ],
[ 0. , 0. ],
[ 0.8206, 0.817 ],
[ 0.4858, 0.4652],
[ 0. , 0. ],
[ 0.1574, 0.3114],
[ 0. , 0.0022],
[ 0.874 , 0.714 ],
[ 0.148 , 0.6624],
[ 0.0656, 0.5912],
[ 0.1148, 0.607 ],
[ 0.069 , 0.6296]])
Sorting to find the nearest point is not a good idea. If you want the closest point, just find the closest point directly; sorting the whole array for that is overkill.
def closest_point(arr, x, y):
    # squared Euclidean distance to (x, y); the sqrt is unnecessary when only comparing
    dist = (arr[:, 0] - x)**2 + (arr[:, 1] - y)**2
    return arr[dist.argmin()]
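For example, with your points stored in a variable (here I'm assuming the name points) and the target (0, 0), the call would look like this:

import numpy as np

points = np.array([[0.1648, 0.227 ],
                   [0.2116, 0.2472],
                   [0.    , 0.    ]])   # a short slice of your data for illustration

nearest = closest_point(points, 0, 0)   # uses the function defined above
print(nearest)                          # -> [0. 0.]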
Moreover, if you need to repeat the search many times against a fixed (or nearly fixed) set of points, there are specific data structures that can speed up this kind of query a lot (the search time becomes sub-linear).
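One such structure is a k-d tree. A minimal sketch using scipy.spatial.cKDTree (my choice of implementation; the point is only to show the build-once, query-many pattern):

import numpy as np
from scipy.spatial import cKDTree

points = np.random.rand(10000, 2)   # a fixed set of points that will be queried many times
tree = cKDTree(points)              # build the tree once

dist, idx = tree.query([0.0, 0.0])  # each nearest-neighbour query is then very fast
print(points[idx], dist)            # the closest point and its distance to (0, 0)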