I am currently using Python/Numpy to deal with geographical/GPS data (loving it!), and I keep facing the recurring task of calculating distances between geographical points defined by a coordinate pair pn = [lon, lat].
I have a function that I use like this: dist = geodistance(p1, p2). It is analogous to the euclidean distance in linear algebra (vector subtraction/difference), but operates in geodesic (spherical) space instead of rectangular euclidean space.
Programmatically, euclidean distance is given by
dist = ((p2[0] - p1[0])**2 + (p2[1] - p1[1])**2)**0.5
Mathematically, this is equivalent to the "idiomatic" (for lack of a better word) expression
dist = p2 - p1   # the "norm" of the vector difference (subtraction)
Currently, I get my distance like this:
p1 = [-51.598354,-29.953363]
p2 = [-51.598701,-29.953045]
dist = geodistance(p1, p2)
print(dist)
>> 44.3904032407
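For reference, here is a minimal sketch of what such a function can look like, using the haversine formula (the spherical Earth with radius 6371000 m is an assumption, and my actual implementation may differ):
import math

def geodistance(p1, p2):
    # great-circle distance in meters between two [lon, lat] points,
    # assuming a spherical Earth with mean radius 6371000 m
    lon1, lat1 = math.radians(p1[0]), math.radians(p1[1])
    lon2, lat2 = math.radians(p2[0]), math.radians(p2[1])
    a = (math.sin((lat2 - lat1) / 2)**2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2)**2)
    return 2 * 6371000 * math.asin(math.sqrt(a))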
I would like to do this:
print(p2 - p1)  # these points now are of some fancy datatype
>> 44.3904032407
And the final goal:
track = numpy.array([[-51.203018, -29.996149],
                     [-51.203018, -29.99625 ],
                     [-51.20266,  -29.996229],
                     [-51.20229,  -29.996309],
                     [-51.201519, -29.99416 ]], dtype=fancy)  # (**) or something like that
print(numpy.diff(track))
>> ndarray([[   0.        ]
            [   7.03531252]
            [  39.82663316]
            [  41.50958596]
            [ 172.49825765]])
A similar thing already exists in the standard library: if you take two datetime objects and subtract them, the operation returns a timedelta object. I want to subtract two coordinates and get a geodesic distance as the result.
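For example:
from datetime import datetime

t1 = datetime(2010, 1, 1, 12, 0, 0)
t2 = datetime(2010, 1, 1, 12, 0, 44)
delta = t2 - t1               # datetime - datetime gives a timedelta
print(delta.total_seconds())  # 44.0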
I wonder if a class would work, but a custom dtype (a "subtype" of float32, for example) would help a lot with array creation from lists (** which is how I read things from XML files).
Thanks a lot!
You can define your own types by creating a class and writing an __add__ or __sub__ method.
For example:
class P(object):
    def __init__(self, lon, lat):
        self.lon = lon
        self.lat = lat
    def __sub__(self, other):
        # euclidean distance as a placeholder; substitute your
        # geodesic formula here to get the distance in meters
        dist = ((other.lon - self.lon)**2 + (other.lat - self.lat)**2)**0.5
        return dist
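With that in place, subtraction works directly. A quick sketch (note that the placeholder __sub__ above returns a planar distance in degrees, not the 44.39 meters from the question):
p1 = P(-51.598354, -29.953363)
p2 = P(-51.598701, -29.953045)
print(p2 - p1)  # calls P.__sub__ and returns the distance as a plain float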
Given that you're currently getting the coordinates of your points using the list indexing syntax, you could also implement __getitem__ and __setitem__:
class P(object):
    def __init__(self, lon, lat):
        self.lon = lon
        self.lat = lat
    def __sub__(self, other):
        dist = ((other[0] - self[0])**2 + (other[1] - self[1])**2)**0.5
        return dist
    def __getitem__(self, key):
        if key == 0:
            return self.lon
        elif key == 1:
            return self.lat
        else:
            raise IndexError
    def __setitem__(self, key, value):
        if key == 0:
            self.lon = value
        elif key == 1:
            self.lat = value
        else:
            raise IndexError
(I realize that the above may not be the most elegant way to do it).
That way, your new class is a drop-in replacement for the lists you're currently using.
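It also gets you close to the numpy.diff goal, because arithmetic on object arrays is dispatched to the elements' own operators. A sketch, assuming the P class above (this uses dtype=object rather than a custom float32 subtype):
import numpy

track = numpy.array([P(-51.203018, -29.996149),
                     P(-51.203018, -29.99625),
                     P(-51.20266, -29.996229)], dtype=object)
print(numpy.diff(track))  # numpy calls P.__sub__ on each consecutive pair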
The Python data model documentation contains more information about the double-underscore methods you need to write in order to create your own types (look for the section on special method names).