(This is not exactly the problem that I have, but it's isomorphic, and I think that this explanation will be easiest for others to understand.)
Suppose that I have a set of points in an n-dimensional space. Using 3 dimensions for example:
A : [1,2,3]
B : [4,5,6]
C : [7,8,9]
I also have a set of vectors that describe possible movements in this space:
V1 : [+1,0,-1]
V2 : [+2,0,0]
Now, given a point dest, I need to find a starting point p and a set of vectors moves that will bring me to dest in the most efficient manner. Efficiency is defined as "fewest moves", not necessarily "least linear distance": it's permissible to select a p that's further from dest than other candidates if the move set is such that you can get there in fewer moves. The vectors in moves must be drawn from the available vectors; you can't use the same vector more than once unless it appears more than once in the input set.
My input contains ~100 starting points and maybe ~10 vectors, and my number of dimensions is ~20. The starting points and available vectors will be fixed for the lifetime of the app, but I'll be finding paths for many, many different dest points. I want to optimize for speed, not memory. It's acceptable for the algorithm to fail (to find no possible paths to dest).
I adopted a solution very similar to the one marked below as "accepted". I iterate over all points and vectors and build a list of all reachable points with the routes to reach them. I convert this list into a hash of <dest, p+vectors>, selecting the shortest set of vectors for each destination point. (There is also a little bit of optimization for hash size, which isn't relevant here.) Subsequent dest lookups happen in constant time.
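Roughly, the precompute step looks like the following sketch (the names are illustrative, not from my actual code, and points/vectors are assumed to be tuples of integers; the hash-size optimization is omitted):

```python
from itertools import combinations

def build_reachability_table(starting_points, vectors):
    """Map every reachable point to a (start, moves) pair that uses the
    fewest moves. Built once up front; each dest lookup is then O(1)."""
    table = {}
    n = len(vectors)
    # Enumerate subset sizes from smallest to largest so the first route
    # recorded for any destination is already a shortest one.
    for size in range(n + 1):
        for start in starting_points:
            for moves in combinations(range(n), size):
                dest = tuple(
                    c + sum(vectors[i][d] for i in moves)
                    for d, c in enumerate(start)
                )
                table.setdefault(dest, (start, moves))
    return table

# The 3-dimensional example data from the question:
points = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
vectors = [(1, 0, -1), (2, 0, 0)]
table = build_reachability_table(points, vectors)
print(table.get((4, 2, 2)))   # ((1, 2, 3), (0, 1)): A moved by V1 and V2
```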
Actually, considering that you have around 10 vectors, you can, for a given dest point, calculate just the ~1024 "targets" reachable via subsets of the vectors -- i.e., every reachable offset, along with the set of moves that produces it. That might be "slow" or "fast" depending on context (it would be absurdly fast if implemented on a parallel computing device like a GPU).
Once you have all of those subset sums, finding routes is much quicker: subtract each sum from dest to get a candidate starting point, then pick the candidate that is actually one of your starting points and reaches dest in the fewest moves.
(thanks to Strilanc)
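For illustration, a rough sketch of that per-query idea (assuming tuple-of-int points and vectors; the names are made up): enumerate the subsets smallest first, so the first candidate found in the starting set is an optimal route.

```python
from itertools import combinations

def find_route(dest, starting_points, vectors):
    """Enumerate all 2^k vector subsets (about 1024 for k = 10), subtract
    each subset sum from dest, and return the first candidate that is an
    actual starting point, together with the moves used."""
    start_set = set(starting_points)
    k = len(vectors)
    for size in range(k + 1):                       # fewest moves first
        for moves in combinations(range(k), size):
            offset = [sum(vectors[i][d] for i in moves) for d in range(len(dest))]
            candidate = tuple(c - o for c, o in zip(dest, offset))
            if candidate in start_set:
                return candidate, moves
    return None  # dest is not reachable from any starting point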
I believe you would be able to make a generalized application of the A* (aka A-star) pathfinding algorithm. There is no reason why it can't be done in N-dimensional space. It guarantees an optimal path, provided you can describe the cost of each move.
http://en.wikipedia.org/wiki/A*_search_algorithm
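As an illustrative sketch of how that generalization might look (not a reference implementation): the search state could be a (point, vectors-already-used) pair, every move costs 1, and the remaining straight-line distance divided by the longest vector is an admissible heuristic.

```python
import heapq
import math

def astar_route(dest, starting_points, vectors):
    """A* over states (point, bitmask of vectors already used).
    Every move costs 1; the heuristic never overestimates because no
    single move can cover more than the length of the longest vector."""
    dest = tuple(dest)
    max_step = max(math.sqrt(sum(c * c for c in v)) for v in vectors)

    def h(point):
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(point, dest)))
        return math.ceil(dist / max_step) if max_step > 0 else 0

    # Seed the frontier with every starting point and an empty move set.
    frontier = [(h(tuple(p)), 0, tuple(p), 0) for p in starting_points]
    heapq.heapify(frontier)
    best_cost = {}
    while frontier:
        f, g, point, used = heapq.heappop(frontier)
        if point == dest:
            return g, used            # moves taken, bitmask of vectors used
        if best_cost.get((point, used), math.inf) < g:
            continue                  # stale queue entry
        for i, v in enumerate(vectors):
            if used & (1 << i):
                continue              # each vector may be used only once
            nxt = tuple(a + b for a, b in zip(point, v))
            state = (nxt, used | (1 << i))
            if g + 1 < best_cost.get(state, math.inf):
                best_cost[state] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, state[1]))
    return None                       # dest cannot be reached
```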
So you want to find a subset of your set of vectors such that the subset sums to a given value. In 1 dimension this is called the subset sum problem, and is NP-Complete.
Luckily you only have ~10 vectors, so your problem size is actually quite small and tractable. Start by just trying all 2^10 move combinations for each starting point and picking the best one. Then work from there looking for simple optimizations.
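A minimal sketch of that brute-force baseline (assuming tuple-of-int points and vectors; the names are illustrative):

```python
from itertools import combinations

def apply_moves(start, vectors, moves):
    """Point reached from start after applying the chosen vector indices."""
    return tuple(c + sum(vectors[i][d] for i in moves)
                 for d, c in enumerate(start))

def brute_force(dest, starting_points, vectors):
    """Try every (starting point, vector subset) pair, smallest subsets
    first, and return the first pair that lands exactly on dest."""
    k = len(vectors)
    for size in range(k + 1):
        for start in starting_points:
            for moves in combinations(range(k), size):
                if apply_moves(start, vectors, moves) == dest:
                    return start, moves   # first hit uses the fewest moves
    return None  # dest is unreachable
```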
Some easy optimizations that might work:
Given that you have the starting points and a fixed set of vectors, can you calculate the list of all reachable destinations and then just look up a given destination?