I have a legacy map viewer application using WinForms. It is sloooooow. (The speed used to be acceptable, but then Google Maps and Google Earth came along and users got spoiled. Now I am permitted to make it faster :)
After doing all the obvious speed improvements (caching, parallel execution, not drawing what does not need to be drawn, etc.), my profiler shows me that the real bottleneck is the coordinate transformation when converting points from map space to screen space. Normally the conversion code looks like this:
public Point MapToScreen(PointF input)
{
    // Note that North is negative!
    var result = new Point(
        (int)((input.X - this.currentView.X) * this.Scale),
        (int)((input.Y - this.currentView.Y) * this.Scale));
    return result;
}
The real implementation is trickier. Latitudes/longitudes are represented as integers. To avoid losing precision, they are multiplied up by 2^20 (~1 million). This is how a coordinate is represented:
public struct Position
{
    public const int PrecisionCompensationPower = 20;
    public const int PrecisionCompensationScale = 1048576; // 2^20
    public readonly int LatitudeInt;  // North is negative!
    public readonly int LongitudeInt;
}
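For illustration only (this helper is not in the original code), building one of these from plain degree values could look like the sketch below. It assumes Position has a constructor taking the two pre-scaled integers, and that the caller already follows the "North is negative" sign convention.
// Illustrative helper (assumes a Position(latInt, lonInt) constructor exists and the
// latitude value already uses the "North is negative" convention).
public static Position FromDegrees(double latitudeDegrees, double longitudeDegrees)
{
    return new Position(
        (int)(latitudeDegrees * Position.PrecisionCompensationScale),   // lat * 2^20
        (int)(longitudeDegrees * Position.PrecisionCompensationScale)); // lon * 2^20
}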
It is important that the possible scale factors are also explicitly bound to powers of 2. This allows us to replace the multiplication with a bit shift. So the real algorithm looks like this:
public Point MapToScreen(Position input)
{
    Point result = new Point();
    result.X = (input.LongitudeInt - this.UpperLeftPosition.LongitudeInt)
               >> (Position.PrecisionCompensationPower - this.ZoomLevel);
    result.Y = (input.LatitudeInt - this.UpperLeftPosition.LatitudeInt)
               >> (Position.PrecisionCompensationPower - this.ZoomLevel);
    return result;
}
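As a quick illustration (this check is not part of the original code), the right shift really is just a multiplication by the effective scale 2^ZoomLevel / 2^20, as long as the delta is non-negative:
// Quick check (illustrative only): shifting right by (PrecisionCompensationPower - zoomLevel)
// equals multiplying by the effective scale 2^zoomLevel / 2^20, for non-negative deltas.
int zoomLevel = 4;
int delta = 5 * Position.PrecisionCompensationScale;   // 5 degrees in fixed-point
int viaShift = delta >> (Position.PrecisionCompensationPower - zoomLevel);
int viaScale = (int)(delta * ((double)(1 << zoomLevel) / Position.PrecisionCompensationScale));
System.Diagnostics.Debug.Assert(viaShift == viaScale); // both are 80 here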
(UpperLeftPosition represents the upper-left corner of the screen in map space.) I am now thinking of offloading this calculation to the GPU. Can anyone show me an example of how to do that?
We use .NET 4.0, but the code should preferably run on Windows XP, too. Furthermore, we cannot use libraries licensed under the GPL.
I suggest you look at using OpenCL and Cloo to do this - take a look at the vector add example and then change it to map the values by using two input ComputeBuffers (one each for the LatitudeInt and LongitudeInt of every point) and two output ComputeBuffers. I suspect the OpenCL kernel would look something like this:
__kernel void CoordTrans(__global const int *lat,
                         __global const int *lon,
                         const int ulpLat,   // UpperLeftPosition.LatitudeInt
                         const int ulpLon,   // UpperLeftPosition.LongitudeInt
                         const int zl,       // ZoomLevel
                         __global int *outx,
                         __global int *outy)
{
    int i = get_global_id(0);   // one work-item per coordinate
    const int pcp = 20;         // Position.PrecisionCompensationPower

    outx[i] = (lon[i] - ulpLon) >> (pcp - zl);
    outy[i] = (lat[i] - ulpLat) >> (pcp - zl);
}
but you would do more than one coordinate transform per core. I need to rush off, so I recommend you read up on OpenCL before doing this.
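For the host side, a rough Cloo sketch could look like the one below. It is untested and based on the vector-add sample; latitudes, longitudes, kernelSource, upperLeft and zoomLevel are placeholder names you would supply yourself, and the buffer flags and device selection will need adapting to your setup.
// Rough, untested host-side sketch (requires: using System; using Cloo;).
// latitudes/longitudes are int[] in the fixed-point format from the question;
// kernelSource is a string containing the kernel above.
int n = latitudes.Length;

var platform = ComputePlatform.Platforms[0];
var context = new ComputeContext(ComputeDeviceTypes.Gpu,
    new ComputeContextPropertyList(platform), null, IntPtr.Zero);
var queue = new ComputeCommandQueue(context, context.Devices[0], ComputeCommandQueueFlags.None);

var program = new ComputeProgram(context, kernelSource);
program.Build(null, null, null, IntPtr.Zero);
var kernel = program.CreateKernel("CoordTrans");

// Two input buffers (lat/lon) and two output buffers (screen x/y).
var latBuf = new ComputeBuffer<int>(context,
    ComputeMemoryFlags.ReadOnly | ComputeMemoryFlags.CopyHostPointer, latitudes);
var lonBuf = new ComputeBuffer<int>(context,
    ComputeMemoryFlags.ReadOnly | ComputeMemoryFlags.CopyHostPointer, longitudes);
var outXBuf = new ComputeBuffer<int>(context, ComputeMemoryFlags.WriteOnly, n);
var outYBuf = new ComputeBuffer<int>(context, ComputeMemoryFlags.WriteOnly, n);

kernel.SetMemoryArgument(0, latBuf);
kernel.SetMemoryArgument(1, lonBuf);
kernel.SetValueArgument(2, upperLeft.LatitudeInt);
kernel.SetValueArgument(3, upperLeft.LongitudeInt);
kernel.SetValueArgument(4, zoomLevel);
kernel.SetMemoryArgument(5, outXBuf);
kernel.SetMemoryArgument(6, outYBuf);

queue.Execute(kernel, null, new long[] { n }, null, null); // one work-item per coordinate

var xs = new int[n];
var ys = new int[n];
queue.ReadFromBuffer(outXBuf, ref xs, true, null); // blocking reads
queue.ReadFromBuffer(outYBuf, ref ys, true, null);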
Also, if the number of coordinates is modest (fewer than roughly 100,000 to 1,000,000), the non-GPU solution will likely be faster, because of the cost of moving the data to and from the GPU.