I have the lat/long value of a small area in Melbourne, -37.803134, 145.132377, and also a flat image of that area that I exported from OpenStreetMap (Osmarender image). Image width: 1018, height: 916.
I would like to be able to convert, using C++, the Lat/Long to an X,Y coordinate where the point would reflect the location.
I have tried various formulas that I found on the web, like the one given below, but nothing has helped.
var y = ((-1 * lat) + 90) * (MAP_HEIGHT / 180);
var x = (lon + 180) * (MAP_WIDTH / 360);
It would be of great help if anyone could give me a clear explanation of how to do this. Any code would be much appreciated.
You need more information than just a single lat/lon pair to be able to do this.
At this stage, the information you have provided is missing two things: which pixel of the image your lat/lon pair corresponds to, and the lat/lon of at least one other reference point (or, equivalently, how much latitude and longitude the image spans).
I'm also going to assume that your image is aligned north/south - for example, it doesn't have north pointing towards the top left corner. That would tend to complicate things.
The easiest approach is to figure out exactly what lat/lon coordinates correspond to the (0, 0) pixel and the (1017, 915) pixel. Then you can figure out the pixel corresponding to a given lat/lon coordinate through interpolation.
To briefly outline that process, imagine that your (-37.803134, 145.132377) lat/lon corresponds to your (0, 0) pixel, and that you've discovered that your (1017, 915) pixel corresponds to the lat/lon (-37.798917, 145.138535). Assuming for this example that pixel (0, 0) is in the bottom-left corner, this means north is up in the image.
Then, if you are interested in the target coordinate (-37.801465, 145.134984), you can figure out the corresponding number of pixels up the image it is as follows:
pixelY = ((targetLat - minLat) / (maxLat - minLat)) * (maxYPixel - minYPixel)
= ((-37.801465 - -37.803134) / (-37.798917 - -37.803134)) * (915 - 0)
= 362.138
That is, the corresponding pixel is 362 pixels from the bottom of the image. You can then do exactly the same for the horizontal pixel placement, but using longitudes and X pixels instead.
The part ((targetLat - minLat) / (maxLat - minLat)) works out how far you are between the two reference coordinates: it gives 0 to indicate that you are at the first one, 1 to indicate the second one, and numbers in between to indicate locations in between. For example, it would produce 0.25 to indicate that you are 25% of the way north between the two reference coordinates. The last bit converts that into the equivalent pixels.
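As a minimal C++ sketch of that interpolation (the reference values are just the ones assumed in this example, with pixel (0, 0) in the bottom-left corner, and the function name is my own):

double latLonToPixelY(double targetLat,
                      double minLat,      // latitude at pixel row 0 (bottom of the image)
                      double maxLat,      // latitude at the top pixel row
                      int maxYPixel)      // highest pixel index, e.g. 915 for a 916-pixel-high image
{
    // Fraction of the way from the first reference latitude to the second (0 to 1).
    double t = (targetLat - minLat) / (maxLat - minLat);
    return t * maxYPixel;                 // pixels up from the bottom of the image
}

Calling latLonToPixelY(-37.801465, -37.803134, -37.798917, 915) gives roughly 362.14, matching the worked example above; an analogous function with longitudes and the maximum X pixel gives the horizontal placement.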
HTH!
EDIT Ok, based on your comment I can be a little more specific. Given that you seem to be using the top left corner as your primary reference point, I'll use the following definitions:
minLat = -37.803134
maxLat = -37.806232
MAP_HEIGHT = 916
Then, if we use an example coordinate of (-37.804465, 145.134984), the Y coordinate of the corresponding pixel, relative to the top-left corner, is:
pixelY = ((targetLat - minLat) / (maxLat - minLat)) * (MAP_HEIGHT - 1)
= ((-37.804465 - -37.803134) / (-37.806232 - -37.803134)) * 915
= 393.11
Therefore, the corresponding pixel is 393 pixels down from the top. I'll let you work out the horizontal equivalent for yourself - it's basically the same. NOTE: the -1 with the MAP_HEIGHT is because if you start at zero, the maximum pixel number is 915, not 916.
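In C++ that calculation looks something like this (just a sketch of the worked example above; the variable names are mine):

double targetLat = -37.804465;
double minLat    = -37.803134;   // latitude at the top edge (pixel row 0)
double maxLat    = -37.806232;   // latitude at the bottom edge (pixel row 915)
int    mapHeight = 916;
double pixelY = (targetLat - minLat) / (maxLat - minLat) * (mapHeight - 1);
// pixelY is about 393.11, i.e. 393 pixels down from the top.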
EDIT: One thing I'd like to take the opportunity to point out is that this is an approximation. In reality, there isn't a simple linear relationship between latitude and longitude coordinates and other forms of Cartesian coordinates for a number of reasons including the projections that are used while making maps, and the fact that the Earth is not a perfect sphere. In small areas this approximation is close enough as to make no significant difference, but on larger scales discrepancies may become evident. Depending on your needs, YMMV. (My thanks to uray, whose answer below reminded me that this is the case).
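For what it's worth, OSM slippy-map tiles are rendered in a Mercator-style projection (I'm not certain what projection your particular Osmarender export uses, so treat that as an assumption). If you need to account for the projection rather than interpolating linearly, the standard Web Mercator mapping looks like this:

#include <cmath>

// Maps latitude/longitude in degrees to a point in the unit square
// (x: 0..1 west to east, y: 0..1 north to south), which you can then
// scale by the image width and height.
void latLonToMercatorUnit(double latDeg, double lonDeg, double& x, double& y)
{
    const double pi = 3.14159265358979323846;
    double latRad = latDeg * pi / 180.0;
    x = (lonDeg + 180.0) / 360.0;
    y = (1.0 - std::log(std::tan(latRad) + 1.0 / std::cos(latRad)) / pi) / 2.0;
}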
If you're looking for an accurate conversion of geodetic coordinates (lat, lon) into your defined Cartesian coordinates (x, y metres from a reference point), you can do it with my code snippet below. The function accepts geodetic coordinates in radians and outputs the result as x, y offsets.
input: a reference point (refLat, refLon) and a target point (lat, lon), both in radians
code:
#include <cmath>

#define GD_semiMajorAxis 6378137.000000
#define GD_TranMercB     6356752.314245
#define GD_geocentF      0.003352810664

// Computes the (x, y) offset in metres of (lat, lon) from (refLat, refLon)
// using Vincenty's inverse formula on the WGS84 ellipsoid.
// All angles are in radians; xOffset is the east offset, yOffset the north offset.
void geodeticOffsetInv( double refLat, double refLon,
                        double lat, double lon,
                        double& xOffset, double& yOffset )
{
    double a = GD_semiMajorAxis;
    double b = GD_TranMercB;
    double f = GD_geocentF;

    double L  = lon - refLon;                // difference in longitude
    double U1 = atan((1-f) * tan(refLat));   // reduced latitudes
    double U2 = atan((1-f) * tan(lat));
    double sinU1 = sin(U1);
    double cosU1 = cos(U1);
    double sinU2 = sin(U2);
    double cosU2 = cos(U2);

    double lambda = L;
    double lambdaP;
    double sinSigma;
    double sigma;
    double cosSigma;
    double cosSqAlpha;
    double cos2SigmaM;
    double sinLambda;
    double cosLambda;
    double sinAlpha;
    int iterLimit = 100;

    // Iterate until the change in lambda is negligible or the iteration limit is hit.
    do {
        sinLambda = sin(lambda);
        cosLambda = cos(lambda);
        sinSigma = sqrt((cosU2*sinLambda) * (cosU2*sinLambda) +
                        (cosU1*sinU2 - sinU1*cosU2*cosLambda) *
                        (cosU1*sinU2 - sinU1*cosU2*cosLambda));
        if (sinSigma == 0)
        {
            xOffset = 0.0;
            yOffset = 0.0;
            return;                                  // co-incident points
        }
        cosSigma = sinU1*sinU2 + cosU1*cosU2*cosLambda;
        sigma = atan2(sinSigma, cosSigma);
        sinAlpha = cosU1 * cosU2 * sinLambda / sinSigma;
        cosSqAlpha = 1 - sinAlpha*sinAlpha;
        cos2SigmaM = cosSigma - 2*sinU1*sinU2/cosSqAlpha;
        if (cos2SigmaM != cos2SigmaM)                // isNaN
        {
            cos2SigmaM = 0;                          // equatorial line: cosSqAlpha=0 (§6)
        }
        double C = f/16*cosSqAlpha*(4 + f*(4 - 3*cosSqAlpha));
        lambdaP = lambda;
        lambda = L + (1-C) * f * sinAlpha *
                 (sigma + C*sinSigma*(cos2SigmaM + C*cosSigma*(-1 + 2*cos2SigmaM*cos2SigmaM)));
    } while (fabs(lambda - lambdaP) > 1e-12 && --iterLimit > 0);

    if (iterLimit == 0)
    {
        xOffset = 0.0;
        yOffset = 0.0;
        return;                                      // formula failed to converge
    }

    double uSq = cosSqAlpha * (a*a - b*b) / (b*b);
    double A = 1 + uSq/16384*(4096 + uSq*(-768 + uSq*(320 - 175*uSq)));
    double B = uSq/1024 * (256 + uSq*(-128 + uSq*(74 - 47*uSq)));
    double deltaSigma = B*sinSigma*(cos2SigmaM + B/4*(cosSigma*(-1 + 2*cos2SigmaM*cos2SigmaM) -
                        B/6*cos2SigmaM*(-3 + 4*sinSigma*sinSigma)*(-3 + 4*cos2SigmaM*cos2SigmaM)));
    double s = b*A*(sigma - deltaSigma);             // geodesic distance in metres

    // Initial bearing from the reference point to the target point.
    double bearing = atan2(cosU2*sinLambda, cosU1*sinU2 - sinU1*cosU2*cosLambda);
    xOffset = sin(bearing)*s;                        // metres east of the reference point
    yOffset = cos(bearing)*s;                        // metres north of the reference point
}
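A quick usage sketch (not part of the original answer; the coordinates are just the reference point from the question plus an arbitrary nearby point, converted from degrees to radians first):

#include <cstdio>

int main()
{
    const double degToRad = 3.14159265358979323846 / 180.0;

    double refLat = -37.803134 * degToRad, refLon = 145.132377 * degToRad;
    double lat    = -37.801465 * degToRad, lon    = 145.134984 * degToRad;

    double xOffset = 0.0, yOffset = 0.0;
    geodeticOffsetInv(refLat, refLon, lat, lon, xOffset, yOffset);

    // xOffset/yOffset are metres east/north of the reference point; divide by
    // the map's metres-per-pixel to turn them into pixel offsets.
    printf("east: %.2f m, north: %.2f m\n", xOffset, yOffset);
    return 0;
}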