Floating point accuracy with different languages

I'm currently doing distance calculations between coordinates and have been getting slightly different results depending on the language used.

Part of the calculation is taking the cosine of a given angle in radians. I get the following results:

// cos(0.8941658257446736)

// 0.6261694290123146 node
// 0.6261694290123146 rust
// 0.6261694290123148 go
// 0.6261694290123148 python
// 0.6261694290123148 swift
// 0.6261694290123146 c++
// 0.6261694290123146 java
// 0.6261694290123147 c

I would like to understand why. If you look past 16 decimal places, C is the only "correct" answer in terms of rounding. What surprises me is Python having a different result.

This small difference is currently being amplified: over thousands of positions it adds up to a not-insignificant distance.

I'm not really sure how this is a duplicate. I am asking for a holistic answer rather than a language-specific one; I don't have a computer science degree.


UPDATE

I accept that this may be too broad a question; I was simply curious as to why, since my background isn't CS. I appreciate the links to the blog posts in the comments.


UPDATE 2

This question arose from porting a service from Node.js to Go. Go is even stranger: I am now unable to run tests because the summation of the distances varies when summing multiple values.

Given a list of coordinates, calculating the distances and adding them together, I get different results. I'm not asking a new question here, but it seems crazy that Go will produce different results.

9605.795975874069
9605.795975874067
9605.79597587407
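Spreads like the three sums above are consistent with floating-point addition not being associative: summing the same values in a different order can change the rounded result. A minimal demonstration, using deliberately extreme values unrelated to the original data:

```go
package main

import "fmt"

func main() {
	// Floating-point addition is not associative: grouping the same
	// three values differently changes the rounded result, because
	// 1.0 is smaller than the spacing between float64s near 1e16.
	a, b, c := 1e16, -1e16, 1.0
	fmt.Println((a + b) + c) // 1
	fmt.Println(a + (b + c)) // 0
}
```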

For completeness, here is the Distance calculation I am using:

func Distance(pointA Coordinate, pointB Coordinate) float64 {
    // Haversine formula: great-circle distance in meters.
    const R = 6371000 // Earth radius in meters
    phi1 := pointA.Lat * math.Pi / 180
    phi2 := pointB.Lat * math.Pi / 180
    lambda1 := pointA.Lon * math.Pi / 180
    lambda2 := pointB.Lon * math.Pi / 180

    deltaPhi := phi2 - phi1
    deltaLambda := lambda2 - lambda1
    a := math.Sin(deltaPhi/2)*math.Sin(deltaPhi/2) + math.Cos(phi1)*math.Cos(phi2)*math.Sin(deltaLambda/2)*math.Sin(deltaLambda/2)
    c := 2 * math.Atan2(math.Sqrt(a), math.Sqrt(1-a))

    d := R * c
    return d
}
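If the varying totals come from accumulation order, compensated (Kahan) summation makes the sum far less sensitive to the order in which values are added. A minimal sketch, assuming the distances are collected into a []float64 first (KahanSum is a hypothetical helper, not part of the original service):

```go
package main

import "fmt"

// KahanSum adds the values while carrying a compensation term that
// recovers the low-order bits lost in each individual addition.
func KahanSum(xs []float64) float64 {
	var sum, comp float64
	for _, x := range xs {
		y := x - comp        // re-inject previously lost low-order bits
		t := sum + y         // big + small: y's low bits are lost here...
		comp = (t - sum) - y // ...and measured here for the next round
		sum = t
	}
	return sum
}

func main() {
	// Extreme values chosen so the error is visible: the exact sum is 4.
	xs := []float64{1e16, 1, 1, 1, 1, -1e16}
	naive := 0.0
	for _, x := range xs {
		naive += x
	}
	fmt.Println(naive)        // 0 (every +1 is rounded away near 1e16)
	fmt.Println(KahanSum(xs)) // 4
}
```

The naive loop loses each +1 because 1 is below the spacing between float64s near 1e16; the compensation term keeps track of exactly what was rounded off.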
asked Oct 16 '19 by amwill04

2 Answers

Generally, the representation of floating-point numbers is defined by the IEEE 754 standard, and my assumption is that all (major) programming languages implement this standard.

Precision and rounding are known issues and may sometimes lead to unexpected results.

Aspects that may influence the result of a calculation, depending on the programming language or the math library used:

  • different calculation methods (in your case: the cosine function might be implemented by numerical approximation with different approaches)
  • different rounding strategies during calculation or for the final output
answered Nov 15 '22 by Gerd


IEEE-754 only requires the basic operations (+, -, *, /) and sqrt to be correctly rounded, i.e. the error must be no more than 0.5 ULP. Transcendental functions like sin, cos, exp... are very complex to implement, so for those the standard only recommends correct rounding. Different implementations may use different algorithms to calculate the result, depending on their space and time requirements. Therefore variations like the ones you observed are completely normal.

There is no standard that requires faithful rounding of transcendental functions. IEEE-754 (2008) recommends, but does not require, that these functions be correctly rounded.

Standard for the sine of very large numbers

See also

  • Math precision requirements of C and C++ standard
  • If two languages follow IEEE 754, will calculations in both languages result in the same answers?
  • Does any floating point-intensive code produce bit-exact results in any x86-based architecture?
answered Nov 14 '22 by phuclv