 

Why does this floating-point calculation give different results on different machines?

Tags:

c#

I have a simple routine which calculates the aspect ratio from a floating point value. So for the value 1.77777779, the routine returns the string "16:9". I have tested this on my machine and it works fine.

The routine is given as:

    public string AspectRatioAsString(float f)
    {
        bool carryon = true;
        int index = 0;
        double roundedUpValue = 0;
        while (carryon)
        {
            index++;
            float upper = index * f;

            roundedUpValue = Math.Ceiling(upper);

            // Stop when index * f is within 0.1 of an integer
            // (or give up after 20 iterations).
            if (roundedUpValue - upper <= (double)0.1 || index > 20)
            {
                carryon = false;
            }
        }

        return roundedUpValue + ":" + index;
    }

Now on another machine, I get completely different results. So on my machine, 1.77777779 gives "16:9" but on another machine I get "38:21".

Asked Feb 26 '10 by JD.

People also ask

Why are floating point calculations so inaccurate?

Floating-point decimal values generally do not have an exact binary representation. This is a side effect of how the CPU represents floating point data. For this reason, you may experience some loss of precision, and some floating-point operations may produce unexpected results.

What is the main problem with floating point representation?

Accuracy in floating-point representation is governed by the number of significant bits, whereas range is limited by the exponent. Not all real numbers can be represented exactly in floating-point format.

Why are floating point calculations inaccurate in Python?

Floating-point calculations are inaccurate mainly because they approximate rationals that cannot be represented finitely in base 2, and more generally because they approximate numbers that may not be representable in finitely many digits in any base.

Do inaccuracies in floating point arithmetic cause problems?

Since real numbers cannot be represented exactly in a fixed amount of space, the result of a floating-point operation may not be fully representable with the required precision. This inaccuracy ends up as lost information.
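
To see this loss of precision concretely, here is a minimal C# illustration (not part of the original question):

using System;

class Program
{
    static void Main()
    {
        // 0.1 has no exact binary representation, so summing it
        // ten times does not produce exactly 1.0.
        double sum = 0.0;
        for (int i = 0; i < 10; i++)
        {
            sum += 0.1;
        }

        Console.WriteLine(sum == 1.0);        // False
        Console.WriteLine(sum.ToString("R")); // e.g. 0.9999999999999999
    }
}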


2 Answers

I wouldn't use floating point numbers unless I really had to. They're too prone to this sort of thing due to rounding errors.

Can you change the code to work in double precision? (decimal would be overkill). If you do this, does it give more consistent results?

As to why it's different on different machines, what are the differences between the two machines?

  • 32 bit vs 64 bit?
  • Windows 7 vs Vista vs XP?
  • Intel vs AMD processor? (thanks Oded)

Something like this might be the cause.
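
For what it's worth, here is a sketch of that suggestion: the same algorithm with all arithmetic done in double. The tolerance is still arbitrary, so this reduces rather than eliminates the sensitivity to per-machine differences:

public string AspectRatioAsString(double d)
{
    int index = 0;
    double upper = 0;
    double roundedUpValue = 0;
    while (true)
    {
        index++;
        upper = index * d;                   // no float/double mixing
        roundedUpValue = Math.Ceiling(upper);

        // Same stopping rule as the original routine.
        if (roundedUpValue - upper <= 0.1 || index > 20)
        {
            break;
        }
    }

    return roundedUpValue + ":" + index;
}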

Answered Oct 07 '22 by ChrisF


Here's an interesting bit of the C# specification, from section 4.1.6:

Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an “extended” or “long double” floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type. Only at excessive cost in performance can such hardware architectures be made to perform floating-point operations with less precision, and rather than require an implementation to forfeit both performance and precision, C# allows a higher precision type to be used for all floating-point operations. Other than delivering more precise results, this rarely has any measurable effects.

It is possible that this is one of the "measurable effects" thanks to that call to Ceiling. Taking the ceiling of a floating point number, as others have noted, magnifies a difference of 0.000000002 by nine orders of magnitude because it turns 15.99999999 into 16 and 16.00000001 into 17. Two numbers that differ slightly before the operation differ massively afterwards; the tiny difference might be accounted for by the fact that different machines can have more or less "extra precision" in their floating point operations.
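
You can see the magnification directly, using the numbers from the paragraph above:

Console.WriteLine(Math.Ceiling(15.99999999)); // 16
Console.WriteLine(Math.Ceiling(16.00000001)); // 17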

Some related issues:

  • C# XNA Visual Studio: Difference between "release" and "debug" modes?

  • CLR JIT optimizations violates causality?

To address your specific problem of how to compute an aspect ratio from a float: I'd possibly solve this in a completely different way. I'd make a table like this:

struct Ratio
{
    public int X { get; private set; }
    public int Y { get; private set; }
    public Ratio (int x, int y) : this()
    {
        this.X = x;
        this.Y = y;
    }
    public double AsDouble() { return (double)X / (double)Y; }
}

Ratio[] commonRatios = { 
   new Ratio(16, 9),
   new Ratio(4, 3), 
   // ... and so on, maybe the few hundred most common ratios here. 
   // since you are pinning results to be less than 20, there cannot possibly
   // be more than a few hundred.
};

and now your implementation is:

public string AspectRatioAsString(double ratio)
{
    var results = from commonRatio in commonRatios
                  select new {
                      Ratio = commonRatio,
                      Diff = Math.Abs(ratio - commonRatio.AsDouble())};

    // Min(x => x.Diff) would return only the smallest difference itself,
    // so order by the difference and take the first whole result instead.
    var smallestResult = results.OrderBy(x => x.Diff).First();

    return String.Format("{0}:{1}", smallestResult.Ratio.X, smallestResult.Ratio.Y);
}

Notice how the code now reads very much like the operation you are trying to perform: from this list of common ratios, choose the one where the difference between the given ratio and the common ratio is minimized.
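
Assuming the (abbreviated) table above contains the target ratio, a call such as:

Console.WriteLine(AspectRatioAsString(1.77777779)); // "16:9"

now picks 16:9 because it is the closest entry, regardless of small per-machine differences in the input value.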

Answered Oct 07 '22 by Eric Lippert