To what accuracy are the local variables displayed in the Embarcadero RAD Studio XE2 debugger? Apparently 1 is not equal to 1

Take the following record:

type
  TVector2D = record
  public
    class operator Equal(const V1, V2: TVector2D): Boolean;
    class operator Multiply(const D: Accuracy; const V: TVector2D): TVector2D;
    class operator Divide(const V: TVector2D; const D: Accuracy): TVector2D;
    class function New(const x, y: Accuracy): TVector2D; static;
    function Magnitude: Accuracy;
    function Normalised: TVector2D;
  public
    x, y: Accuracy;
  end;

With the methods defined as:

class operator TVector2D.Equal(const V1, V2: TVector2D): Boolean;
begin
  Result := (V1.x = V2.x) and (V1.y = V2.y);
end;

class operator TVector2D.Multiply(const D: Accuracy; const V: TVector2D): TVector2D;
begin
  Result.x := D*V.x;
  Result.y := D*V.y;
end;

class operator TVector2D.Divide(const V: TVector2D; const D: Accuracy): TVector2D;
begin
  Result := (1.0/D)*V;
end;

class function TVector2D.New(const x, y: Accuracy): TVector2D;
begin
  Result.x := x;
  Result.y := y;
end;

function TVector2D.Magnitude: Accuracy;
begin
  Result := Sqrt(x*x + y*y);
end;

function TVector2D.Normalised: TVector2D;
begin
  Result := Self/Magnitude;
end;

and a constant:

  const
    jHat2D : TVector2D = (x: 0; y: 1);

I would expect the Boolean value of (jHat2D = TVector2D.New(0,0.707).Normalised) to be True. Yet it comes out as False.

In the debugger, TVector2D.New(0, 0.707).Normalised.y shows as exactly 1:

(screenshot: http://i.imgur.com/9XhQqgD.png)

It cannot be the case that this value is exactly 1; otherwise (jHat2D = TVector2D.New(0, 0.707).Normalised) would be True.
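(A minimal console check of this reasoning, assuming the declarations above; the WriteLn lines are illustrative, not from the original post:)

var
  v: TVector2D;
begin
  v := TVector2D.New(0, 0.707).Normalised;
  WriteLn(jHat2D = v);   // FALSE
  WriteLn(v.y = 1.0);    // FALSE, so v.y cannot be exactly 1
end.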

Any ideas?

Edit

Accuracy is a type defined as: Accuracy = Double;

asked Apr 30 '15 by Trojanian

1 Answer

Assuming that Accuracy is a synonym for the Double type, this is a bug in the debugger's visualization of floating point values. Due to the inherent problems with the internal representation of floating point values, v1.y and v2.y have very slightly different values, though both approximate to 1.

Add watches for v1.y and v2.y, and ensure that these watches are configured to display as "Floating Point" values with Digits set to 18 for maximum detail.

At your breakpoint you will see that:

v1.y      = 1
v2.y      = 0.999999999999999889

(whosrdaddy provided the above short version in the comments on the question, but I am retaining the long form of my investigation - see "Long Version of Original Investigation" below - as it may prove useful in other, similar circumstances as well as being of potential interest.)

Conclusion

Whilst the debugger visualizations are, strictly speaking, incorrect (or at best misleading), they are nevertheless very nearly correct. :)

The question then is whether you require strict equality or equality to within a certain tolerance. If the latter, you can use SameValue() from the Math unit with an EPSILON suited to the degree of accuracy you require.
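For example, a tolerance-based Equal operator might look like this (a minimal sketch; the EPSILON value is an arbitrary illustration, not a recommendation):

uses
  Math;

const
  EPSILON = 1E-12;  // tolerance chosen for illustration only

class operator TVector2D.Equal(const V1, V2: TVector2D): Boolean;
begin
  // SameValue (Math unit) treats values within +/- EPSILON as equal
  Result := SameValue(V1.x, V2.x, EPSILON) and
            SameValue(V1.y, V2.y, EPSILON);
end;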

Otherwise you must accept that, when debugging your code, you cannot rely on the debugger to display values to the degree of accuracy that the code itself relies on.

Option: Customise the Debug Visualization Itself

Alternatively you may wish to investigate creating a custom debug visualisation for your TVector2D type to represent your x/y values to the accuracy employed in your code.

For such a visualization, instead of FloatToStr() use Format() with a %f format specifier and a suitable number of decimal places. For example, the call below yields the result obtained by watching the variable as described above:

Format('%.18f', [v2.y]);

// Yields  0.999999999999999889
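As a sketch of what such a visualizer might display (ToString here is a hypothetical helper, not part of the original record; Format() requires SysUtils):

function TVector2D.ToString: string;
begin
  // 18 decimal places: enough to expose the difference shown above
  Result := Format('(x: %.18f, y: %.18f)', [x, y]);
end;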

Long Version of Original Investigation

I modified the Equal operator to allow me to inspect the internal representation of the two values v1.y and v2.y:

type
  PAccuracy = ^Accuracy;

class operator TVector2D.Equal(const V1, V2: TVector2D): Boolean;
var
  A, B: Boolean;
  ay, by: PAccuracy;
begin
  ay := @V1.y;
  by := @V2.y;

  A := (V1.x = V2.x);
  B := (V1.y = V2.y);

  Result := A and B;
end;

By setting watches in the debugger to provide a Memory Dump of ay^ and by^ we see that the two values are represented internally very differently:

v1.y   : $3f f0 00 00 00 00 00 00
v2.y   : $3f ef ff ff ff ff ff ff

NOTE: Byte order is reversed in the watch value results, as compared to the actual values above, due to the Little Endian nature of Intel.
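To see this without the debugger, a small helper (hypothetical, not part of the original investigation) can print the raw bytes of a Double in memory order:

uses
  SysUtils;

function DoubleBytes(const D: Double): string;
var
  P: PByte;
  i: Integer;
begin
  // Walk the 8 bytes as laid out in memory (least significant first on Intel)
  Result := '';
  P := PByte(@D);
  for i := 0 to 7 do
    Result := Result + IntToHex(P[i], 2) + ' ';
end;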

We can then test the hypothesis by passing Doubles with these internal representations into FloatToStr():

var
  a: Double;
  b: Double;
  ai: Int64 absolute a;
  bi: Int64 absolute b;
  s: string;

begin
  ai := $3ff0000000000000;
  bi := $3fefffffffffffff;

  s := FloatToStr(a) + ' = ' + FloatToStr(b);

  // Yields s = '1 = 1'
end;

We can conclude therefore that the evaluation of B is correct. v1.y and v2.y are different. The representation of the Double values by the debugger is incorrect (or at best misleading).

By changing the expression for B to use SameValue() we can determine the deviation between the values involved:

uses
  Math;

const
  EPSILON = 0.1;

B := SameValue(V1.y, V2.y, EPSILON);

By progressively reducing the value of EPSILON we find that v1.y and v2.y differ by an amount less than 0.000000000000001 (1E-15) but not less than 0.0000000000000001 (1E-16), since:

EPSILON = 0.000000000000001;   // Yields B = TRUE
EPSILON = 0.0000000000000001;  // Yields B = FALSE
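In fact, the two bit patterns shown earlier differ by exactly 1 in their Int64 representation, i.e. they are adjacent Doubles one ulp apart; the gap just below 1.0 for a Double is 2^-53, approximately 1.11E-16, which is consistent with the EPSILON bisection above. A quick check (a sketch, reusing the absolute overlay from earlier):

uses
  SysUtils;

var
  a, b: Double;
  ai: Int64 absolute a;
  bi: Int64 absolute b;
begin
  ai := $3FF0000000000000;      // 1.0
  bi := $3FEFFFFFFFFFFFFF;      // the adjacent Double just below 1.0
  WriteLn(ai - bi);             // 1: the bit patterns are one ulp apart
  WriteLn(FloatToStr(a - b));   // 1.11022302462516E-16, i.e. 2^-53
end.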
answered Sep 28 '22 by Deltics