
Rounding error with TDateTime on iOS

When calculating a 32-bit ID from a timestamp (TDateTime), I get a strange error: in certain situations the value comes out differently on different processors.

The fTimeStamp field is read from a Double field in a SQLite database. The code below calculates a 32-bit ID (lIntStamp) from fTimeStamp, but in some (rare) situations the value differs between computers even though the source database file is exactly the same (i.e. the Double stored in the file is the same).

...
fTimeStamp: TDateTime
...

var
  lIntStamp: Int64;
begin
  lIntStamp := Round(fTimeStamp * 864000); //864000 = 24*60*60*10 -> steps of 1/10th of a second
  lIntStamp := lIntStamp and $FFFFFFFF;
  ...
end;

The precision of TDateTime (a Double) is about 15 significant digits, but the rounded value in the code uses only 11 digits, so there should be enough information to round correctly.

To give an example: in one specific test run the value of lIntStamp was $74AE699B on a Windows computer and $74AE699A on an iPad (only the last bit differs).

Is the Round function implemented differently on each platform?

PS. Our target platforms are currently Windows, macOS and iOS.

Edit:

I made a small test program based on the comments:

var
  d: Double;
  id: Int64 absolute d;   // overlays d so the exact bit pattern can be set
  lDouble: Double;
begin
  id := $40E4863E234B78FC;   // binary representation of the problematic timestamp
  lDouble := d * 864000;     // store the product in a 64-bit Double first
  Label1.Text := IntToStr(Round(d * 864000)) + ' ' +   // round the raw expression
                 FloatToStr(lDouble) + ' ' +           // the stored Double product
                 IntToStr(Round(lDouble));             // round the stored Double
end;

The output on Windows is:

36317325723 36317325722.5 36317325722

On the iPad the output is:

36317325722 36317325722.5 36317325722

The difference is in the first number, which shows the rounding of the intermediate calculation, so the problem occurs because x86 uses a higher internal precision (80-bit) than ARM (64-bit).

asked May 06 '15 by Hans

2 Answers

Assuming that all the processors are IEEE 754 compliant, and that you are using the same rounding mode on all of them, you will get the same results from all the different processors.

However, there may be compiled code differences, or implementation differences with your code as it stands.

Consider how

fTimeStamp * 24 * 60 * 60 * 10

is evaluated. Some compilers may perform

fTimeStamp * 24

and then store the intermediate result in a FP register. Then multiply that by 60, and store to a FP register. And so on.

Now, under x86 the floating point registers are 80-bit extended precision, and by default those intermediate results are held to 80 bits.

On the other hand, the ARM processors don't have 80-bit registers. The intermediate values are held at 64-bit double precision.

So that's a machine implementation difference that would explain your observed behaviour.

Another possibility is that the ARM compiler spots the constant in the expression and evaluates it at compile time, reducing the above to

fTimeStamp * 864000

I've never seen an x86 or x64 compiler that does that, but perhaps the ARM compiler does. That's a difference in the compiled code. I'm not saying that it happens; I don't know the mobile compilers. But there's no reason why it could not happen.

However, here is your salvation. Re-write your expression as above with that single multiplication. That way you get rid of any scope for intermediate values being stored to different precision. Then, so long as Round means the same thing on all processors, the results will be identical.

Personally I'd avoid questions over rounding mode and instead of Round would use Trunc. I know it has a different meaning, but for your purposes it is an arbitrary choice.

You'd then be left with:

lIntStamp := Trunc(fTimeStamp * 864000); //steps of 1/10th second
lIntStamp := lIntStamp and $FFFFFFFF;

If Round is behaving differently on the different platforms, then you may need to implement it yourself on ARM. On x86 the default rounding mode is banker's rounding (round half to even). That only matters when the value is exactly halfway between two integers, so check whether Frac(...) = 0.5 and round accordingly. That check is safe because 0.5 is exactly representable.
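
To make that concrete, here is a minimal sketch of such a replacement, assuming non-negative inputs (which holds for these timestamps); the name RoundHalfEven is mine, not from the RTL:

// Explicit round-half-to-even for non-negative values, so the result no
// longer depends on the platform's Round implementation or FPU rounding mode.
function RoundHalfEven(const AValue: Double): Int64;
var
  lWhole: Int64;
  lFrac: Double;
begin
  lWhole := Trunc(AValue);
  lFrac := Frac(AValue);        // safe: 0.5 is exactly representable
  if lFrac > 0.5 then
    Result := lWhole + 1
  else if lFrac < 0.5 then
    Result := lWhole
  else if Odd(lWhole) then      // exactly halfway: round to the even neighbour
    Result := lWhole + 1
  else
    Result := lWhole;
end;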

On the other hand you seem to be claiming that

Round(36317325722.5000008) = 36317325722

on ARM. If so, that is a bug, but I cannot believe what you claim. I believe that the value passed to Round is in fact 36317325722.5 on ARM. That's the only thing that makes sense to me; I cannot believe Round is defective.

answered Oct 08 '22 by David Heffernan


Just to be complete, here is what is going on:

On an x86 environment, a call to Round(d*n), where d is a Double and n is a number, will promote the multiplication to an Extended value before calling the Round function. On the x64, OSX, iOS and Android platforms there is no promotion to an 80-bit Extended value.

Analysing the extended values can be tricky, since the RTL has no function that writes the full precision of an extended value. John Herbster wrote such a library: http://cc.embarcadero.com/Item/19421. (Add FormatSettings in two places to make it compile on a modern Delphi version.)

Here is a small test that writes the results for extended and double calculations, stepping the input double value by 1 bit at a time.

program TestRound;

{$APPTYPE CONSOLE}

uses
  System.SysUtils,
  ExactFloatToStr_JH0 in 'ExactFloatToStr_JH0.pas';

var
  // Three consecutive double values (binary representation)
  id1 : Int64 = $40E4863E234B78FB;
  id2 : Int64 = $40E4863E234B78FC; // <-- the fTimeStamp value
  id3 : Int64 = $40E4863E234B78FD;
  // Access the values as double
  d1 : double absolute id1;
  d2 : double absolute id2;
  d3 : double absolute id3;
  e: Extended;
  d: Double;
begin
  WriteLn('Extended precision');
  e := d1*864000;
  WriteLn(e:0:8 , ' ', Round(e), ' ',ExactFloatToStrEx(e,'.',#0));
  e := d2*864000;
  WriteLn(e:0:8 , ' ', Round(e),' ', ExactFloatToStrEx(e,'.',#0));
  e := d3*864000;
  WriteLn(e:0:8 , ' ', Round(e),' ', ExactFloatToStrEx(e,'.',#0));
  WriteLn('Double precision');
  d := d1*864000;
  WriteLn(d:0:8 , ' ', Round(d),' ', ExactFloatToStrEx(d,'.',#0));
  d := d2*864000;
  WriteLn(d:0:8 , ' ', Round(d),' ', ExactFloatToStrEx(d,'.',#0));
  d := d3*864000;
  WriteLn(d:0:8 , ' ', Round(d),' ', ExactFloatToStrEx(d,'.',#0));

  ReadLn;
end.

Extended precision
36317325722.49999480 36317325722 +36317325722.499994792044162750244140625
36317325722.50000110 36317325723 +36317325722.500001080334186553955078125
36317325722.50000740 36317325723 +36317325722.500007368624210357666015625
Double precision
36317325722.49999240 36317325722 +36317325722.49999237060546875
36317325722.50000000 36317325722 +36317325722.5
36317325722.50000760 36317325723 +36317325722.50000762939453125

Note that for the fTimeStamp value in the question, the double precision calculation gives a product with an exact representation ending in .5, while the extended calculation gives a value that is a tiny bit higher. This explains the different rounding results on the two platforms.


As noted in the comments, the solution is to store the result of the calculation in a Double before rounding, as sketched below. That still leaves the backward compatibility problem, which is not easy to solve. Perhaps that is a good opportunity to store the time in another format.
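
A minimal sketch of that fix, reusing the names from the question (the intermediate variable lProduct is an addition of mine):

var
  lProduct: Double;
  lIntStamp: Int64;
begin
  // Store the product in a 64-bit Double first, so every platform rounds
  // exactly the same value and no 80-bit intermediate is involved.
  lProduct := fTimeStamp * 864000; //864000 = 24*60*60*10 -> steps of 1/10th of a second
  lIntStamp := Round(lProduct);
  lIntStamp := lIntStamp and $FFFFFFFF;
  ...
end;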

answered Oct 08 '22 by LU RD