 

C# - Inconsistent math operation result on 32-bit and 64-bit

Tags:

c#

Consider the following code:

double v1 = double.MaxValue;
double r = Math.Sqrt(v1 * v1);

On a 32-bit machine, r = double.MaxValue; on a 64-bit machine, r = Infinity.

We develop on 32-bit machines and were therefore unaware of the problem until a customer notified us. Why does this inconsistency happen? How can we prevent it?

david.healed asked Mar 17 '10 10:03



2 Answers

The x86 instruction set has tricky floating point consistency issues due to the way the FPU works. Internal calculations are performed with more significant bits than a double can store, causing truncation when the value is flushed from the FPU register stack to memory.

That got fixed in the x64 JIT compiler: it uses SSE instructions, and the SSE registers are the same size as a double.
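A small sketch of that difference from the C# side. Per the C# spec, an explicit cast to double asks the runtime to round away any excess x87 precision, so the overflow to Infinity happens before the square root. Whether the x86 JIT honors the cast in every case is not guaranteed, so treat this as an illustration of the mechanism, not a hard fix:

```csharp
using System;

class TruncationDemo
{
    static void Main()
    {
        double v1 = double.MaxValue;

        // The explicit (double) cast requests rounding of the product to
        // true 64-bit precision. In strict IEEE 754 double arithmetic,
        // double.MaxValue squared overflows to +Infinity.
        double product = (double)(v1 * v1);
        double r = Math.Sqrt(product);

        Console.WriteLine(r); // Infinity once the product is rounded to 64 bits
    }
}
```

On the x64 JIT this prints Infinity regardless of the cast, since SSE registers hold exactly 64 bits.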

This is going to bite you when your calculations test the boundaries of floating point accuracy and range. You never want to get close to needing more than 15 significant digits, and you never want to get close to 1E308 or 1E-308. You certainly never want to square the largest representable value. This is never a real problem; numbers that represent physical quantities don't get close.

Use this opportunity to find out what is wrong with your calculations. It is very important that you run the same operating system and hardware that your customer is using; it is high time you got the machines needed to do so. Code that is only tested on x86 machines is not tested.

The quick-and-dirty fix is Project + Properties, Compile tab, Platform Target = x86.


Fwiw, the bad result on x86 is caused by a bug in the JIT compiler. It generates this code:

      double r = Math.Sqrt(v1 * v1);
00000006  fld         dword ptr ds:[009D1578h] 
0000000c  fsqrt            
0000000e  fstp        qword ptr [ebp-8] 

The fmul instruction is missing; the code optimizer removed it in Release mode, no doubt triggered by seeing the double.MaxValue operands. That's a bug, and you can report it at connect.microsoft.com. Pretty sure they're not going to fix it though.

Hans Passant answered Oct 19 '22 08:10


This is a near-duplicate of

Why does this floating-point calculation give different results on different machines?

My answer to that question also answers this one. In short: different hardware is allowed to give more or less accurate results depending on the details of the hardware.

How to prevent it from happening? Since the problem is in the chip, you have two choices. (1) Don't do any math in floating point. Do all your math in integers; integer math is 100% consistent from chip to chip. Or (2) require all your customers to use the same hardware you develop on.
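A minimal fixed-point sketch of option (1); the scale factor and the Mul helper here are illustrative choices, not part of the answer. Integer overflow still needs care (a checked context or range limits), but the results are bit-identical on every chip:

```csharp
using System;

class FixedPoint
{
    const long Scale = 10_000; // 4 implied decimal places (an arbitrary choice)

    // Fixed-point multiply: both operands carry the scale, so divide once.
    // Pure integer arithmetic, hence identical results on x86, x64, ARM, ...
    public static long Mul(long a, long b) => a * b / Scale;

    static void Main()
    {
        long price = 19_990;          // represents 1.9990
        long qty   = 30_000;          // represents 3.0000
        long total = Mul(price, qty); // 59_970, i.e. 5.9970 on any platform
        Console.WriteLine(total / (double)Scale);
    }
}
```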

Note that if you choose (2) then you might still have problems; small details like whether a program was compiled debug or retail can change whether floating point calculations are done in extra precision or not. This can cause inconsistent results between debug and retail builds, which is also unexpected and confusing. If your requirement of consistency is more important than your requirement of speed then you'll have to implement your own floating point library that does all its calculations in integers.

Eric Lippert answered Oct 19 '22 10:10