
If two languages follow IEEE 754, will calculations in both languages result in the same answers?

I'm in the process of converting a program from Scilab code to C++. One loop in particular is producing a slightly different result than the original Scilab code (it's a long piece of code so I'm not going to include it in the question but I'll try my best to summarise the issue below).

The problem is, each step of the loop uses calculations from the previous step. Additionally, the difference between calculations only becomes apparent around the 100,000th iteration (out of approximately 300,000).

Note: I'm comparing the output of my C++ program with the output of Scilab 5.5.2 using the "format(25);" command, meaning I'm comparing 25 significant digits. I'd also like to point out that I understand precision cannot be guaranteed after a certain number of bits, but please read the sections below before commenting. So far, all calculations have been identical up to 25 digits between the two languages.

In attempts to get to the bottom of this issue, so far I've tried:

  1. Examining the data type being used:

I've managed to confirm that Scilab is using IEEE 754 doubles (according to the language documentation). Also, according to Wikipedia, C++ isn't required to use IEEE 754 for doubles, but from what I can tell, everywhere I use a double in C++ it has perfectly matched Scilab's results (a small sanity check for this is sketched just after this list).

  2. Examining the use of transcendental functions:

I've also read from What Every Computer Scientist Should Know About Floating-Point Arithmetic that IEEE does not require transcendental functions to be exactly rounded. With that in mind, I've compared the results of these functions (sin(), cos(), exp()) in both languages and again, the results appear to be the same (up to 25 digits).

  3. The use of other functions and predefined values:

I repeated the above steps for sqrt() and pow(), as well as for the value of pi (I'm using M_PI in C++ and %pi in Scilab). Again, the results were the same.

  4. Lastly, I've rewritten the loop (very carefully) in order to ensure that the code is identical between the two languages.
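As a sanity check for points 1 and 2 above, something like the following sketch confirms the double format in C++ and prints results both to 25 significant digits and as exact bit patterns (the sin(1.0) input is just an example, not a value from my actual code):

#include <cmath>
#include <cstdio>
#include <limits>

int main() {
    // Check whether this C++ implementation's double is IEEE 754 (binary64).
    // is_iec559 is the standard way to ask; it is not guaranteed to be true.
    std::printf("double is IEEE 754: %s\n",
                std::numeric_limits<double>::is_iec559 ? "yes" : "no");
    std::printf("mantissa bits: %d\n", std::numeric_limits<double>::digits);

    // 17 significant decimal digits are enough to distinguish any two doubles,
    // so %.25g (matching Scilab's format(25)) cannot hide a difference;
    // %a prints the exact bit pattern directly.
    double x = std::sin(1.0);           // example input only
    std::printf("sin(1.0) = %.25g\n", x);
    std::printf("sin(1.0) = %a\n", x);  // hexadecimal floating point: exact
    return 0;
}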

Note: Interestingly, I noticed that for all the above calculations, the two languages' results match each other to more digits than they match the exact mathematical result. For example:

Value of sin(x) using Wolfram Alpha = 0.123456789.....

Value of sin(x) using Scilab & C++ = 0.12345yyyyy.....

Even once the value computed by Scilab or C++ started to differ from the actual result (from Wolfram), each language's result still matched the other's. This leads me to believe that most of the values are being calculated in the same way by both languages, even though IEEE 754 doesn't require them to be.


My original thinking was that one of the first three points above is implemented differently between the two languages. But from what I can tell, everything seems to produce identical results.

Is it possible that even though all the inputs to these loops are identical, the results can be different? Possibly because a very small error (past what I can see with 25 digits) is occurring that accumulates over time? If so, how can I go about fixing this issue?
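To illustrate what I mean by accumulation, here's a hypothetical recurrence (not my actual loop, purely illustrative) showing the kind of drift I'm asking about: two starting values one ULP apart fed through the same feedback loop:

#include <cmath>
#include <cstdio>

int main() {
    double a = 0.75;
    double b = std::nextafter(a, 1.0);  // differs from a by exactly one ULP

    // Any recurrence that feeds its output back as input will do;
    // this one is just an illustration, not the original code.
    for (int i = 0; i < 300000; ++i) {
        a = std::sin(a) * 3.9;
        b = std::sin(b) * 3.9;
    }
    std::printf("a = %.25g\n", a);
    std::printf("b = %.25g\n", b);
    std::printf("relative difference = %g\n", std::fabs(a - b) / std::fabs(a));
    return 0;
}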

asked Jun 20 '16 by Paul Warnick

People also ask

Why is floating point arithmetic not exact?

Floating-point decimal values generally do not have an exact binary representation. This is a side effect of how the CPU represents floating point data. For this reason, you may experience some loss of precision, and some floating-point operations may produce unexpected results.
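For example, a minimal C++ illustration of this, using the same 25-digit printing as the question:

#include <cstdio>

int main() {
    // 0.1 has no exact binary representation; the stored double is the
    // nearest representable value, slightly larger than 1/10.
    double d = 0.1;
    std::printf("%.25g\n", d);          // 0.1000000000000000055511151...
    std::printf("%.25g\n", 0.1 + 0.2);  // 0.3000000000000000444089210..., not 0.3
    return 0;
}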

What is the difference between single precision and double precision number?

The simplest way to distinguish between single- and double-precision computing is to look at how many bits represent the floating-point number. For single precision, 32 bits are used to represent the floating-point number. For double precision, 64 bits are used to represent the floating-point number.

What is single precision double precision?

Single Precision: Single precision is a format proposed by IEEE for the representation of floating-point numbers. It occupies 32 bits in computer memory. Double Precision: Double precision is also a format given by IEEE for the representation of floating-point numbers. It occupies 64 bits in computer memory.
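Assuming the usual mapping of C++'s float to single precision and double to double precision, the sizes can be inspected directly:

#include <cstdio>
#include <limits>

int main() {
    // float is the usual 32-bit single-precision type, double the 64-bit one.
    std::printf("float:  %d mantissa bits, ~%d decimal digits\n",
                std::numeric_limits<float>::digits,
                std::numeric_limits<float>::digits10);   // typically 24 bits, 6 digits
    std::printf("double: %d mantissa bits, ~%d decimal digits\n",
                std::numeric_limits<double>::digits,
                std::numeric_limits<double>::digits10);  // typically 53 bits, 15 digits
    return 0;
}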


1 Answer

No, using the same number format does not guarantee that functions in different languages produce equivalent answers.

Functions such as sin(x) can be implemented in different ways, even within the same language (let alone across languages). The sin(x) function is an excellent example: many implementations use a look-up table, or a look-up table with interpolation, for speed; some implementations evaluate the function with a Taylor series; others use polynomial approximations.
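As a sketch of that point, here is a deliberately naive Taylor-series sine next to the library's std::sin. The series is purely illustrative (no particular library is claimed to use it), but it shows how two implementations that are both accurate to many digits can still be compared digit by digit and need not be bit-identical:

#include <cmath>
#include <cstdio>

// A naive truncated Taylor series for sin(x): x - x^3/3! + x^5/5! - ...
// Illustrative only; real libraries typically use tables or tuned polynomials.
double taylor_sin(double x) {
    double term = x, sum = x;
    for (int n = 1; n < 20; ++n) {
        term *= -x * x / ((2.0 * n) * (2.0 * n + 1.0));
        sum += term;
    }
    return sum;
}

int main() {
    double x = 0.5;  // example input
    std::printf("std::sin   : %.25g\n", std::sin(x));
    std::printf("taylor_sin : %.25g\n", taylor_sin(x));
    return 0;
}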

Having the same numeric format is one hurdle to solve between languages. Function implementation is another.

Remember, you need to consider the platform as well. A program that uses an 80-bit floating point processor will have different results than a program that uses a 64-bit floating point software implementation.
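A small, platform-dependent illustration of that: on an x86 toolchain where long double maps to the 80-bit x87 format, keeping an intermediate in the wider type can change the final rounded result (on other platforms long double may simply be 64 bits, in which case both lines print the same value):

#include <cstdio>

int main() {
    // With 64-bit doubles, a + b rounds before c is added and the 1.0 is lost;
    // an 80-bit intermediate can hold a + b exactly, so the 1.0 survives.
    double a = 1e16, b = 1.0, c = -1e16;

    double      d64 = (a + b) + c;               // rounds after a + b
    long double d80 = ((long double)a + b) + c;  // wider intermediate

    std::printf("64-bit intermediates: %.17g\n", d64);   // 0 with IEEE doubles
    std::printf("wider intermediates : %.17Lg\n", d80);  // often 1 on x87
    return 0;
}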

answered Sep 22 '22 by Thomas Matthews