
Precision issues with Visual Studio 2010

I have an application written in Microsoft Visual C++ 6.0. I have now rewritten the application in Visual Studio 2010 using C#, but the results do not match because of floating-point precision differences. One such precision issue is the following.

float a = 1.0f;
float b = 3.0f;
float c = a / b;

This C# code, when run in Visual Studio 2010, gives c = 0.333333343.

But the same code (with the f suffix removed from the literals), when run in Visual C++ 6.0, gives c = 0.333333.

Can anybody explain this and tell me how to get the same value for c in Visual Studio 2010 as in Visual C++ 6.0?


Actually the values were taken from the watch window. I have since learned that different versions of Visual Studio may format floating-point values differently, so the watch values may not be comparable. For this reason I printed the values in both versions, with the following results: with Visual Studio 6.0 using Visual C++ it is 0.333333 (six 3's),

but with Visual Studio 2010 using C# it is 0.3333333 (seven 3's).

So can anybody help me make my C# program produce the same result as Visual C++? (That is, how can I make floating-point operations produce the same results in both versions?)

Asked by Mahesh on Nov 28 '22 09:11

2 Answers

Visual C++ 6.0 is simply displaying fewer decimal places: 0.333333343 rounded to six significant figures is 0.333333. The underlying value of c is the same in both environments.

Of course, if you want more precision, you can always use double variables.

Answered by TonyK on Dec 09 '22 13:12


Given that the exact value is 0.3 recurring, neither of them is "correct" - and if you're trying to match exact results of binary floating point calculations, that's generally a bad idea to start with due to the way they work. (See my article on binary floating point in .NET for some more information.)

It's possible that you shouldn't be using binary floating point in the first place (e.g. if your values represent exact, artificial amounts such as money). Alternatively, it's possible that you should only be doing equality comparisons with a particular tolerance.

It's also possible that C# and C++ are producing exactly the same bit pattern - but you're seeing different results because of how those values are being formatted. Again, I wouldn't use the text representation of numbers for comparisons.

Answered by Jon Skeet on Dec 09 '22 15:12