
Double multiplication in C#

I have a problem with a simple multiplication that I cannot understand. I am working with .NET Framework 4 and building for x86. I am executing the following code:

double x = 348333.673899683;
double y = 4521014.98461396;
double aux = x * y;

The expected value for aux is 1574821759346.09949827752137468 (I have computed it with a simple calculator). However, the value I obtain in aux is 1574821822464. Note that this is not a mere precision error; even the integer part has changed.

If I put a breakpoint on the multiplication and hover the mouse over the * operator, I see x * y = 1574821759346.0994, which is OK. If I hover over the aux variable, I see aux = 1574821822464.

To clarify the last paragraph, two screenshots are shown below:

[screenshot: debugger tooltip over x * y showing 1574821759346.0994]

[screenshot: debugger showing aux = 1574821822464]

At first I thought it might be because of the x86 compilation, but after reading the following post I discarded that option:

The Double Byte Size in 32 bit and 64 bit OS

I cannot understand what is happening here. Any help will be appreciated.

---EDIT WITH MORE INFO---

I am using VS2015. I have added three more lines to debug it:

log.Info(x);
log.Info(y);
log.Info(aux);

To show the logs I am using the log4net library. The output is:

23322 [8] INFO Art.Model.Scenarios (null) - 348333,673899683
24745 [8] INFO Art.Model.Scenarios (null) - 4521014,98461396
26274 [8] INFO Art.Model.Scenarios (null) - 1574821822464

So it is not a bug in the debugger. If I create a completely new project and solution it works OK, but I cannot understand why it is not working in this solution.

---SECOND EDIT---

Thanks to the comments, I have tried something new:

double x = 348333.673899683;
double y = 4521014.98461396;
double aux = x * y;

decimal xx = 348333.673899683m;
decimal yy = 4521014.98461396m;
decimal auxx = xx * yy;

log.Info(x);
log.Info(y);
log.Info(aux);

log.Info(xx);
log.Info(yy);
log.Info(auxx);

And the result is:

16129 [8] INFO Art.Model.Scenarios (null) - 348333,673899683
16145 [8] INFO Art.Model.Scenarios (null) - 4521014,98461396
16145 [8] INFO Art.Model.Scenarios (null) - 1574821822464
16145 [8] INFO Art.Model.Scenarios (null) - 348333,673899683
16145 [8] INFO Art.Model.Scenarios (null) - 4521014,98461396
16145 [8] INFO Art.Model.Scenarios (null) - 1574821759346,0994982775213747

So it works with decimal but not with double. Can someone explain this? I cannot understand why this is happening.
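For reference, the same double-vs-decimal comparison can be reproduced outside C# (a quick sketch in Python, whose float is the same IEEE-754 binary64 as C#'s double, and whose decimal module plays the role of C#'s decimal). Notably, both give the correct integer part here, which shows that plain double arithmetic is not the culprit:

```python
from decimal import Decimal

x = 348333.673899683          # IEEE-754 double (binary64), like C# double
y = 4521014.98461396

double_product = x * y
decimal_product = Decimal('348333.673899683') * Decimal('4521014.98461396')

print(double_product)   # ~1574821759346.0994 -- integer part is correct
print(decimal_product)  # 1574821759346.0994982775213…
```

So on a healthy FPU the double result matches what the debugger tooltip shows; the wrong value in aux must come from somewhere else.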

Asked by Ignacio, May 25 '16

1 Answer

Most likely you are using DirectX (the only cause I can find for your problem). Whenever a DirectX Device is created and/or handled, it can force the FPU into single-precision mode, losing accuracy and causing double computations to be truncated. If I enter your data into an IEEE-754 floating-point converter, I get exactly your result: your product was computed as a double-precision number, but at some point it was truncated to a single-precision floating-point number, as you can see:

[screenshot: IEEE-754 floating-point converter result]

This problem can be solved by explicitly creating the Device object with the FpuPreserve flag.

I had this very problem myself. At first I thought it was an incorrect cast or something similar, until after a long trace I found that the values got truncated right after I created a DirectX Device object.
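The single-precision explanation can be checked numerically (a sketch in Python; the struct module is used here to round a 64-bit value to 32-bit precision, mimicking what an FPU forced into single-precision mode effectively does). Rounding the correct double product to single precision yields exactly the wrong value from the question:

```python
import struct

x = 348333.673899683
y = 4521014.98461396

product = x * y  # correct double-precision result, ~1574821759346.0994

# Round the 64-bit result to 32-bit (single) precision by packing it
# as an IEEE-754 binary32 and unpacking it again:
as_single = struct.unpack('<f', struct.pack('<f', product))[0]

print(as_single)  # 1574821822464.0 -- exactly the value logged in the question
```

At this magnitude a single-precision float has a spacing of 2^17 = 131072 between representable values, which is why the integer part itself moves: 1574821822464 is simply the nearest binary32 number to the true product.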

Answered by Enrique Arauz, Oct 06 '22