 

Signed zero: Linux vs Windows

I am running a C++ program on Windows and on Linux; the output is meant to be identical. I am trying to make sure that the only differences are real differences, as opposed to differences caused by the working environment. So far I have taken care of all the differences caused by \r\n line endings, but there is one thing I can't seem to figure out.

In the Windows output there is a 0.000, and in the Linux output it is -0.000.

Does anyone know what could be causing this difference?

Thanks.

asked Dec 04 '11 by user690936

2 Answers

Most probably it comes from differences in how the optimizer handles some floating-point calculations (this can be configurable; see e.g. here): in one case you get a value slightly less than 0, in the other a value slightly greater. Both are rounded to 0.000 in the output, but they keep their "real" sign.
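For illustration only (this is not the asker's code), a minimal C++ sketch showing how a result that is almost, but not exactly, zero prints as 0.000 or -0.000 depending on which side of zero it lands on:

```cpp
#include <cstdio>

int main() {
    // Two results that are mathematically zero but sit on opposite
    // sides of it, e.g. because operations were reordered differently.
    double a =  (0.1 + 0.2 - 0.3);  // tiny positive residue (~5.6e-17)
    double b = -(0.1 + 0.2 - 0.3);  // the same residue with the opposite sign

    // Rounded to three decimals both look like zero, but printf
    // keeps the sign of the underlying value.
    std::printf("%.3f\n", a);  // prints  0.000
    std::printf("%.3f\n", b);  // prints -0.000
    return 0;
}
```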

answered by Matteo Italia

Since in the IEEE floating-point format the sign bit is separate from the value, you have two different values of 0, a positive and a negative one. In most cases it doesn't make a difference; both zeros compare equal, and they describe the same mathematical value (mathematically, 0 and -0 are the same). Where the difference can be significant is when you have underflow and need to know whether the underflow occurred from a positive or from a negative value. Also, if you divide by 0, the sign of the infinity you get depends on the sign of the 0 (i.e. 1/+0.0 gives +Inf, but 1/-0.0 gives -Inf). In other words, most probably it won't make a difference for you.
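A small self-contained example (assuming IEEE 754 doubles, as on both platforms in question) demonstrating these properties:

```cpp
#include <cmath>
#include <iostream>

int main() {
    double pz =  0.0;
    double nz = -0.0;

    std::cout << std::boolalpha;
    std::cout << (pz == nz)       << '\n';  // true: +0.0 and -0.0 compare equal
    std::cout << (1.0 / pz)       << '\n';  // inf
    std::cout << (1.0 / nz)       << '\n';  // -inf
    std::cout << std::signbit(nz) << '\n';  // true: the sign bit is set
    return 0;
}
```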

Note however that the different output does not necessarily mean that the number itself is different. It could well be that the value in Windows is also -0.0, but the output routine on Windows doesn't distinguish between +0.0 and -0.0 (they compare equal, after all).
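If you want to check what value the program actually holds rather than rely on formatted output, std::signbit (from <cmath>, C++11) reports the sign bit explicitly; a small sketch:

```cpp
#include <cmath>
#include <cstdio>

// std::signbit inspects the sign bit directly, so it distinguishes
// -0.0 from +0.0 even though the two compare equal and a given
// output routine may print both the same way.
void report(double x) {
    std::printf("formatted: %.3f   sign bit set: %s\n",
                x, std::signbit(x) ? "yes" : "no");
}

int main() {
    report(0.0);
    report(-0.0);
    return 0;
}
```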

answered by celtschk