The same code, compiled with VS C++ and with MinGW, gives different results. The result is of type double. For example, VS C++ gives "-6.397745731873350" but MinGW gives "-6.397745731873378". The difference is tiny, but I don't know why there is any difference at all.
I'd hazard a guess that it's one of two possibilities.
Back when Windows NT was new, and they supported porting to other processors (e.g., MIPS and DEC Alpha), MS had a little bit of a problem: the processors all had 64-bit floating point types, but they sometimes generated slightly different results. The DEC Alpha did computation on a 64-bit double as a 64-bit double. The default mode on an x86 was a little different: as you loaded a floating point number, any smaller type was converted to its internal 80-bit extended double format. Then all computation was done in 80-bit precision. Finally, when you stored the value, it was rounded back to 64 bits. This meant two things: first, for single- and double-precision results, the Intel was quite a bit slower. Second, double precision results often differed slightly between the processors.
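If you want to see the effect of those extra bits for yourself, here's a minimal sketch (plain standard C++, nothing in it is specific to either compiler) of a computation whose printed result depends on whether intermediates are kept at 80-bit precision:

    #include <cstdio>

    int main()
    {
        // volatile keeps the compiler from folding this at compile time
        volatile double a = 1.0 / 3.0;
        volatile double b = a * 3.0 - 1.0;

        // With plain 64-bit arithmetic (SSE, or x87 limited to 53-bit
        // precision), a * 3.0 rounds to exactly 1.0 and b prints as 0.
        // With 80-bit x87 intermediates, the extra bits survive until
        // the final store and b comes out around -5.55e-17.
        std::printf("%.17g\n", b);
    }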
To fix those "problems", Microsoft set up their standard library to adjust the floating-point processor to use only 64-bit precision instead of 80-bit. Even though they've long since dropped all support for other processors, they still (at least the last time I looked, and I'd be surprised if it's changed) set the floating-point processor to work only in 64-bit precision. I haven't checked to be sure, but I'd guess that MinGW leaves the floating-point processor at its default 80-bit precision instead.
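You can adjust that setting yourself with _controlfp() from <float.h>, which both Microsoft's CRT and MinGW provide. A minimal sketch (only meaningful for 32-bit builds; the x64 CRT ignores the precision-control mask):

    #include <float.h>   // _controlfp, _PC_53, _MCW_PC (MSVC/MinGW)
    #include <cstdio>

    int main()
    {
        // Limit x87 precision to a 53-bit significand, i.e. plain
        // doubles -- the same setting the Microsoft CRT applies.
        // 32-bit builds only: the x64 CRT rejects the _MCW_PC mask.
        _controlfp(_PC_53, _MCW_PC);

        volatile double a = 1.0 / 3.0;
        volatile double b = a * 3.0 - 1.0;
        std::printf("%.17g\n", b);   // now 0, matching the MSVC default
    }

Alternatively, building with gcc -mfpmath=sse -msse2 makes gcc use SSE instructions for double math, which sidesteps the 80-bit intermediates entirely.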
There's one other possible source of difference: if you were comparing a 32-bit compiler to a 64-bit compiler, you get a different (though still somewhat similar) situation. The 32-bit compilers (both Microsoft's and gcc) use the x87-style floating-point registers and instructions. Microsoft's 64-bit compiler does not use x87-style floating point though (at least by default); instead, it uses SSE instructions. I haven't done a lot of testing with this either, but I wouldn't be surprised at all if (again) there's a slight difference between x87 and SSE when it comes to things like guard bits and rounding. I wouldn't expect big differences, but would consider some slight difference extremely likely (bordering on inevitable).
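If you want to check which of the two your gcc/MinGW build is using, gcc predefines __SSE2_MATH__ when it's generating SSE math. A quick compile-time probe (gcc-specific, so just a sketch):

    #include <cstdio>

    int main()
    {
    #if defined(__SSE2_MATH__)
        std::puts("double math uses SSE2");  // the default for 64-bit gcc
    #else
        std::puts("double math uses x87");   // the default for 32-bit gcc
    #endif
    }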
Most floating-point numbers cannot be represented exactly by computers; what gets stored is an approximation, so a certain amount of rounding error is inherent in the representation and in every operation. Different compilers (and different instruction sets) may round intermediate results differently, which is why you see those small differences.
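For example, the decimal value 0.1 has no exact binary representation; what's stored is just the nearest representable double:

    #include <cstdio>

    int main()
    {
        // Print more digits than a double can actually hold, exposing
        // the stored approximation of 0.1.
        std::printf("%.20f\n", 0.1);   // 0.10000000000000000555...
    }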
Read this excellent article: