Without getting into unnecessary details: is it possible for operations on floating-point numbers (x86_64) to return variations in their results, however small, given identical inputs? Even a single bit's difference?
I am simulating a basically chaotic system, and I expect small variations in the data to have visible effects. However, I expected that, given the same data, the behavior of the program would be fixed. This is not the case: I get visible, but acceptable, differences with each run of the program.
I am thinking I have left some variable uninitialized somewhere...
The languages I am using are C++ and Python.
Russell's answer is correct. Floating-point operations are deterministic; the non-determinism was caused by a dangling pointer.
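For anyone hitting the same thing, here is a minimal sketch (the names `state` and `p` are made up, not from my actual code) of how a dangling pointer can make an otherwise deterministic floating-point computation vary from run to run:

#include <cstdio>
#include <vector>

int main() {
    const double* p = nullptr;
    {
        std::vector<double> state{0.1, 0.2, 0.3};
        p = state.data();              // p points into the vector's storage
    }                                  // state is destroyed here; p now dangles

    double sum = 0.0;
    for (int i = 0; i < 3; ++i) {
        sum += p[i];                   // undefined behavior: reads freed memory
    }
    std::printf("sum = %.17g\n", sum); // may differ between runs, or crash
    return 0;
}

Because the read is undefined behavior, the values picked up depend on whatever happens to occupy that memory, which can change with each run.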
Yes, this is possible. Quoting from the C++ FAQ:
Turns out that on some installations, cos(x) != cos(y) even though x == y. That's not a typo; read it again if you're not shocked: the cosine of something can be unequal to the cosine of the same thing. (Or the sine, or the tangent, or the log, or just about any other floating point computation.)
Why?
[F]loating point calculations and comparisons are often performed by special hardware that often contain special registers, and those registers often have more bits than a double. That means that intermediate floating point computations often have more bits than sizeof(double), and when a floating point value is written to RAM, it often gets truncated, often losing some bits of precision.
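A minimal sketch of the effect the FAQ describes, assuming the compiler generates x87 code with 80-bit intermediate results (e.g. a 32-bit build, or GCC with -mfpmath=387); on a typical x86_64/SSE2 build the two results will usually compare equal:

#include <cmath>
#include <cstdio>

int main() {
    volatile double x = 1.0 / 3.0;      // volatile discourages constant folding
    double stored = std::cos(x);        // result may be spilled to a 64-bit double in memory
    if (stored != std::cos(x)) {        // the second result may still sit in an 80-bit register
        std::puts("cos(x) != cos(x) on this build");
    } else {
        std::puts("results compare equal on this build");
    }
    return 0;
}

Whether the comparison ever fails depends on the compiler, optimization level, and floating-point code generation, which is exactly why this kind of discrepancy shows up only "on some installations".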