I've been testing this code at https://dotnetfiddle.net/:
```csharp
using System;

public class Program
{
    const float scale = 64 * 1024;

    public static void Main()
    {
        Console.WriteLine(unchecked((uint)(ulong)(1.2 * scale * scale + 1.5 * scale)));
        Console.WriteLine(unchecked((uint)(ulong)(scale * scale + 7)));
    }
}
```
If I compile with .NET 4.7.2, I get

```
859091763
7
```

But if I compile with Roslyn or .NET Core, I get

```
859091763
0
```
Why does this happen?
My conclusions below were incorrect; see the update at the end for more details.
Looks like a bug in the first compiler you used. Zero is the correct result in this case. The order of operations dictated by the C# specification is as follows:

1. Multiply `scale` by `scale`, yielding `a`
2. Compute `a + 7`, yielding `b`
3. Convert `b` to `ulong`, yielding `c`
4. Convert `c` to `uint`, yielding `d`
The first two operations leave you with a float value of `b = 4.2949673E+09f`. Under standard floating-point arithmetic, this is `4294967296` (you can check it here). That fits into `ulong` just fine, so `c = 4294967296`, but it's exactly one more than `uint.MaxValue`, so it wraps around to `0`, hence `d = 0`. Now, surprise surprise, since floating-point arithmetic is funky, `4.2949673E+09f` and `4.2949673E+09f + 7` are the exact same number in IEEE 754. So `scale * scale` gives you the same `float` value as `scale * scale + 7`, i.e. `a = b`, and the second operation is basically a no-op.
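To spell the steps out, here is a minimal sketch of the same evaluation performed at runtime; the explicit `(float)` casts force rounding to single precision at each step, and the locals `a`, `b`, `c`, `d` just mirror the list above:

```csharp
using System;

public class Program
{
    public static void Main()
    {
        const float scale = 64 * 1024;        // 65536

        float a = (float)(scale * scale);     // 2^32, exactly representable as a float
        float b = (float)(a + 7);             // 2^32 + 7 rounds back down to 2^32 (float spacing near 2^32 is 512)
        ulong c = (ulong)b;                   // 4294967296
        uint d = unchecked((uint)c);          // one past uint.MaxValue, wraps to 0

        Console.WriteLine(a == b);            // True
        Console.WriteLine(c);                 // 4294967296
        Console.WriteLine(d);                 // 0
    }
}
```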
The Roslyn compiler performs (some) constant operations at compile time, and optimises this entire expression to `0`. Again, that's the correct result, and the compiler is allowed to perform any optimisation that produces exactly the same behaviour as the unoptimised code.
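If you want to convince yourself that the difference comes from compile-time folding, one hypothetical experiment is to compare the constant expression with one the compiler cannot fold. Whether the two lines agree depends on the precision the JIT uses for runtime float arithmetic, which is exactly the subject of the update below:

```csharp
using System;

public class Program
{
    const float scale = 64 * 1024;

    public static void Main()
    {
        // Folded by the compiler: the whole expression is a compile-time constant.
        Console.WriteLine(unchecked((uint)(ulong)(scale * scale + 7)));

        // Routing the value through a non-constant local defeats constant folding,
        // so the arithmetic is emitted as IL and evaluated at runtime by the JIT.
        float s = scale;
        Console.WriteLine(unchecked((uint)(ulong)(s * s + 7)));
    }
}
```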
My guess is that the .NET 4.7.2 compiler you used also tries to optimise this away, but has a bug that causes it to evaluate the cast in the wrong place. Naturally, if you first cast `scale` to a `uint` and then perform the operations, you get `7`, because `scale * scale` wraps around to `0` and then you add `7`. But that is inconsistent with the result you would get by evaluating the expression step by step at runtime. Again, the root cause is just a guess based on the observed behaviour, but given everything stated above, I'm convinced this is a spec violation on the part of the first compiler.
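For illustration only, this is what that hypothesised "cast first, then operate" order would compute; it is a sketch of the guess above, not a claim about what the old compiler actually emits:

```csharp
using System;

public class Program
{
    public static void Main()
    {
        const float scale = 64 * 1024;

        uint s = (uint)scale;                 // 65536
        uint result = unchecked(s * s + 7);   // 65536 * 65536 wraps around to 0 in uint, then + 7
        Console.WriteLine(result);            // 7
    }
}
```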
Update: I have done a goof. There's a bit of the C# specification that I didn't know existed when writing the answer above:
> Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an "extended" or "long double" floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type. Only at excessive cost in performance can such hardware architectures be made to perform floating-point operations with less precision, and rather than require an implementation to forfeit both performance and precision, C# allows a higher precision type to be used for all floating-point operations. Other than delivering more precise results, this rarely has any measurable effects. However, in expressions of the form x * y / z, where the multiplication produces a result that is outside the double range, but the subsequent division brings the temporary result back into the double range, the fact that the expression is evaluated in a higher range format may cause a finite result to be produced instead of an infinity.
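The `x * y / z` scenario the spec describes can be sketched roughly like this; which result you actually see depends on whether your runtime keeps the intermediate product in an extended-precision register:

```csharp
using System;

public class Program
{
    public static void Main()
    {
        double x = 1e300, y = 1e10, z = 1e10;

        // In strict double arithmetic, x * y overflows to +Infinity and
        // Infinity / z stays Infinity. A runtime that keeps the intermediate
        // product in a higher-precision register can instead print 1E+300.
        Console.WriteLine(x * y / z);
    }
}
```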
C# guarantees that operations provide at least the precision of IEEE 754, but not necessarily exactly that precision. It's not a bug, it's a spec feature. The Roslyn compiler is within its rights to evaluate the expression exactly as IEEE 754 specifies, and the other compiler is within its rights to deduce that `2^32 + 7` is `7` when put into a `uint`.
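To see why higher-precision evaluation lands on 7, here is a sketch of the same constant expression folded in double precision, which is presumably close to what the older compiler does internally (that part remains an assumption):

```csharp
using System;

public class Program
{
    public static void Main()
    {
        const double scale = 64 * 1024;

        double b = scale * scale + 7;                 // exactly 4294967303 in double; the + 7 survives
        Console.WriteLine(unchecked((uint)(ulong)b)); // 4294967303 mod 2^32 = 7
    }
}
```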
I'm sorry for my misleading first answer, but at least we've all learned something today.