Can anyone explain why this happens?
static void Main()
{
    const float xScaleStart = 0.5f;
    const float xScaleStop = 4.0f;
    const float xScaleInterval = 0.1f;
    const float xScaleAmplitude = xScaleStop - xScaleStart;

    const float xScaleSizeC = xScaleAmplitude / xScaleInterval;
    float xScaleSize = xScaleAmplitude / xScaleInterval;

    Console.WriteLine(">const float {0}, (int){1}", xScaleSizeC, (int)xScaleSizeC);
    Console.WriteLine(">      float {0}, (int){1}", xScaleSize, (int)xScaleSize);

    Console.ReadLine();
}
Output:
>const float 35, (int)34
>      float 35, (int)35
I know that 0.1 can't be represented exactly in binary (the nearest float is actually 0.100000001490116119384765625), but why does this happen with 'const float' and not with 'float'? Is this considered a compiler bug?
For the record, the code compiles into:
private static void Main(string[] args)
{
    float xScaleSize = 35f;
    Console.WriteLine(">const float {0}, (int){1}", 35f, 34);
    Console.WriteLine(">      float {0}, (int){1}", xScaleSize, (int)xScaleSize);
    Console.ReadLine();
}
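For reference, the exact value the literal 0.1f actually stores can be checked by widening it to double (a lossless conversion) and printing with the round-trip format. A small sketch, separate from the program above:

using System;

static class ShowStoredValue
{
    static void Main()
    {
        float interval = 0.1f;

        // float -> double is exact, and "R" prints enough digits to round-trip,
        // so this shows the value 0.1f really holds: roughly 0.10000000149011612.
        Console.WriteLine(((double)interval).ToString("R"));
    }
}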
The "Why" of this will basically boil down to the fact that frequently, when working with float
data, an internal representation may be used that has more precision than is specified for float
or double
. This is explicitly catered for in the Virtual Execution System (VES) Spec (section 12 of Partition I):
floating-point numbers are represented using an internal floating-point type. In each such instance, the nominal type of the variable or expression is either float32 or float64, but its value can be represented internally with additional range and/or precision
And then later we have:
The use of an internal representation that is wider than float32 or float64 can cause differences in computational results when a developer makes seemingly unrelated modifications to their code, the result of which can be that a value is spilled from the internal representation (e.g., in a register) to a location on the stack.
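For illustration, the effect of the width of that internal representation can be reproduced directly by carrying the quotient 3.5f / 0.1f once in double and once rounded back to float32. This is a minimal sketch (the variable names are mine); on a typical x64 runtime it reproduces the 34/35 split seen above:

using System;

static class PrecisionDemo
{
    static void Main()
    {
        // Non-const locals, so nothing is folded by the compiler.
        float amplitude = 4.0f - 0.5f;   // exactly 3.5f
        float interval = 0.1f;           // actually slightly above 0.1

        // Quotient kept in the wider double type: it is a hair below 35,
        // so truncating to int gives 34, matching the compile-time result above.
        double wide = (double)amplitude / interval;
        Console.WriteLine("{0:R} -> {1}", wide, (int)wide);

        // Quotient rounded back to float32: the nearest float to that value
        // is exactly 35, so truncating gives 35, matching the run-time result above.
        float narrow = amplitude / interval;
        Console.WriteLine("{0:R} -> {1}", narrow, (int)narrow);
    }
}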
Now, according to the C# language specification:
The compile-time evaluation of constant expressions uses the same rules as run-time evaluation of non-constant expressions, except that where run-time evaluation would have thrown an exception, compile-time evaluation causes a compile-time error to occur.
But as we observed above, the rules actually allow more precision to be used at times, and exactly when that extra precision is used isn't under our direct control.
And obviously, in different circumstances, the results could have been precisely the opposite of what you observed - the compiler may have dropped to lower precision and the runtime could have maintained higher precision instead.
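As an aside, if the goal is a step count that doesn't depend on which precision happens to be used, rounding to the nearest integer before truncating side-steps the issue. A minimal sketch, assuming 35 is the intended count:

using System;

static class RobustStepCount
{
    static void Main()
    {
        const float xScaleStart = 0.5f;
        const float xScaleStop = 4.0f;
        const float xScaleInterval = 0.1f;

        // Truncation turns a sub-microscopic error into an off-by-one;
        // Math.Round gives 35 whether the quotient was computed as
        // 34.9999994... (wider precision) or as exactly 35 (float32).
        int steps = (int)Math.Round((xScaleStop - xScaleStart) / xScaleInterval);
        Console.WriteLine(steps);   // 35
    }
}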