Hello, I have this code in C#:
using System.Globalization;
using System.Windows.Forms;
float n = 2.99499989f;
MessageBox.Show("n = " + n.ToString("f2", CultureInfo.InvariantCulture));
And this code in C++:
#include <stdio.h>
float n = 2.99499989f;
printf("n = %.2f", n);
The first one outputs 3.00; the second one outputs 2.99. I have no clue why this is happening.
Update:
I also tried NSLog in Objective-C, and the output is 2.99.
I needed to fix it fast, so I used the following workaround:
float n = 2.99499989f;
float round = (float)Math.Round(n, 2);
MessageBox.Show("round = " + round.ToString(CultureInfo.InvariantCulture));
This shows 2.99, but the rounding happens in double precision, because Math.Round only accepts double (the float argument is implicitly widened). I can't find a Math.RoundF.
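If you can target .NET Core 2.0 or later (or .NET 5+), System.MathF provides single-precision overloads of the Math functions, which avoids the round-trip through double; a minimal sketch, assuming such a runtime is available:
float n = 2.99499989f;
float round = MathF.Round(n, 2);   // rounds entirely in single precision
MessageBox.Show("round = " + round.ToString(CultureInfo.InvariantCulture));
Like Math.Round, MathF.Round defaults to banker's rounding (MidpointRounding.ToEven), but that makes no difference here: the stored value is below the 2.995 midpoint, so it rounds down to 2.99 either way.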
Using BitConverter.GetBytes and printing out the actual bytes produced shows that this is not a compiler difference: in both cases, the actual float value stored is 0x403FAE14, which this handy calculator tells me is the exact value 2.99499988555908203125.
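For reference, here is one way to dump those bytes in C# (a sketch; the Array.Reverse is only needed on little-endian machines such as x86, where BitConverter.GetBytes returns the least significant byte first):
using System;
float n = 2.99499989f;
byte[] bytes = BitConverter.GetBytes(n);          // 4 bytes, least significant first on x86
Array.Reverse(bytes);                             // put the most significant byte first
Console.WriteLine(BitConverter.ToString(bytes));  // prints 40-3F-AE-14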
The difference therefore must lie in differing behaviours of printf and ToString. More than that I cannot immediately say.
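One plausible explanation, hedged, is that on .NET Framework (where the question presumably ran) Single.ToString formats from at most 7 significant digits, so the value is first shortened to 2.995 and then rounded up to 3.00, whereas printf promotes the float to double and formats the exact stored value, 2.99499988..., which rounds down. Widening to double in C# before formatting reproduces the C++ result:
float n = 2.99499989f;
Console.WriteLine(n.ToString("f2", CultureInfo.InvariantCulture));
// .NET Framework: 3.00 (formats from the 7-digit value 2.995)
Console.WriteLine(((double)n).ToString("f2", CultureInfo.InvariantCulture));
// 2.99 (the widened double keeps the exact value, like printf's %f)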