I'm currently reading Code Complete by Steve McConnell, specifically page 295 on floating-point numbers.
When I ran the following code:
double nominal = 1.0;
double sum = 0.0;
for (int i = 0; i < 10; i++)
{
    sum += 0.1;
    Console.WriteLine("sum: " + sum.ToString());
}
if (Equals(nominal, sum))
{
    Console.WriteLine("Numbers are the same");
}
else
{
    Console.WriteLine("Numbers are different");
}
I got a printout of 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0, followed by "Numbers are different".
How come I didn't get the output that is supposed to happen, i.e. 0.1 0.2 0.30000000000000004 0.4 0.5 0.6 0.7 0.79999999999999999 0.89999999999999999 0.99999999999999999 Numbers are different?
Is C# rounding the numbers when I do an implicit conversion from double to string? I think so, because when I debug the application and step through the for loop, I can see the non-terminating, repeating decimal values. What do you think? Am I right or wrong?
double.ToString() uses the general ("G") format, which defaults to 15 significant digits when no precision is specified. So it does do a little rounding, which is why you're seeing what you're seeing. For example, 0.89999999999999999, which you quoted in your question, has 17 digits. You can actually see the whole number by calling sum.ToString("G17").
You can find .NET's Standard Numeric Format Strings and their default precisions here: http://msdn.microsoft.com/en-us/library/dwhawy9k(VS.80).aspx
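For example (a small sketch of my own, not from the original answer; the strings in the comments assume the classic .NET Framework ToString behaviour, since .NET Core 3.0 and later default to shortest round-trippable output):

double sum = 0.0;
for (int i = 0; i < 10; i++)
{
    sum += 0.1;
}
// Default "G" formatting rounds to 15 significant digits, hiding the accumulated error.
Console.WriteLine(sum.ToString());      // "1" on .NET Framework
// "G17" asks for enough digits to round-trip the double, exposing the error.
Console.WriteLine(sum.ToString("G17")); // "0.99999999999999989"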
It is in the ToString default behaviour. If you look at sum in the debugger you get this, which shows you the value without routing it through ToString:
0.0
0.1
0.2
0.30000000000000004
0.4
0.5
0.6
0.7
0.79999999999999993
0.89999999999999991
This indicates that the underlying behaviour is exactly what you would expect.
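If you want to confirm that without trusting any string formatting at all, you can look at the raw bit patterns (my own sketch, not part of the original answer; the hex values in the comments are what an IEEE-754 double typically produces for this loop):

double nominal = 1.0;
double sum = 0.0;
for (int i = 0; i < 10; i++)
{
    sum += 0.1;
}
// Reinterpret each double's 64 bits as a long and compare those directly.
long sumBits = BitConverter.DoubleToInt64Bits(sum);
long nominalBits = BitConverter.DoubleToInt64Bits(nominal);
Console.WriteLine(sumBits.ToString("X16"));     // 3FEFFFFFFFFFFFFF
Console.WriteLine(nominalBits.ToString("X16")); // 3FF0000000000000
Console.WriteLine(sumBits == nominalBits);      // False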
hth