%d stands for decimal and it expects an argument of type int (or some smaller signed integer type that then gets promoted). Floating-point types float and double both get passed the same way (promoted to double) and both of them use %f.
%lf is the correct format specifier for double.
%s refers to a string, %d to an integer, and %c to a character. Therefore, in "%s%d%s%c\n", the first %s prints the string "The first character in string ", %d prints i, the second %s prints " is ", and %c prints str[0].
The problem is that .NET will always round a double to 15 significant decimal digits before applying your formatting, regardless of the precision requested by your format and regardless of the exact decimal value of the binary number.
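For example (assuming the classic .NET Framework formatter this question is about; newer runtimes default to shortest round-trippable output), the rounding shows up like this:
double i = 10 * 0.69;              // stored value is exactly 6.89999999999999946709...
Console.WriteLine(i);              // "6.9"                    - rounded to 15 significant digits
Console.WriteLine("{0:F20}", i);   // "6.90000000000000000000" - still rounded to 15 digits first
Console.WriteLine("{0:G17}", i);   // "6.8999999999999995"     - forces the underlying value through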
I'd guess that the Visual Studio debugger has its own format/display routines that directly access the internal binary number, hence the discrepancies between your C# code, your C code and the debugger.
There's nothing built-in that will allow you to access the exact decimal value of a double, or to enable you to format a double to a specific number of decimal places, but you could do this yourself by picking apart the internal binary number and rebuilding it as a string representation of the decimal value.
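A minimal sketch of that "picking apart", assuming the standard BitConverter API and the usual IEEE-754 layout of a double (1 sign bit, 11 exponent bits, 52 mantissa bits):
double d = 10 * 0.69;
long bits = BitConverter.DoubleToInt64Bits(d);

int  sign     = (int)((bits >> 63) & 1);        // 1 sign bit
int  exponent = (int)((bits >> 52) & 0x7FF);    // 11 exponent bits, biased by 1023
long mantissa = bits & 0xFFFFFFFFFFFFFL;        // 52 mantissa bits (implicit leading 1 for normal values)

Console.WriteLine("sign={0}, exponent={1}, mantissa=0x{2:X13}", sign, exponent - 1023, mantissa);
// For a normal value the exact number is (-1)^sign * (1 + mantissa / 2^52) * 2^(exponent - 1023),
// which you can expand into an exact decimal string with arbitrary-precision arithmetic.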
Alternatively, you could use Jon Skeet's DoubleConverter class (linked to from his "Binary floating point and .NET" article). This has a ToExactString method which returns the exact decimal value of a double. You could easily modify this to enable rounding of the output to a specific precision.
double i = 10 * 0.69;
Console.WriteLine(DoubleConverter.ToExactString(i));
Console.WriteLine(DoubleConverter.ToExactString(6.9 - i));
Console.WriteLine(DoubleConverter.ToExactString(6.9));
// 6.89999999999999946709294817992486059665679931640625
// 0.00000000000000088817841970012523233890533447265625
// 6.9000000000000003552713678800500929355621337890625
Digits after decimal point
// just two decimal places
String.Format("{0:0.00}", 123.4567); // "123.46"
String.Format("{0:0.00}", 123.4); // "123.40"
String.Format("{0:0.00}", 123.0); // "123.00"
// max. two decimal places
String.Format("{0:0.##}", 123.4567); // "123.46"
String.Format("{0:0.##}", 123.4); // "123.4"
String.Format("{0:0.##}", 123.0); // "123"
// at least two digits before decimal point
String.Format("{0:00.0}", 123.4567); // "123.5"
String.Format("{0:00.0}", 23.4567); // "23.5"
String.Format("{0:00.0}", 3.4567); // "03.5"
String.Format("{0:00.0}", -3.4567); // "-03.5"
Thousands separator
String.Format("{0:0,0.0}", 12345.67); // "12,345.7"
String.Format("{0:0,0}", 12345.67); // "12,346"
Zero
The following code shows how a zero (of type double) can be formatted.
String.Format("{0:0.0}", 0.0); // "0.0"
String.Format("{0:0.#}", 0.0); // "0"
String.Format("{0:#.0}", 0.0); // ".0"
String.Format("{0:#.#}", 0.0); // ""
Align numbers with spaces
String.Format("{0,10:0.0}", 123.4567); // " 123.5"
String.Format("{0,-10:0.0}", 123.4567); // "123.5 "
String.Format("{0,10:0.0}", -123.4567); // " -123.5"
String.Format("{0,-10:0.0}", -123.4567); // "-123.5 "
Custom formatting for negative numbers and zero
String.Format("{0:0.00;minus 0.00;zero}", 123.4567); // "123.46"
String.Format("{0:0.00;minus 0.00;zero}", -123.4567); // "minus 123.46"
String.Format("{0:0.00;minus 0.00;zero}", 0.0); // "zero"
Some funny examples
String.Format("{0:my number is 0.0}", 12.3); // "my number is 12.3"
String.Format("{0:0aaa.bbb0}", 12.3);
Take a look at this MSDN reference. In the notes it states that the numbers are rounded to the number of decimal places requested.
If instead you use "{0:R}", it will produce what's referred to as a "round-trip" value; take a look at this MSDN reference for more info. Here's my code and the output:
double d = 10 * 0.69;
Console.WriteLine(" {0:R}", d);
Console.WriteLine("+ {0:F20}", 6.9 - d);
Console.WriteLine("= {0:F20}", 6.9);
output
6.8999999999999995
+ 0.00000000000000088818
= 6.90000000000000000000
Though this question has meanwhile been closed, I believe it is worth mentioning how this atrocity came into existence. In a way, you may blame the C# spec, which states that a double must have a precision of 15 or 16 digits (the result of IEEE-754). A bit further on (section 4.1.6) it's stated that implementations are allowed to use higher precision. Mind you: higher, not lower. They are even allowed to deviate from IEEE-754: expressions of the type x * y / z, where x * y would yield +/-INF but would be in a valid range after dividing, do not have to result in an error. This feature makes it easier for compilers to use higher precision on architectures where that yields better performance.
But I promised a "reason". Here's a quote (you requested a resource in one of your recent comments) from the Shared Source CLI, in clr/src/vm/comnumber.cpp:
"In order to give numbers that are both friendly to display and round-trippable, we parse the number using 15 digits and then determine if it round trips to the same value. If it does, we convert that NUMBER to a string, otherwise we reparse using 17 digits and display that."
In other words: MS's CLI development team decided to be both round-trippable and to show pretty values that aren't such a pain to read. Good or bad? I'd wish for an opt-in or opt-out.
The trick it uses to determine the round-trippability of any given number? It converts the value to a generic NUMBER structure (which has separate fields for the parts of a double) and back, and then compares whether the result is different. If it is different, the exact value is used (as with your middle value, 6.9 - i); if it is the same, the "pretty value" is used.
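In managed code the same check can be sketched roughly like this (only an illustration; the real logic lives in the native comnumber.cpp code quoted above):
static string FriendlyOrExact(double value)
{
    // Try the "friendly" 15-significant-digit form first.
    string friendly = value.ToString("G15");

    // If parsing it back yields the same bits, it round-trips, so use it...
    double reparsed = double.Parse(friendly);
    if (BitConverter.DoubleToInt64Bits(reparsed) == BitConverter.DoubleToInt64Bits(value))
        return friendly;

    // ...otherwise fall back to 17 significant digits, which always round-trip.
    return value.ToString("G17");
}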
As you already remarked in a comment to Andyp, 6.90...00 is bitwise equal to 6.89...9467. And now you know why 0.0...8818 is used: it is bitwise different from 0.0.
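You can check the bitwise (in)equality yourself, for example with BitConverter:
double i = 10 * 0.69;
double diff = 6.9 - i;

// The difference is a genuine non-zero double, bitwise different from 0.0 ...
Console.WriteLine(BitConverter.DoubleToInt64Bits(diff) == BitConverter.DoubleToInt64Bits(0.0)); // False

// ... so the formatter cannot show it as a "pretty" zero and prints its exact value instead.
Console.WriteLine("{0:G17}", diff); // 8.8817841970012523E-16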
This 15-digit barrier is hard-coded and can only be changed by recompiling the CLI, by using Mono, or by calling Microsoft and convincing them to add an option to print the full "precision" (it is not really precision, but for lack of a better word). It's probably easier to just calculate the 52-bit precision yourself or use the library mentioned earlier.
EDIT: if you'd like to experiment yourself with IEEE-754 floating-point numbers, consider this online tool, which shows you all the relevant parts of a floating point.
Use
Console.WriteLine(String.Format(" {0:G17}", i));
That will give you all 17 digits it has. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. {0:R} will not always give you 17 digits; it will give 15 if the number can be represented with that precision.
It returns 15 digits if the number can be represented with that precision, or 17 digits if the number can only be represented with maximum precision. There isn't anything you can do to make the double return more digits; that is the way it's implemented. If you don't like it, make a new double class yourself...
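A quick comparison of the two specifiers (outputs as produced by the classic .NET Framework formatter):
double d = 10 * 0.69;
Console.WriteLine(d.ToString("R"));      // "6.8999999999999995" - 15 digits don't round-trip, so 17 are used
Console.WriteLine(d.ToString("G17"));    // "6.8999999999999995" - always up to 17 significant digits
Console.WriteLine(6.9.ToString("R"));    // "6.9"                - 15 digits already round-trip
Console.WriteLine(6.9.ToString("G17"));  // "6.9000000000000004"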
.NET's double can't store more than 17 digits, so you can't see 6.89999999999999946709 in the debugger; you would see 6.8999999999999995. Please provide an image to prove us wrong.