 

When does Double.ToString() return a value in scientific notation?

Tags:

c#

I assume it has something to do with the number of leading or trailing zeroes, but I can't find anything on MSDN that gives me a concrete answer.

At what point does Double.ToString(CultureInfo.InvariantCulture) start to return values in scientific notation?

geekchic asked Oct 19 '12


2 Answers

From the docs for Double.ToString(IFormatProvider):

This instance is formatted with the general numeric format specifier ("G").

From the docs for the General Numeric Format Specifier:

Fixed-point notation is used if the exponent that would result from expressing the number in scientific notation is greater than -5 and less than the precision specifier; otherwise, scientific notation is used. The result contains a decimal point if required, and trailing zeros after the decimal point are omitted. If the precision specifier is present and the number of significant digits in the result exceeds the specified precision, the excess trailing digits are removed by rounding.

However, if the number is a Decimal and the precision specifier is omitted, fixed-point notation is always used and trailing zeros are preserved.

The default precision specifier for Double is documented to be 15.
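The rule quoted above can be checked outside of .NET as well: Python's `%g`-style formatting happens to use the same thresholds, switching to scientific notation when the decimal exponent is less than or equal to -5 or greater than or equal to the precision. A minimal sketch of the documented behavior, using Python as an analogue for C#'s `"G"` with its default precision of 15:

```python
# Python's "%.15g" mirrors the .NET "G" rule quoted above:
# fixed-point is used when the exponent is > -5 and < 15,
# scientific notation otherwise. Trailing zeros are trimmed.
for value in (0.0001, 0.00001, 1e14, 1e15):
    print(f"{value:.15g}")
# 0.0001 and 1e14 stay fixed-point; 0.00001 (exponent -5) and
# 1e15 (exponent 15) cross a threshold and switch to scientific.
```

Note this is only an analogue: .NET prints an uppercase `E` with a signed, multi-digit exponent (`1E-05`), while Python prints `1e-05`, but the crossover points are the same.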

Although earlier in the table, it's worded slightly differently:

Result: The most compact of either fixed-point or scientific notation.

I haven't worked out whether those two are always equivalent for a Double value...

EDIT: As per Abel's comment:

Also, it is not always the most compact notation. 0.0001 is longer than 1E-04, but the former is output. The MS docs are incomplete here.

That fits in with the more detailed description, of course. (As the exponent required is greater than -5 and less than 15.)
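Abel's counterexample is easy to reproduce. Again using Python's `%.15g` as a stand-in for .NET's `"G"` (both apply the same exponent thresholds):

```python
# "0.0001" (6 characters) is printed even though "1e-04"
# (5 characters) would be more compact, because the exponent
# -4 is still greater than -5, so fixed-point wins by rule,
# not by length.
print(f"{0.0001:.15g}")
```

So "most compact" in the docs is shorthand for the exponent-range rule, not a literal character count.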

Jon Skeet answered Oct 23 '22


From the documentation it follows that the most compact form to represent the number will be chosen.

That is, when you do not specify a format string, the default is the "G" format string. The specification of the G format string says:

Result: The most compact of either fixed-point or scientific notation.

The default number of digits with the "G" specifier is 15. That means a number whose value rounds to a short decimal within 15 significant digits (like 0.1 in harriyott's example) will be displayed in fixed-point notation, unless exponential notation is more compact.

When there are more significant digits, by default all of them are displayed (up to 15), and exponential notation is chosen once it is shorter.

Putting this together:

?(1.0/7.0).ToString()
"0,142857142857143"      // 15 digits
?(10000000000.0/7.0).ToString()
"1428571428,57143"       // 15 significant digits, E-notation not shorter
?(100000000000000000.0/7.0).ToString()
"1,42857142857143E+16"   // 15 sign. digits, above range for non-E-notation (15)
?(0.001/7.0).ToString()
"0,000142857142857143"   // non E-notation is shorter
?(0.0001/7.0).ToString()
"1,42857142857143E-05"   // E-notation shorter

And, of interest:

?(1.0/2.0).ToString()  
"0,5"                    // exact representation
?(1.0/5.0).ToString()
"0,2"                    // rounded, zeroes removed
?(1.0/2.0).ToString("G20")
"0,5"                    // exact representation
?(1.0/5.0).ToString("G20")
"0,20000000000000001"    // unrounded

This is to show what happens behind the scenes and why 0.2 is written as "0,2" and not "0,20000000000000001", which it actually is. By default, 15 significant digits are shown. When there are more digits (and there almost always are, except for certain special numbers), they are rounded the normal way. After rounding, redundant zeroes are removed.

Note that a double has a precision of 15 to 17 significant decimal digits, depending on the number. So by showing 15 digits, what you see is a correctly rounded value; it is usually, but not always, enough to identify the exact double (round-tripping can require up to 17 digits, as the "G20" example above shows).
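The gap between what is stored and what is shown can be made visible with any language that exposes the underlying IEEE 754 double. A sketch in Python (used here as an analogue, since `%.15g` and `%.17g` correspond to the 15- and 17-digit roundings discussed above):

```python
from decimal import Decimal

# The double nearest to 0.2 is not exactly 0.2. Converting it to
# Decimal prints its full binary-exact decimal expansion.
print(Decimal(0.2))     # 0.2000000000000000111022...

# Rounded to 15 significant digits, the noise disappears and the
# trailing zeros are trimmed, leaving the short form.
print(f"{0.2:.15g}")    # 0.2

# At 17 significant digits the stored value shows through, matching
# the "G20"-style output in the answer above.
print(f"{0.2:.17g}")    # 0.20000000000000001
```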

Abel answered Oct 23 '22