
Why does this decimal show 8 decimal places on ToString()?

I have a variable of type decimal whose value is 1.0. I save it into a column in an SQL Server 2012 table, whose type is decimal(10, 8).

After retrieving the value, I see it is 1, but when I call ToString(), the value returned is "1.00000000" (see below).

I realise that the 8 decimal places correspond to the data type in the database. However, there are no attributes or anything in the Entity Framework-generated property that give it this kind of behaviour, so I have no idea how this is happening.

Here are a few tests I've carried out in the Immediate Window:

myDecimal
1
myDecimal.ToString()
"1.00000000"
myDecimal == 1
true
myDecimal == 1.0m
true

As you can see from the last 2 tests, it isn't a case of floating-point error either (not that I was expecting it since decimal is fixed-point, but I had to try it since I ran out of ideas).

Any idea how the decimal's ToString() is resulting in a string with 8 decimal places?

Edit: For comparison, look at the test below.

1m.ToString()
"1"
asked Dec 08 '22 by Gigi


1 Answer

The reason is that the decimal type is not normalized. There are multiple representations for the same number, and those will be represented as different strings.

This is not a special property of your database type; it is simply how decimal normally works. There is no special DataAnnotation or anything else attached to the property.

(1m).ToString() == "1"
(1.00000000m).ToString() == "1.00000000"
((1m)==(1.00000000m)) == true

For a given double, there is only one valid representation, i.e. one combination of mantissa * 2^exponent.

For decimal, there are multiple valid representations of mantissa * 10^exponent. Each of them represents the same number, but the extra information carried by the chosen representation is used to select the default number of trailing digits when the decimal is converted to a string. The exact details are not well documented, and I found no information on what exactly happens to the exponent when decimals are added or multiplied, but the effect on ToString() is easy to verify.
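
For example, a few more Immediate-Window-style checks (observed behaviour on current .NET; as noted above, I am not aware of a documented guarantee for the exact scale rules):

(1.0m + 1.00m).ToString() == "2.10"    // addition appears to keep the larger of the two scales
(1.0m * 1.00m).ToString() == "1.000"   // multiplication appears to add the scales
(1.00m - 0.5m).ToString() == "0.50"    // subtraction behaves like addition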

The drawback is that the Equals() and GetHashCode() operations are more complicated than for a normalized number format, and there have been subtle bugs in the implementation: C# Why can equal decimals produce unequal hash values?
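
For the two values from this question the Equals()/GetHashCode() contract does hold (a quick check, assuming a current .NET runtime; the linked question concerns an edge case in older implementations):

1m.Equals(1.00000000m) == true
(1m.GetHashCode() == 1.00000000m.GetHashCode()) == true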

This article by Jon Skeet goes into a bit more detail:

A decimal is stored in 128 bits, even though only 102 are strictly necessary. It is convenient to consider the decimal as three 32-bit integers representing the mantissa, and then one integer representing the sign and exponent. The top bit of the last integer is the sign bit (in the normal way, with the bit being set (1) for negative numbers) and bits 16-23 (the low bits of the high 16-bit word) contain the exponent. The other bits must all be clear (0). This representation is the one given by decimal.GetBits(decimal) which returns an array of 4 ints. [...]

The decimal type doesn't normalize itself - it remembers how many decimal digits it has (by maintaining the exponent where possible) and on formatting, zero may be counted as a significant decimal digit.

You can verify that the two decimals do not have identical internal representations, even though they compare equal, by looking at the values returned by decimal.GetBits():

decimal.GetBits(1m) == {int[4]}
    [0]: 1
    [1]: 0
    [2]: 0
    [3]: 0

decimal.GetBits(1.00000000m) == {int[4]}
    [0]: 100000000
    [1]: 0
    [2]: 0
    [3]: 524288
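
The fourth element can be decoded further: per the quoted description, the scale sits in bits 16-23 and the sign in bit 31, and 524288 is 8 << 16, i.e. a scale of 8. A quick sketch of that decoding:

((decimal.GetBits(1m)[3] >> 16) & 0xFF) == 0           // scale 0 -> "1"
((decimal.GetBits(1.00000000m)[3] >> 16) & 0xFF) == 8  // scale 8 -> "1.00000000"
(decimal.GetBits(1.00000000m)[3] < 0) == false         // sign bit clear -> positive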

It may be tempting to rely on this behaviour when formatting your decimals, but I would recommend always selecting a precision explicitly when converting to string, to avoid confusion and surprises, for example if the number has been multiplied by some factor beforehand.
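
For example, using standard .NET numeric format strings (myDecimal being the scale-8 value from the question):

myDecimal.ToString("0.########") == "1"      // up to 8 decimal places, trailing zeros dropped
myDecimal.ToString("F2") == "1.00"           // always exactly 2 decimal places
myDecimal.ToString("F2", CultureInfo.InvariantCulture) == "1.00"  // same, pinned to the invariant culture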

answered Dec 10 '22 by HugoRune