Is there any practical difference between the .net decimal values 1m and 1.0000m?

Tags: c#, .net, decimal


The internal storage is different:

1m      : 0x00000001 0x00000000 0x00000000 0x00000000
1.0000m : 0x00002710 0x00000000 0x00000000 0x00040000

But, is there a situation where the knowledge of "significant digits" would be used by a method in the BCL?

I ask because I'm working on a way to compress decimal values for disk storage or network transport, and I'm toying with the idea of "normalizing" each value before storing it to improve its compressibility. But I'd like to know whether that is likely to cause issues down the line. I'm guessing it should be fine, but only because I don't see any methods or properties that expose the precision of the value. Does anyone know otherwise?
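
For concreteness, here is a minimal sketch of the kind of "normalizing" described above. The Normalize helper is hypothetical (it is not part of the BCL); it strips trailing zeros from the coefficient using decimal.GetBits and the (lo, mid, hi, isNegative, scale) constructor:

using System;
using System.Numerics;

static class DecimalCompression
{
    // Hypothetical helper, not part of the BCL: rescales a decimal so that
    // trailing zeros are removed from the coefficient, e.g. 1.0000m -> 1m.
    public static decimal Normalize(decimal value)
    {
        int[] bits = decimal.GetBits(value);
        byte scale = (byte)((bits[3] >> 16) & 0xFF);
        bool negative = (bits[3] & int.MinValue) != 0;

        // Reassemble the 96-bit coefficient from the low/mid/high words.
        BigInteger coefficient =
            ((BigInteger)(uint)bits[2] << 64) |
            ((BigInteger)(uint)bits[1] << 32) |
            (uint)bits[0];

        // Strip factors of 10 while the numeric value stays exact.
        while (scale > 0 && coefficient % 10 == 0)
        {
            coefficient /= 10;
            scale--;
        }

        int lo  = unchecked((int)(uint)(coefficient & 0xFFFFFFFF));
        int mid = unchecked((int)(uint)((coefficient >> 32) & 0xFFFFFFFF));
        int hi  = unchecked((int)(uint)((coefficient >> 64) & 0xFFFFFFFF));

        return new decimal(lo, mid, hi, negative, scale);
    }
}

Because decimal comparisons ignore scale, Normalize(value) == value always holds, so the normalized form round-trips to an equal value.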

asked Apr 22 '11 by MarkPflug


2 Answers

The encodings differ because the Decimal data type stores the number as a whole number (a 96-bit integer) together with a scale, which is used to form the divisor that produces the fractional value. The value is essentially

integer / 10^scale

Internally the Decimal type is represented as four Int32 values; see the documentation of Decimal.GetBits for more detail. In summary, GetBits returns an array of four Int32s, where each element represents the following portion of the Decimal encoding:

Element 0,1,2 - The low, middle and high 32 bits of the 96-bit integer
Element 3     - Bits 0-15  Unused
                Bits 16-23 The exponent, i.e. the power of 10 to divide the integer by
                Bits 24-30 Unused
                Bit  31    The sign, where 0 is positive and 1 is negative

So in your example, put very simply, when 1.0000m is encoded as a decimal the actual representation is 10000 / 10^4, while 1m is represented as 1 / 10^0: mathematically the same value, just encoded differently.
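
A quick check along these lines, printing what GetBits returns for the two literals (values shown in the comments):

int[] one       = decimal.GetBits(1m);      // [1,     0, 0, 0x00000000] -> 1 / 10^0
int[] oneScaled = decimal.GetBits(1.0000m); // [10000, 0, 0, 0x00040000] -> 10000 / 10^4

Console.WriteLine((one[3] >> 16) & 0xFF);       // 0  (scale of 1m)
Console.WriteLine((oneScaled[3] >> 16) & 0xFF); // 4  (scale of 1.0000m)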

If you use the native .NET operators for the decimal type and do not manipulate or compare the bits/bytes yourself, you should be safe.

You will also notice that string conversions take this binary representation into consideration and produce different strings, so be careful if you ever rely on the string representation.
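
A short illustration of both points, with comparisons treating the two encodings as equal while ToString preserves the scale:

decimal a = 1m;
decimal b = 1.0000m;

Console.WriteLine(a == b);       // True  - the operators compare the numeric value
Console.WriteLine(a.Equals(b));  // True
Console.WriteLine(a.ToString()); // "1"
Console.WriteLine(b.ToString()); // "1.0000" - the trailing zeros survive the conversion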

answered Sep 20 '22 by Chris Taylor


The decimal type tracks scale because it's important in arithmetic. If you do long multiplication, by hand, of two numbers — for instance, 3.14 * 5.00 — the result has 6 digits of precision and a scale of 4.

To do the multiplication, ignore the decimal points (for now) and treat the two numbers as integers.

  3.14
* 5.00
------
  0000 -- 0 * 314 (0 in the 1's place)
 00000 -- 0 * 314 (0 in the 10's place)
157000 -- 5 * 314 (5 in the 100's place)
------
157000

That gives you the unscaled result. Now, count the total number of digits to the right of the decimal point in the original expression (that would be 4) and place the decimal point 4 digits from the right:

15.7000

That result, while equivalent in value to 15.7, is more precise than the value 15.7. The value 15.7000 has 6 digits of precision and a scale of 4; 15.7 has 3 digits of precision and a scale of 1.
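
In C# this scale tracking falls out of the decimal type directly; a small check of the multiplication above:

decimal result = 3.14m * 5.00m;

Console.WriteLine(result);                                    // 15.7000 - the scales add
Console.WriteLine((decimal.GetBits(result)[3] >> 16) & 0xFF); // 4       - scale of the product
Console.WriteLine(result == 15.7m);                           // True    - same value, different scale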

If you are trying to do precision arithmetic, it is important to track the precision and scale of your values and results, as they tell you something about the precision of your results. (Note that precision isn't the same as accuracy: measure something with a ruler graduated in 1/10ths of an inch, and the best you can say about the resulting measurement, no matter how many trailing zeros you put to the right of the decimal point, is that it is accurate to, at best, a 1/10th of an inch. Another way of putting it is that your measurement is accurate, at best, to within +/- 5/100ths of an inch of the stated value.)

answered Sep 18 '22 by Nicholas Carey