 

Binary representation of a .NET Decimal

How does a .NET decimal type get represented in binary in memory?

We all know how floating-point numbers are stored, and thus the reasons for their inaccuracy, but I can't find any information about decimal except the following:

  1. Apparently more accurate than floating-point numbers
  2. Takes 128 bits of memory
  3. Range of a 96-bit magnitude plus a sign (roughly ±2^96)
  4. 28 (sometimes 29?) total significant digits in the number

Is there any way I can figure this out? The computer scientist in me demands the answer and after an hour of attempted research, I cannot find it. It seems like there's either a lot of wasted bits or I'm just picturing this wrong in my head. Can anyone shed some light on this please?

asked Sep 27 '10 06:09 by Squirrelsama

People also ask

What is C# decimal?

In C#, the Decimal struct is used to represent a decimal floating-point number. The range of decimal numbers is -79,228,162,514,264,337,593,543,950,335 to +79,228,162,514,264,337,593,543,950,335.
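Both endpoints correspond to ±(2^96 − 1). A quick sketch to confirm that in code (a standalone console program, using only the standard `decimal` and `BigInteger` APIs):

```csharp
using System;
using System.Numerics;

class DecimalRangeDemo
{
    static void Main()
    {
        // The decimal mantissa is 96 bits, so the largest magnitude is 2^96 - 1.
        BigInteger maxMantissa = BigInteger.Pow(2, 96) - 1;

        Console.WriteLine(decimal.MaxValue);  // 79228162514264337593543950335
        Console.WriteLine(decimal.MinValue);  // -79228162514264337593543950335

        // decimal.MaxValue equals 2^96 - 1 exactly.
        Console.WriteLine(maxMantissa == new BigInteger(decimal.MaxValue));  // True
    }
}
```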

How do you code decimals in C#?

The D/d suffix is used for double values, F/f for float values, and M/m for decimal values. Floating-point literals without a suffix are double:

    float n1 = 1.234f;
    double n2 = 1.234;
    decimal n3 = 1.234m;
    Console.WriteLine(n1);

How is C# decimal stored?

How is a decimal stored? A decimal is stored in 128 bits, even though only 102 are strictly necessary. It is convenient to consider the decimal as three 32-bit integers representing the mantissa, and then one integer representing the sign and exponent.

How does .NET store decimal data?

Variables of the Decimal type are stored internally as integers in 16 bytes and are scaled by a power of 10. The scaling power determines the number of digits to the right of the decimal point, and it's an integer value from 0 to 28.
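That layout can be inspected with Decimal.GetBits: the first three of the four returned integers hold the 96-bit mantissa, while the fourth carries the scale in bits 16-23 and the sign in bit 31. A minimal sketch:

```csharp
using System;

class DecimalScaleDemo
{
    static void Main()
    {
        // 1.2345m is stored as mantissa 12345 with scale 4 (divide by 10^4).
        int[] bits = decimal.GetBits(1.2345m);

        int scale = (bits[3] >> 16) & 0xFF;        // bits 16-23 of the fourth int
        bool negative = (bits[3] & int.MinValue) != 0;  // bit 31 is the sign

        Console.WriteLine(bits[0]);    // 12345 (low 32 bits of the mantissa)
        Console.WriteLine(scale);      // 4
        Console.WriteLine(negative);   // False
    }
}
```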


1 Answer

See Decimal.GetBits for the information you want.

Basically it's a 96 bit integer as the mantissa, plus a sign bit, plus an exponent to say how many decimal places to shift it to the right.

So to represent 3.261 you'd have a mantissa of 3261, a sign bit of 0 (i.e. positive), and an exponent of 3. Note that decimal isn't normalized (deliberately) so you can also represent 3.2610 by using a mantissa of 32610 and an exponent of 4, for example.
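The non-normalized behavior is easy to observe with Decimal.GetBits: the two representations compare equal as values, but carry different mantissas, and ToString even preserves the trailing zero. A small sketch:

```csharp
using System;

class NotNormalizedDemo
{
    static void Main()
    {
        // 3.261m: mantissa 3261, scale 3
        int[] a = decimal.GetBits(3.261m);
        // 3.2610m: mantissa 32610, scale 4 -- equal in value, different bits
        int[] b = decimal.GetBits(3.2610m);

        Console.WriteLine(a[0]);                   // 3261
        Console.WriteLine(b[0]);                   // 32610
        Console.WriteLine(3.261m == 3.2610m);      // True
        Console.WriteLine(3.2610m.ToString());     // 3.2610 (trailing zero kept)
    }
}
```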

I have some more information in my article on decimal floating point.

answered Nov 03 '22 08:11 by Jon Skeet