
vb/c# decimal internal format

What is the internal format of a "decimal" value in VB or C#?

I don't know that this matters to anything I'm doing immediately, but it's one of those things that's good to know. Like, knowing how many bits and how negative numbers are stored can mean that when you see a negative number show up where you expected a positive, you can instantly think, "Ah, there was an overflow" rather than being baffled by deep dark mysteries.

asked Mar 11 '13 by Jay


2 Answers

The answer to your question is provided in full technicolor by the documentation:

The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding.

A decimal number is a floating-point value that consists of a sign, a numeric value where each digit in the value ranges from 0 to 9, and a scaling factor that indicates the position of a floating decimal point that separates the integral and fractional parts of the numeric value.

The binary representation of a Decimal value consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the 96-bit integer and specify what portion of it is a decimal fraction. The scaling factor is implicitly the number 10, raised to an exponent ranging from 0 to 28. Therefore, the binary representation of a Decimal value is of the form ((-2^96 to 2^96) / 10^(0 to 28)), where -(2^96 - 1) is equal to MinValue, and 2^96 - 1 is equal to MaxValue. For more information about the binary representation of Decimal values and an example, see the Decimal(Int32[]) constructor and the GetBits method.

The scaling factor also preserves any trailing zeroes in a Decimal number. Trailing zeroes do not affect the value of a Decimal number in arithmetic or comparison operations. However, trailing zeroes can be revealed by the ToString method if an appropriate format string is applied.
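As a small illustration (my own C# sketch, not taken from the documentation), the preserved scale is why 1.00m and 1m compare equal but print differently:

```csharp
using System;

class TrailingZeroDemo
{
    static void Main()
    {
        decimal a = 1.00m;   // stored with scale 2 (two trailing zeroes preserved)
        decimal b = 1m;      // stored with scale 0

        Console.WriteLine(a == b);        // True  - equal in arithmetic/comparison
        Console.WriteLine(a.ToString());  // "1.00" - trailing zeroes revealed
        Console.WriteLine(b.ToString());  // "1"
    }
}
```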

And the binary representation, as described in the documentation for GetBits:

The binary representation of a Decimal number consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the integer number and specify what portion of it is a decimal fraction. The scaling factor is implicitly the number 10, raised to an exponent ranging from 0 to 28.

The return value is a four-element array of 32-bit signed integers.

The first, second, and third elements of the returned array contain the low, middle, and high 32 bits of the 96-bit integer number.

The fourth element of the returned array contains the scale factor and sign. It consists of the following parts:

Bits 0 to 15, the lower word, are unused and must be zero.

Bits 16 to 23 must contain an exponent between 0 and 28, which indicates the power of 10 to divide the integer number.

Bits 24 to 30 are unused and must be zero.

Bit 31 contains the sign: 0 means positive, and 1 means negative.

Note that the bit representation differentiates between negative and positive zero. These values are treated as being equal in all operations.
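To see those four integers for yourself, here is a minimal C# sketch built on Decimal.GetBits (the variable names are my own):

```csharp
using System;

class GetBitsDemo
{
    static void Main()
    {
        decimal value = -123.45m;
        int[] parts = Decimal.GetBits(value);

        int low   = parts[0];                 // low 32 bits of the 96-bit integer
        int mid   = parts[1];                 // middle 32 bits
        int high  = parts[2];                 // high 32 bits
        int scale = (parts[3] >> 16) & 0xFF;  // bits 16-23: power of 10 to divide by
        bool negative = parts[3] < 0;         // bit 31: sign

        Console.WriteLine($"low={low} mid={mid} high={high}");
        Console.WriteLine($"scale={scale} negative={negative}");
        // -123.45m is stored as the integer 12345 with scale 2 and the sign bit set,
        // so this prints: low=12345 mid=0 high=0 and scale=2 negative=True
    }
}
```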

answered Oct 01 '22 by David Heffernan


Both the C# and VB.NET decimal types refer to System.Decimal, which is well documented: System.Decimal

decimal (C# Reference)

The decimal keyword denotes a 128-bit data type. Compared to floating-point types, the decimal type has a greater precision and a smaller range, which makes it suitable for financial and monetary calculations. The approximate range and precision for the decimal type are shown in the following table.

Range: ±1.0 × 10^−28 to ±7.9 × 10^28
Precision: 28–29 significant digits

Decimal Data Type (Visual Basic)

Holds signed 128-bit (16-byte) values representing 96-bit (12-byte) integer numbers scaled by a variable power of 10. The scaling factor specifies the number of digits to the right of the decimal point; it ranges from 0 through 28. With a scale of 0 (no decimal places), the largest possible value is +/-79,228,162,514,264,337,593,543,950,335 (+/-7.9228162514264337593543950335E+28). With 28 decimal places, the largest value is +/-7.9228162514264337593543950335, and the smallest nonzero value is +/-0.0000000000000000000000000001 (+/-1E-28).
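A quick C# sketch (my own illustration, not part of the quoted documentation) confirming the range and the exact base-10 scaling described above:

```csharp
using System;

class DecimalRangeDemo
{
    static void Main()
    {
        Console.WriteLine(decimal.MaxValue);  // 79228162514264337593543950335
        Console.WriteLine(decimal.MinValue);  // -79228162514264337593543950335

        // Smallest nonzero magnitude: the integer 1 scaled by 10^28
        decimal smallest = 1e-28m;
        Console.WriteLine(smallest);          // 0.0000000000000000000000000001

        // Because the scaling is a power of 10, decimal represents 0.1 exactly,
        // unlike binary floating point
        Console.WriteLine(0.1m + 0.2m == 0.3m);  // True  (decimal)
        Console.WriteLine(0.1  + 0.2  == 0.3);   // False (double)
    }
}
```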

answered Oct 01 '22 by MarcinJuraszek