How is the decimal type implemented?
Update
Thanks! I'm going to stick with using a 64-bit long with my own implied scale.
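For context, here is a minimal sketch of the implied-scale approach mentioned above. All names here are illustrative, not a real API, and it assumes a fixed scale of 4 decimal places:

```csharp
using System;

// Fixed-point arithmetic on a 64-bit long with an implied scale of
// 4 decimal places (hypothetical helper, for illustration only).
public static class FixedPoint4
{
    public const long Scale = 10_000; // 10^4, i.e. 4 implied decimal places

    // Convert a double to the scaled representation, rounding to nearest.
    public static long FromDouble(double value) =>
        (long)Math.Round(value * Scale);

    // Addition and subtraction work directly on the scaled values.
    public static long Add(long a, long b) => a + b;

    // Multiplication doubles the scale, so divide one factor back out.
    // Note the intermediate product can overflow earlier than plain long math.
    public static long Multiply(long a, long b) => a * b / Scale;

    public static double ToDouble(long value) => (double)value / Scale;
}
```

Note that multiplication is the weak point of this scheme: the intermediate product carries both operands' scales, so the usable range shrinks accordingly.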
A decimal integer literal contains any of the digits 0 through 9, and its first digit cannot be 0. Integer literals beginning with the digit 0 are interpreted as octal integer literals rather than as decimal integer literals.
Numbers that contain a decimal point are stored in a data type called REAL. Variables that use the real data type can accept both positive and negative numbers with a decimal place.
Large Integers
Long variables can hold numbers from -9,223,372,036,854,775,808 through 9,223,372,036,854,775,807. Operations with Long are slightly slower than with Integer. If you need even larger values, you can use the Decimal Data Type.
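A quick illustration of those limits in C#; the constants shown are built into the framework:

```csharp
using System;

class Ranges
{
    static void Main()
    {
        // The 64-bit signed integer range quoted above.
        Console.WriteLine(long.MinValue); // -9223372036854775808
        Console.WriteLine(long.MaxValue); //  9223372036854775807

        // decimal trades speed for a 96-bit integer mantissa plus a scale,
        // so its maximum whole value is far larger than long's.
        Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335
    }
}
```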
See the Decimal Floating Point article on Wikipedia, which links specifically to this article about System.Decimal.
A decimal is stored in 128 bits, even though only 102 are strictly necessary. It is convenient to consider the decimal as three 32-bit integers representing the mantissa, followed by one integer representing the sign and exponent. The top bit of the last integer is the sign bit (set to 1 for negative numbers, as usual), and bits 16-23 (the low bits of the high 16-bit word) contain the exponent. The other bits must all be clear (0). This representation is the one given by decimal.GetBits(decimal), which returns an array of 4 ints.
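To see that layout directly, decimal.GetBits exposes the four 32-bit integers. A small demonstration, with the bit positions following the description above:

```csharp
using System;

class DecimalBits
{
    static void Main()
    {
        decimal d = -123.45m;
        int[] bits = decimal.GetBits(d); // low, mid, high mantissa words + flags

        int lo    = bits[0];
        int mid   = bits[1];
        int hi    = bits[2];
        int flags = bits[3];

        bool isNegative = (flags & int.MinValue) != 0; // top bit is the sign
        int exponent    = (flags >> 16) & 0xFF;        // bits 16-23 hold the scale

        Console.WriteLine($"mantissa words: {lo}, {mid}, {hi}");        // 12345, 0, 0
        Console.WriteLine($"negative: {isNegative}, scale: {exponent}"); // True, 2
    }
}
```

Here -123.45 is stored as the integer mantissa 12345 with a scale of 2 (i.e. 12345 × 10⁻²) and the sign bit set.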