The decimal type uses 96 bits for the integer part, 1 bit for the sign, and 5 bits for the scaling factor (a power of ten between 0 and 28). That leaves 26 bits unused, and the maximum value is about 7.9e28, i.e. 2^96 − 1 with a scale of 0.
If the other 26 bits were used, the precision could be higher. What is the reason for this implementation choice?
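For reference, decimal.GetBits exposes this layout directly: it returns the four 32-bit words that back a value. A minimal sketch (the value 123.45m is just an arbitrary example):

```csharp
using System;

class DecimalLayoutDemo
{
    static void Main()
    {
        decimal d = 123.45m;
        int[] bits = decimal.GetBits(d);

        // bits[0..2] are the low, mid and high words of the 96-bit integer coefficient.
        Console.WriteLine($"low:  0x{bits[0]:X8}");   // 0x00003039 (12345)
        Console.WriteLine($"mid:  0x{bits[1]:X8}");
        Console.WriteLine($"high: 0x{bits[2]:X8}");

        // bits[3] packs the scale (bits 16-23, valid values 0-28) and the sign (bit 31);
        // the rest of this word is where the unused bits live.
        int flags = bits[3];
        int scale = (flags >> 16) & 0xFF;
        bool isNegative = (flags & int.MinValue) != 0;
        Console.WriteLine($"scale: {scale}, negative: {isNegative}"); // scale: 2, negative: False
    }
}
```

So 123.45m is stored as the coefficient 12345 scaled down by 10^2.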
You might find this article useful:
http://csharpindepth.com/articles/general/decimal.aspx
128 is 4 x 32. Most CPUs have 32-bit (or 64-bit) registers and ALUs, so anything that is a multiple of 32 bits is much easier to manipulate, store, and pass around.
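As a quick sanity check (a small sketch, nothing more), sizeof(decimal) is allowed outside an unsafe context for the built-in types and confirms the 128-bit footprint:

```csharp
using System;

class DecimalSizeDemo
{
    static void Main()
    {
        // 16 bytes = 128 bits, i.e. four 32-bit words,
        // matching the int[4] returned by decimal.GetBits.
        Console.WriteLine(sizeof(decimal));   // 16
        Console.WriteLine(4 * sizeof(int));   // 16
    }
}
```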