Why doesn't the decimal class use the remaining 26 bits?

Tags:

c#

decimal

The decimal type uses 96 bits for the integral part, 1 bit for the sign, and 5 bits for the scaling factor (an exponent from 0 to 28). That leaves 26 of its 128 bits unused, and the maximum value is about 7.9e28 (that is, 2^96 − 1).

If the other 26 bits were used, the precision could be higher. What is the reason for this implementation choice?
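As a sanity check on those numbers (sketched in Python, since it is just arithmetic): a 96-bit integral part tops out at 2^96 − 1 ≈ 7.9e28, and 128 − 96 − 1 − 5 leaves 26 bits unaccounted for.

```python
# A 96-bit unsigned integral part gives the quoted maximum value.
max_integral = 2**96 - 1
print(max_integral)             # 79228162514264337593543950335
print(f"{max_integral:.1e}")    # 7.9e+28

# total bits - integral part - sign bit - scale bits (0..28 fits in 5)
unused = 128 - 96 - 1 - 5
print(unused)                   # 26
```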

asked Jun 21 '14 by Ramy Al Zuhouri

1 Answer

You might find this article useful:

http://csharpindepth.com/articles/general/decimal.aspx

128 is 4 × 32. Most CPUs have 32-bit (or 64-bit) registers and ALUs, so a value whose size is a multiple of 32 bits is much easier to load, store, and manipulate.
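To make that register-friendly packing concrete, here is a small Python emulation of the array that .NET's `decimal.GetBits` returns: three 32-bit words for the 96-bit integral part plus one 32-bit flags word holding the scale (bits 16–23) and the sign (bit 31), i.e. exactly four 32-bit values.

```python
def decimal_bits(integral, scale, negative=False):
    """Emulate .NET decimal.GetBits(): return [lo, mid, hi, flags]."""
    assert 0 <= integral < 2**96 and 0 <= scale <= 28
    lo = integral & 0xFFFFFFFF          # low 32 bits of the integral part
    mid = (integral >> 32) & 0xFFFFFFFF  # middle 32 bits
    hi = (integral >> 64) & 0xFFFFFFFF   # high 32 bits
    # scale lives in bits 16-23 of the flags word, the sign in bit 31
    flags = (scale << 16) | (0x80000000 if negative else 0)
    return [lo, mid, hi, flags]

# 123.456m is stored as integral 123456 with scale 3 (123456 / 10**3)
print(decimal_bits(123456, 3))  # [123456, 0, 0, 196608]
```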

answered Nov 07 '22 by MikeS159