 

Explicit conversion from Single to Decimal results in different bit representation

If I convert a Single s to a Decimal, I've noticed that its bit representation differs from that of a Decimal d created directly from the same literal.

For example:

Single s = 0.01f;
Decimal d = 0.01m;

int[] bitsSingle = Decimal.GetBits((decimal)s);
int[] bitsDecimal = Decimal.GetBits(d);

Returns (middle elements removed for brevity):

bitsSingle:
[0] = 10
[3] = 196608

bitsDecimal:
[0] = 1
[3] = 131072

Both of these are decimal values, and both appear to accurately represent 0.01.

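For instance, a quick check (a minimal sketch, using the s and d declared above) shows that both compare equal to the literal 0.01m:

Console.WriteLine((decimal)s == 0.01m);  // True
Console.WriteLine(d == 0.01m);           // True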

Looking at the spec sheds no light except perhaps:

§4.1.7 Contrary to the float and double data types, decimal fractional numbers such as 0.1 can be represented exactly in the decimal representation.

This suggests that the result is somehow affected by single not being able to accurately represent 0.01 before the conversion. Therefore:

  • Why is this not accurate by the time the conversion is done?
  • Why do we seem to have two ways to represent 0.01 in the same datatype?
asked Mar 09 '14 by m.edmondson


1 Answer

TL;DR

Both decimals precisely represent 0.01. It's just that the decimal format allows multiple bitwise-different values that represent the exact same number.
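A quick way to see this (a minimal sketch, reusing the values from the question) is that the two decimals compare equal even though GetBits reports different raw integers:

Single s = 0.01f;
Decimal d = 0.01m;

Console.WriteLine((decimal)s == d);                 // True - same numeric value
Console.WriteLine(Decimal.GetBits((decimal)s)[3]);  // 196608 (scale 3)
Console.WriteLine(Decimal.GetBits(d)[3]);           // 131072 (scale 2)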

Explanation

The difference isn't caused by single being unable to represent 0.01 precisely. As per the documentation of GetBits:

The binary representation of a Decimal number consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the integer number and specify what portion of it is a decimal fraction. The scaling factor is implicitly the number 10, raised to an exponent ranging from 0 to 28.

The return value is a four-element array of 32-bit signed integers.

The first, second, and third elements of the returned array contain the low, middle, and high 32 bits of the 96-bit integer number.

The fourth element of the returned array contains the scale factor and sign. It consists of the following parts:

Bits 0 to 15, the lower word, are unused and must be zero.

Bits 16 to 23 must contain an exponent between 0 and 28, which indicates the power of 10 to divide the integer number.

Bits 24 to 30 are unused and must be zero.

Bit 31 contains the sign: 0 means positive, and 1 means negative.

Note that the bit representation differentiates between negative and positive zero. These values are treated as being equal in all operations.

The fourth integer of each decimal in your example is 0x00030000 for bitsSingle and 0x00020000 for bitsDecimal. In binary this maps to:

bitsSingle     00000000 00000011 00000000 00000000
               |\-----/ \------/ \---------------/
               |   |       |             |
        sign <-+ unused exponent       unused
               |   |       |             |
               |/-----\ /------\ /---------------\
bitsDecimal    00000000 00000010 00000000 00000000

NOTE: the exponent represents multiplication by a negative power of 10
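To make the decoding concrete, here is a small sketch (assuming the 0.01 values from the question; the shift and mask follow the bit layout quoted above) that extracts each scale and rebuilds the numeric value from the low 32 bits:

int[] bitsSingle = Decimal.GetBits((decimal)0.01f); // [0] = 10, [3] = 0x00030000
int[] bitsDecimal = Decimal.GetBits(0.01m);         // [0] = 1,  [3] = 0x00020000

int scaleSingle = (bitsSingle[3] >> 16) & 0xFF;     // 3
int scaleDecimal = (bitsDecimal[3] >> 16) & 0xFF;   // 2

// The 96-bit integer fits entirely in the low element here, so the value is lo / 10^scale:
Console.WriteLine(bitsSingle[0] / Math.Pow(10, scaleSingle));   // 10 / 1000 = 0.01
Console.WriteLine(bitsDecimal[0] / Math.Pow(10, scaleDecimal)); // 1 / 100   = 0.01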

Therefore, in the first case the 96-bit integer is divided by an additional factor of 10 compared to the second -- bits 16 to 23 give the value 3 instead of 2. But that is offset by the 96-bit integer itself, which in the first case is also 10 times greater than in the second (obvious from the values of the first elements).

The difference in observed values can therefore be attributed simply to the fact that the conversion from single uses subtly different logic to derive the internal representation compared to the "straight" constructor.
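The same pair of representations can be built by hand with the Decimal(Int32, Int32, Int32, Boolean, Byte) constructor, which takes the 96-bit integer and the scale explicitly (a sketch for illustration only, not a claim about what the runtime conversion does internally):

Decimal likeSingle = new Decimal(10, 0, 0, false, 3); // mantissa 10, scale 3
Decimal likeDirect = new Decimal(1, 0, 0, false, 2);  // mantissa 1,  scale 2

Console.WriteLine(likeSingle);               // 0.010 - the extra scale shows up as a trailing zero
Console.WriteLine(likeDirect);               // 0.01
Console.WriteLine(likeSingle == likeDirect); // True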

answered Nov 14 '22 by Jon