 

Why does the IEEE 754 standard use a 127 bias?

When working with the excess representation of integers, I use a bias of 2^(n-1). However, the IEEE 754 standard instead uses 2^(n-1) - 1.

The only benefit that I can think of is a bigger positive range. Are there any other reasons as to why that decision was taken?

asked Jan 18 '12 by james_dean


People also ask

Why there is a bias 127 in exponent in IEEE 754 standard single precision?

The exponent field needs to represent both positive and negative exponents. A bias is added to the actual exponent in order to get the stored exponent. For IEEE single-precision floats, this value is 127. Thus, an exponent of zero means that 127 is stored in the exponent field.

Why is the exponent bias 127?

The bias in a floating-point number has to do with whether the exponent part is negative or positive. For single-precision floats the bias value is 127, which means that 127 is always added to the actual exponent before it is stored in the exponent field.

Why do we use bias in IEEE 754?

In IEEE 754 floating-point numbers, the exponent is biased in the engineering sense of the word – the value stored is offset from the actual value by the exponent bias, also called a biased exponent.

When encoded in excess 127 format, what value does the exponent have?

Note that, since 8-bit binary numbers can range from 0 to 255, exponents in single precision format can range from -126 to +127, that is from 2^-126 to 2^127 or, approximately, 10^-38 to 10^38 in size. In "excess 127 form" negative exponents range from 0 to 126, and positive exponents range from 128 to 255.
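To make the 127 bias concrete, here is a minimal C sketch (not part of the original question; it assumes float is a 32-bit IEEE 754 type on your platform) that pulls the stored exponent field out of 1.0f:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        float f = 1.0f;                  /* true exponent 0 */
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);  /* reinterpret the float's bits */

        uint32_t stored_exp = (bits >> 23) & 0xFF;  /* biased exponent field */
        printf("stored exponent: %u, true exponent: %d\n",
               stored_exp, (int)stored_exp - 127);
        /* prints: stored exponent: 127, true exponent: 0 */
        return 0;
    }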


1 Answer

There are two reasons: Infinities/NaNs and gradual underflow.

If you use exponents to represent both integer (n >= 0) and fractional (n < 0) values, you face the problem that one exponent is needed for 2^0 = 1. The remaining range is therefore odd, leaving you the choice of giving the bigger half either to fractions or to integers. For single precision we have 256 exponent values, 255 once the exponent for 2^0 is taken. IEEE 754 then reserved the highest exponent (255) for special values: +/- Infinity and NaNs (Not a Number) to indicate failure. So we are back to an even count (254, split between the integer and fractional sides) but with a lower bias.
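A small illustration of that allocation (a sketch of mine, assuming a 32-bit IEEE 754 float): the exponent field is 1 for FLT_MIN, 254 for FLT_MAX, and 255 for both infinities and NaNs, leaving field value 0 for zero and the subnormals described below.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <float.h>
    #include <math.h>

    /* Extract the 8-bit biased exponent field of a float. */
    static uint32_t biased_exp(float f) {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);
        return (bits >> 23) & 0xFF;
    }

    int main(void) {
        printf("FLT_MIN : %u\n", biased_exp(FLT_MIN));   /* 1   (smallest normal) */
        printf("FLT_MAX : %u\n", biased_exp(FLT_MAX));   /* 254 (largest normal)  */
        printf("INFINITY: %u\n", biased_exp(INFINITY));  /* 255 (reserved)        */
        printf("NAN     : %u\n", biased_exp(NAN));       /* 255 (reserved)        */
        return 0;
    }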

The second reason is gradual underflow. The standard declares that normally all numbers are normalized, meaning that the exponent indicates the position of the first bit. To gain an extra bit of precision, that first bit is normally not stored but assumed (the hidden bit): the first bit after the exponent field is the second bit of the number; the first is always a binary 1. If you enforce normalization you run into the problem that you cannot encode zero, and even if you encode zero as a special value, numerical accuracy is hampered. +/- Infinity (the highest exponent) makes it clear that something is wrong, but underflow to zero for numbers that are too small is perfectly normal and therefore easy to overlook as a possible problem. So Kahan, the designer of the standard, decided that denormalized numbers, or subnormals, should be introduced, and that they should include 1/MAX_FLOAT.
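To see the 1/MAX_FLOAT requirement in action, here is a quick check (a sketch, not from the original answer; it assumes a 32-bit IEEE float and the fpclassify macro from <math.h>): 1.0f/FLT_MAX is smaller than FLT_MIN, so it only survives as a subnormal instead of flushing to zero.

    #include <stdio.h>
    #include <float.h>
    #include <math.h>

    int main(void) {
        float tiny = 1.0f / FLT_MAX;   /* about 2.9e-39, below FLT_MIN (1.18e-38) */
        printf("1/FLT_MAX    = %g\n", tiny);
        printf("is subnormal : %s\n", fpclassify(tiny) == FP_SUBNORMAL ? "yes" : "no");
        printf("is nonzero   : %s\n", tiny != 0.0f ? "yes" : "no");
        return 0;
    }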

EDIT: Allan asked why the "numerical accuracy is hampered" if you encode zero as a special value. I should have phrased it as "numerical accuracy is still hampered". In fact, that is how the historical DEC VAX floating-point format was implemented: if the exponent field in the raw bit encoding was 0, the value was considered zero. As an example, take the 32-bit format still prevalent on GPUs.

X 00000000 XXXXXXXXXXXXXXXXXXXXXXX

In this case, the content of the mantissa field on the right could be ignored entirely and was normally filled with zeroes. The sign field on the left was still valid, distinguishing a normal zero from a "negative zero" (you can get a negative zero from something like -1.0/Infinity or by rounding a very small negative number toward zero).
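For comparison, in IEEE 754 the negative zero is carried by the sign bit alone; a quick sketch (assuming a 32-bit IEEE float) showing the two bit patterns:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        float pz = 0.0f, nz = -0.0f;
        uint32_t pbits, nbits;
        memcpy(&pbits, &pz, sizeof pbits);
        memcpy(&nbits, &nz, sizeof nbits);

        printf("+0.0 bits: 0x%08X\n", pbits);   /* 0x00000000 */
        printf("-0.0 bits: 0x%08X\n", nbits);   /* 0x80000000 (sign bit set) */
        printf("compare equal: %s\n", pz == nz ? "yes" : "no");  /* yes */
        return 0;
    }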

Gradual underflow and the subnormals of IEEE 754, in contrast, do use the mantissa field. Only

X 00000000 00000000000000000000000

is zero. Every other bit combination is valid, and even more practically, you are warned when your result underflows; a quick check of this is sketched below.
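Here is that check (a sketch of mine, assuming a 32-bit IEEE float): the bit pattern with exponent field 0 and only the lowest mantissa bit set decodes to the smallest positive subnormal, not to zero.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <math.h>

    int main(void) {
        uint32_t bits = 0x00000001;    /* exponent field 0, mantissa 000...001 */
        float f;
        memcpy(&f, &bits, sizeof f);

        printf("value        : %g\n", f);   /* about 1.4e-45 */
        printf("is subnormal : %s\n", fpclassify(f) == FP_SUBNORMAL ? "yes" : "no");
        printf("is zero      : %s\n", f == 0.0f ? "yes" : "no");
        return 0;
    }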

So what's the point? Consider these two numbers:

A 0 00000001 10010101111001111111111
B 0 00000001 10010101111100001010000

They are valid floating-point numbers, very small but still finite. But as you can see, the first 11 mantissa bits are identical. If you now compute A-B or B-A, the first significant bit of the result falls below the lowest exponent, so without gradual underflow the result is....0. So A != B, but A - B = 0. Ouch. Countless people have fallen into this trap, and it can be assumed that many never noticed it. The same happens with multiplication and division: you need to add or subtract exponents, and if the result falls below the lower threshold: 0. And as you know: 0 * everything = 0. You could have a product S*T*X*Y*Z, and once one subproduct is 0, the result is 0 even when a completely valid and even huge number would be the correct result. It should be said that these anomalies can never be completely avoided due to rounding, but with gradual underflow they became rare. Very rare.
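The full A != B but A - B = 0 trap only shows up on hardware running in flush-to-zero mode, but the gradual-underflow behaviour itself is easy to demonstrate. A minimal sketch (assuming a 32-bit IEEE float with subnormals left enabled, i.e. no flush-to-zero / -ffast-math):

    #include <stdio.h>
    #include <float.h>
    #include <math.h>

    int main(void) {
        float a = 1.50f * FLT_MIN;   /* two distinct, very small normal numbers */
        float b = 1.25f * FLT_MIN;
        float d = a - b;             /* 0.25 * FLT_MIN: below the normal range */

        printf("a != b      : %s\n", a != b ? "yes" : "no");   /* yes */
        printf("a - b       : %g\n", d);                       /* nonzero subnormal */
        printf("d subnormal : %s\n", fpclassify(d) == FP_SUBNORMAL ? "yes" : "no");
        printf("d == 0      : %s\n", d == 0.0f ? "yes" : "no"); /* no, thanks to gradual underflow */
        return 0;
    }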

answered Sep 22 '22 by Thorsten S.