
Why is the most significant bit of a normalized number always 1?

From the Significand entry on Wikipedia:

When working in binary, the significand is characterized by its width in binary digits (bits). Because the most significant bit is always 1 for a normalized number, this bit is not typically stored and is called the "hidden bit". Depending on the context, the hidden bit may or may not be counted towards the width of the significand. For example, the same IEEE 754 double precision format is commonly described as having either a 53-bit significand, including the hidden bit, or a 52-bit significand, not including the hidden bit. The notion of a hidden bit only applies to binary representations. IEEE 754 defines the precision, p, to be the number of digits in the significand, including any implicit leading bit (e.g. precision, p, of double precision format is 53).

Why is the most significant bit always 1 for a normalized number? Can someone please explain with an example?

asked Aug 20 '12 by Geek



2 Answers

The only digits in binary are 0 and 1, and leading zeros are not significant (e.g. 1101 is the same as 00001101, the extra zeros add nothing). Therefore the most significant digit must be a 1, as that is the only other possibility.

Since you know the most significant bit must be 1, you can omit it without any loss of information. This is useful as it allows you to get an extra bit of precision.

The exception to this is the number 0, which has no leading 1 at all; it is handled in floating point (along with other special cases like NaN and +/- infinity) by giving the biased exponent a reserved value, 0 in the case of zero and denormals.
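You can see the hidden bit directly by unpacking the raw bits of a double. The sketch below (plain Python, using only the standard `struct` module) splits a 64-bit double into sign, biased exponent, and the 52 stored fraction bits, then reconstructs the value by adding the implicit leading 1 back in:

```python
import struct

def decode_double(x):
    """Unpack an IEEE 754 double into its sign bit, 11-bit biased
    exponent, and 52-bit stored fraction (the leading 1 is NOT stored)."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63
    biased_exp = (bits >> 52) & 0x7FF
    fraction = bits & ((1 << 52) - 1)
    return sign, biased_exp, fraction

# 6.0 is 1.1 (binary) x 2^2, i.e. significand 1.5, exponent 2
sign, exp, frac = decode_double(6.0)
print(sign)            # 0 (positive)
print(exp - 1023)      # 2 (unbiased exponent; double's bias is 1023)
print(hex(frac))       # 0x8000000000000 -> fraction .1000..., significand 1.1000...
# Reconstruct the value, supplying the implicit leading 1 ourselves:
value = (1 + frac / 2**52) * 2 ** (exp - 1023)
print(value)           # 6.0
```

Note that only the `.1000...` part after the point is in memory; the `1.` before it is the hidden bit the answer describes.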

answered Sep 29 '22 by verdesmarald


This is not entirely true: there is no leading 1 for special values like 0, NaN, infinity, and denormal values.

But yes, when it is a "sane" number you can always rewrite it so that the mantissa starts with 1, simply by adjusting the exponent. The equivalent exists in decimal as well: you can always write a nonzero number so that it starts with a single non-zero digit:

  0.5 = 5.0 E-01
  0.05 = 5.0 E-02
  50 = 5.0 E+01
  500 = 5.0 E+02
  etcetera.

With the distinction that in binary this single digit will always be 1. So it doesn't have to be stored; it can be inferred. That buys one extra bit of precision, attractive enough to justify the extra floating point processor logic that restores the 1 in calculations.
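The normalization step described above can be sketched in a few lines: repeatedly halve or double the value (i.e. adjust the exponent) until the significand lands in [1, 2), mirroring the decimal examples. This is an illustrative sketch, not how hardware does it:

```python
def normalize_binary(value):
    """Rewrite a positive value as significand x 2^exponent with the
    significand in [1, 2), by adjusting the exponent as described above."""
    exponent = 0
    while value >= 2:       # too big: move the binary point left
        value /= 2
        exponent += 1
    while value < 1:        # too small: move the binary point right
        value *= 2
        exponent -= 1
    return value, exponent  # significand in [1, 2) always starts with 1

print(normalize_binary(0.5))   # (1.0, -1)  -> 1.0  (binary) x 2^-1
print(normalize_binary(6.0))   # (1.5, 2)   -> 1.1  (binary) x 2^2
print(normalize_binary(80.0))  # (1.25, 6)  -> 1.01 (binary) x 2^6
```

Because the result always lies in [1, 2), its binary expansion always begins `1.`, which is exactly the digit the format leaves out. (Python's standard `math.frexp` does the same job, with the significand scaled into [0.5, 1) instead.)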

answered Sep 29 '22 by Hans Passant