
How does the computer convert a binary number into its decimal equivalent in 2's complement?

(My question is related to 2's complement only)

Suppose I give you the binary number 11111110, which is stored as two's complement on a machine, and I want you to find its decimal equivalent. Some may say it is -2 while others may say it is 254, as they don't know whether it's signed or unsigned. (I know it is a signed number, so I took its complement and added 1, which gave me 2, so the answer is -2. But if I didn't know the sign, I would have said 254.)

In short, how does the computer convert such a binary representation, stored in 2's complement, into its decimal equivalent without making mistakes?

Does the computer know about its sign? (If yes, then where is this information stored?)

dimSutar asked Mar 28 '13


1 Answer

Technically you cannot convert a binary-represented number into a decimal one, because computers have no storage facility for decimal numbers.

Practically this might sound absurd, since we are always dealing with numbers in decimal representation. But these decimal representations are never actually stored as decimal. The only thing a computer does is convert a number into a decimal representation when displaying it, and that conversion is a matter of program construction and library design.

I'll give a small example in C. In C you have signed and unsigned integer variables. When you write a program, these variables are used to store numbers in memory. Who knows about their signs? The compiler. Assembly languages have signed and unsigned operations, and the compiler keeps track of the sign of every variable and generates the appropriate instructions for each case. So your compiled program works correctly with both signed and unsigned integers.

Assume you use a printf statement to print an integer variable, with the %d format specifier to get the value in decimal representation. The conversion is handled by the printf function defined in the standard input/output library of C. The function takes the value, then converts its binary representation to a decimal representation using a simple base conversion algorithm. But the target of the algorithm is a char sequence, not an integer, so the algorithm does two things at once: it converts the binary representation to decimal, and it converts each digit to a char value (an ASCII code, to be precise). printf must know the sign of the number to carry out the conversion correctly, and that information comes from the format specifier you wrote: %d tells printf to interpret the bits as a signed integer, while %u tells it to treat the very same bits as unsigned.

Other programming languages follow similar paths. In essence, numbers are always stored in binary. Whether a value is signed or unsigned is known to the compiler or interpreter, and is thus common knowledge. The decimal conversion is carried out only for cosmetic reasons, and the target of the conversion is a char sequence or a string.

infiniteRefactor answered Nov 15 '22