
How are fractions represented in computers?

Since computers think in terms of "1" and "0", how do they calculate and represent fractions such as 7.50? I know Java and JavaScript, so if it helps, you can use them as examples in the answer.

Edit: I was watching this MIT video on hashing by Prof. Cormen. At 46:31 he explains the multiplication hash function using a modular wheel, which is a unit circle with several points on it, where the points denote fractions. This prompted me to ask this basic question here on SO.

asked Aug 17 '12 by Geek



1 Answer

The most common way to represent numbers other than integers on computers is by using floating point, particularly IEEE 754 floating point. As you may be familiar with, integers are commonly represented by using hardware bits to represent binary numerals, so physical properties (such as charge or lack of charge, high voltage or low voltage, a magnetic field in one direction or another) are used to represent bits (0 and 1), and a sequence of those bits makes a numeral (such as 11010), which we interpret in binary to represent a number (11010₂ is 16+8+2 = 26). We do not usually think of it, but there is a “radix point” to the right of this numeral: “11010.” We only need the radix point when we have more bits to the right of it, which represent fractions. For example, 11010.11₂ is 16 + 8 + 2 + 1/2 + 1/4 = 26.75. To change from integers to floating point, we make the radix point float. In addition to the bits representing the numeral, we have some additional bits that tell us where to put the radix point.
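
For concreteness, here are those two binary numerals evaluated in a couple of lines of Java (the question mentions Java; fragments like this run as-is in jshell or inside any method):

    // The bit pattern 11010 read as a binary integer:
    int n = 0b11010;                            // 16 + 8 + 2 = 26
    // Bits to the right of the radix point contribute negative powers of two,
    // so 11010.11 in binary is:
    double x = 16 + 8 + 2 + 1.0 / 2 + 1.0 / 4;  // 26.75
    System.out.println(n + " " + x);            // prints "26 26.75"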

So, we might have three bits, say 010, to say where the radix point goes and other bits, say 1101011, to represent the value. The radix-point bits, 010, might say to move the radix point two positions left, changing “1101011.” to “11010.11”.
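
That toy format could be decoded along these lines (the field layout and names here are made up purely for illustration; this is not a real format):

    int valueBits = 0b1101011;   // the value field "1101011"
    int shiftLeft = 0b010;       // the radix-point field: move the point 2 positions left
    double value = valueBits / (double) (1 << shiftLeft);
    System.out.println(value);   // 26.75, i.e. 11010.11 in binary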

In single-precision IEEE 754, there is one sign bit (that tells us + or -), eight exponent bits, and 23 value bits (for the “significand” or “fraction”). The values 0 and 255 of the exponent bits are special. For other values of the exponent bits, we subtract 127 to get exponents ranging from -126 (shift the radix point 126 bits left) to 127 (shift the radix point 127 bits right). The significand bits are interpreted as a binary numeral, except that we modify them a little: We write “1”, then a radix point, then the 23 bits of the significand, so we have something like “1.1101011000…”. As an alternative, you can think of this as an integer: “1” then 23 bits with no inserted radix point, making a 24-bit binary numeral, but the exponent is adjusted by an extra 23 (so subtract 150 instead of 127).
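
In Java you can inspect those fields directly, since Float.floatToRawIntBits exposes the IEEE 754 encoding (the sample value 6.75 is my own choice):

    float f = 6.75f;                              // 6.75 = 1.1011₂ × 2², so the unbiased exponent is 2
    int bits = Float.floatToRawIntBits(f);
    int sign        = (bits >>> 31) & 0x1;        // 1 sign bit
    int exponent    = (bits >>> 23) & 0xFF;       // 8 exponent bits, biased by 127
    int significand =  bits         & 0x7FFFFF;   // 23 significand bits (the leading 1 is implicit)
    System.out.printf("sign=%d exponent=%d (unbiased %d) significand=0x%06X%n",
            sign, exponent, exponent - 127, significand);
    // prints: sign=0 exponent=129 (unbiased 2) significand=0x580000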

In double-precision IEEE 754, there is one sign bit, 11 exponent bits, and 52 significand bits.
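
The same decomposition for a double uses Double.doubleToRawLongBits with the wider fields:

    double d = 6.75;
    long raw  = Double.doubleToRawLongBits(d);
    long sgn  = (raw >>> 63) & 0x1L;              // 1 sign bit
    long exp  = (raw >>> 52) & 0x7FFL;            // 11 exponent bits, biased by 1023
    long frac =  raw         & 0xFFFFFFFFFFFFFL;  // 52 significand bits
    System.out.printf("sign=%d exponent=%d (unbiased %d)%n", sgn, exp, exp - 1023);
    // prints: sign=0 exponent=1025 (unbiased 2)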

There are other floating-point formats, which are less common. Some older ones use hexadecimal as a base (using the exponent to indicate shifts of four bits instead of one). An important type of floating-point format is decimal, where the exponent indicates powers of 10. In decimal floating point, the significand can be a binary integer or it can be a binary-coded decimal number (where each four bits indicates a decimal digit) or it can be a hybrid (groups of bits are used to indicate a small number of decimal digits according to a customized scheme).
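
Java's BigDecimal is a convenient way to see the decimal idea, even though it is an arbitrary-precision decimal type rather than IEEE 754 decimal floating point: it stores an integer significand together with a power-of-ten scale.

    import java.math.BigDecimal;

    BigDecimal price = new BigDecimal("7.50");
    System.out.println(price.unscaledValue());  // 750
    System.out.println(price.scale());          // 2, so the value is 750 × 10⁻²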

An important property of floating-point numbers is that they cannot represent all real numbers (even in a finite range, of course) or even all rational numbers. This compels mathematical operations to return results rounded to representable numbers, which causes no end of problems for people unfamiliar with working with floating point. This property in turn becomes a feature of decimal floating point: It is good for working with currency denominations and other human-associated numbers that are usually manipulated in decimal, because most rounding errors can be eliminated by careful use of decimal floating point. Scientists and mathematicians, who work more with nature-associated or pure numbers instead of human-contaminated numbers, tend to prefer binary floating point, because it is more widely available and is well supported by hardware.
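
The rounding is easy to see in Java: 0.1 and 0.2 have no exact binary representation, so their double sum is slightly off, while decimal arithmetic with BigDecimal gives the result a person expects.

    System.out.println(0.1 + 0.2);                   // 0.30000000000000004
    System.out.println(new java.math.BigDecimal("0.1")
            .add(new java.math.BigDecimal("0.2")));  // 0.3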

There are other ways to represent non-integer numbers in computers. Another common method is fixed point. In fixed point, a sequence of bits, such as 1101011, is interpreted with a radix point at a known, fixed position. The position would be fixed at a position useful for a specific application. So the bits 1101011 could stand for the number 11010.11₂. An advantage of fixed point is that it is easily implemented with standard hardware. To add two fixed-point numbers, we simply add them as if they were integers. To multiply two fixed-point numbers, we multiply them as if they were integers, but the result has twice as many positions after the radix point, so we either shift the bits to adjust for this or we write our code so that the results of such operations are interpreted with the known number of bits after the radix point. Some processors have instructions to support fixed point by adjusting multiplications for this effect.
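
Here is a rough Java sketch of a Q16.16 fixed-point format (the format choice and the helper names are mine, purely for illustration):

    // Q16.16 fixed point: 32-bit ints whose low 16 bits are the fraction.
    public class Fixed {
        static final int FRAC_BITS = 16;
        static final int ONE = 1 << FRAC_BITS;

        static int toFixed(double x) { return (int) Math.round(x * ONE); }
        static double toDouble(int f) { return f / (double) ONE; }

        static int add(int a, int b) { return a + b; }  // plain integer addition works as-is

        static int mul(int a, int b) {
            // An integer multiply doubles the number of fraction bits,
            // so shift right to put the radix point back where it belongs.
            return (int) (((long) a * b) >> FRAC_BITS);
        }

        public static void main(String[] args) {
            int a = toFixed(26.75), b = toFixed(0.5);
            System.out.println(toDouble(add(a, b)));  // 27.25
            System.out.println(toDouble(mul(a, b)));  // 13.375
        }
    }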

Numbers can also be scaled to integers. For example, to work with United States currency, we simply multiply dollar amounts by 100 and do all arithmetic with integers. The radix point is only inserted when displaying final results (and is interpreted when reading data from humans). Another common scaling is to represent pixel intensities (from 0 to 1) by multiplying by 255, so that fractions from 0 to 1 fit into an eight-bit byte.
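
A scaled-integer treatment of dollars in Java might look like this (the 8% tax rate is just an illustrative number):

    long priceCents = 750;                             // $7.50 stored as 750 cents
    long taxCents   = Math.round(priceCents * 0.08);   // hypothetical 8% tax -> 60 cents
    long totalCents = priceCents + taxCents;
    System.out.printf("$%d.%02d%n", totalCents / 100, totalCents % 100);  // $8.10
    int pixel = (int) Math.round(0.5 * 255);           // intensity 0.5 scaled into a byte -> 128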

There is also software to provide extended precision (use several units of the basic arithmetic type to provide additional precision) or arbitrary precision (use a dynamic number of units to provide as much precision as desired). Such software is very slow compared to hardware-supported arithmetic and is typically used only for special purposes. Additionally, extended precision has essentially the same properties as floating point; it is just that the rounding errors are smaller, not gone. Arbitrary precision has the same flaw except that its dynamic precision may allow you to make the error small enough that you can obtain a final result that is within a necessary interval (possibly with proof that you have done so).
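
In Java, BigInteger and BigDecimal are examples of this kind of software arithmetic; you can ask for as many digits as you like, and the rounding error shrinks but does not disappear:

    import java.math.BigDecimal;
    import java.math.MathContext;

    // 1/3 to 50 significant decimal digits; still rounded, just much more finely.
    BigDecimal third = BigDecimal.ONE.divide(new BigDecimal(3), new MathContext(50));
    System.out.println(third);  // prints 0.333...3 (fifty 3s)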

Another way to represent non-integers is by using fractions. You can store a numerator and a denominator, and perform arithmetic in much the same way taught in school: Multiply by multiplying numerators and multiplying denominators. Add by converting both fractions to have a common denominator, then add numerators. This sort of arithmetic is problematic because denominators very quickly become large, so you need extended precision or arbitrary precision to manage them.
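
A minimal sketch of such a rational type in Java might look like this (the class and method names are my own, not from any standard library; BigInteger is used because the numerators and denominators grow quickly under repeated arithmetic):

    import java.math.BigInteger;

    public class Rational {
        final BigInteger num, den;

        Rational(BigInteger n, BigInteger d) {        // store in lowest terms
            BigInteger g = n.gcd(d);
            num = n.divide(g);
            den = d.divide(g);
        }

        static Rational of(long n, long d) {
            return new Rational(BigInteger.valueOf(n), BigInteger.valueOf(d));
        }

        Rational multiply(Rational o) {               // multiply numerators and denominators
            return new Rational(num.multiply(o.num), den.multiply(o.den));
        }

        Rational add(Rational o) {                    // common denominator, then add numerators
            return new Rational(num.multiply(o.den).add(o.num.multiply(den)), den.multiply(o.den));
        }

        @Override public String toString() { return num + "/" + den; }

        public static void main(String[] args) {
            System.out.println(of(1, 3).add(of(1, 6)));   // 1/2, with no rounding at all
        }
    }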

You can also represent numbers symbolically or with compound expressions. For example, instead of storing the square root of two as a numerical value, you can store it with a data structure that represents the square root operation applied to the number 2. Performing any but the simplest operations with such representations requires very complicated software to manage expressions, combine them, find reductions, and so on. This sort of representation is used in specialized math software, such as Maple and Mathematica.
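
As a very small sketch of the idea in Java (these types are illustrative only; real computer-algebra systems are vastly more elaborate):

    // The square root of two is stored as an expression tree, not a rounded value.
    public class Symbolic {
        sealed interface Expr permits Num, Sqrt {}
        record Num(long value) implements Expr {}
        record Sqrt(Expr arg) implements Expr {}

        public static void main(String[] args) {
            Expr rootTwo = new Sqrt(new Num(2));  // exactly sqrt(2), left unevaluated
            System.out.println(rootTwo);          // Sqrt[arg=Num[value=2]]
        }
    }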

Finally, you can represent numbers any way you want. Our modern processors are general-purpose computing devices, up to the limits of their speed and storage capacity, so you can write algorithms that represent numbers with strings or data structures or any other technique.

answered Nov 12 '22 by Eric Postpischil