Why can't decimal numbers be represented exactly in binary?

There have been several questions posted to SO about floating-point representation. For example, the decimal number 0.1 doesn't have an exact binary representation, so it's dangerous to use the == operator to compare it to another floating-point number. I understand the principles behind floating-point representation.
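For concreteness, this is the sort of thing I mean (a quick Java sketch; the class name is just for illustration):

public class CompareDoubles {
    public static void main(String[] args) {
        double sum = 0.1 + 0.2;

        // Prints false: none of 0.1, 0.2, 0.3 is stored exactly,
        // and the rounding errors don't happen to cancel.
        System.out.println(sum == 0.3);

        // Prints 0.30000000000000004
        System.out.println(sum);
    }
}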

What I don't understand is why, from a mathematical perspective, the numbers to the right of the decimal point are any more "special" than the ones to the left.

For example, the number 61.0 has an exact binary representation because the integral portion of any number is always exact. But the number 6.10 is not exact. All I did was move the decimal one place and suddenly I've gone from Exactopia to Inexactville. Mathematically, there should be no intrinsic difference between the two numbers -- they're just numbers.

By contrast, if I move the decimal one place in the other direction to produce the number 610, I'm still in Exactopia. I can keep going in that direction (6100, 610000000, 610000000000000) and they're still exact, exact, exact. But as soon as the decimal crosses some threshold, the numbers are no longer exact.

What's going on?

Edit: to clarify, I want to stay away from discussion about industry-standard representations, such as IEEE, and stick with what I believe is the mathematically "pure" way. In base 10, the positional values are:

... 1000  100   10    1   1/10  1/100 ... 

In binary, they would be:

... 8    4    2    1    1/2  1/4  1/8 ... 

There are also no arbitrary limits placed on these numbers. The positions increase indefinitely to the left and to the right.
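To make that concrete, here's a small sketch (Java, using BigDecimal so binary rounding doesn't creep into the demonstration itself) that expands a decimal fraction into those binary place values by repeated doubling:

import java.math.BigDecimal;

public class BinaryExpansion {
    // Print up to `maxBits` binary digits of a fraction in [0, 1),
    // using exact decimal arithmetic so no rounding sneaks in.
    static void expand(String fraction, int maxBits) {
        BigDecimal x = new BigDecimal(fraction);
        StringBuilder bits = new StringBuilder("0.");
        for (int i = 0; i < maxBits && x.signum() != 0; i++) {
            x = x.multiply(BigDecimal.valueOf(2));   // shift the binary point one place right
            if (x.compareTo(BigDecimal.ONE) >= 0) {
                bits.append('1');
                x = x.subtract(BigDecimal.ONE);
            } else {
                bits.append('0');
            }
        }
        System.out.println(fraction + " -> " + bits + (x.signum() != 0 ? "..." : ""));
    }

    public static void main(String[] args) {
        expand("0.5", 20);    // 0.5 -> 0.1                       (terminates)
        expand("0.25", 20);   // 0.25 -> 0.01                     (terminates)
        expand("0.1", 20);    // 0.1 -> 0.00011001100110011001... (never terminates)
    }
}

One doubling step finishes 0.5, two finish 0.25, but 0.1 falls into the repeating 0011 cycle and never finishes, which is exactly the Exactopia/Inexactville divide.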

asked Jul 06 '09 by Barry Brown

People also ask

Why can 1/10 be represented exactly in decimal but not in binary?

Some numbers can be represented exactly in a given base, but only with infinitely many digits. 1/10, which is conveniently 0.1 in decimal, is one of them in binary: its binary expansion never terminates, so no fixed number of bits can hold it exactly.
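A small Java illustration of that point (the BigDecimal(double) constructor shows the exact value a binary type actually stores): more bits get you closer to 1/10, but no fixed number of bits ever lands on it.

import java.math.BigDecimal;

public class ClosestToTenth {
    public static void main(String[] args) {
        // Exact value of the nearest 32-bit float to 0.1:
        // 0.100000001490116119384765625
        System.out.println(new BigDecimal(0.1f));

        // Exact value of the nearest 64-bit double to 0.1:
        // 0.1000000000000000055511151231257827021181583404541015625
        System.out.println(new BigDecimal(0.1));
    }
}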

Why can't computers use ordinary decimal numbers as their data representation?

Because binary lends itself naturally to an on-off system, where "on" means current flows and "off" means it doesn't. Distinguishing ten different voltage levels would be far more error-prone. You could, however, use binary digits to represent decimal digits; this is called binary-coded decimal (BCD).
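Just to illustrate the idea (a hypothetical toBcd helper, not any standard API): BCD packs one decimal digit into each four-bit nibble.

public class BcdEncode {
    // Pack a string of decimal digits into BCD: one digit per 4-bit nibble.
    static byte[] toBcd(String digits) {
        byte[] out = new byte[(digits.length() + 1) / 2];
        for (int i = 0; i < digits.length(); i++) {
            int d = digits.charAt(i) - '0';
            int shift = (i % 2 == 0) ? 4 : 0;   // high nibble first
            out[i / 2] |= (byte) (d << shift);
        }
        return out;
    }

    public static void main(String[] args) {
        for (byte b : toBcd("610")) {
            System.out.printf("%02X ", b);      // prints: 61 00 (last nibble is padding)
        }
        System.out.println();
    }
}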

What numbers cannot be represented exactly in floating-point?

Consider 0.1 again: in binary it is 0.000110011001100..., with the block 0011 repeating forever. A repeat bar is a finite way of writing that infinite string of digits, but that doesn't help floating-point. Floating-point does not represent numbers using repeat bars; it represents them with a fixed number of bits, so the repeating expansion has to be cut off somewhere.
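One way to see that "fixed number of bits" directly (a small Java sketch using the standard Double.doubleToLongBits): the repeating pattern is simply chopped off and rounded to fit the fraction field.

public class StoredBits {
    public static void main(String[] args) {
        // The raw 64-bit pattern stored for 0.1. The fraction field holds the
        // repeating 0011 0011 ... cycle, cut off (and rounded) at 52 bits,
        // with no "repeat bar" anywhere.
        System.out.println(Long.toBinaryString(Double.doubleToLongBits(0.1)));
    }
}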


1 Answer

Decimal numbers can be represented exactly, if you have enough space - just not by floating binary point numbers. If you use a floating decimal point type (e.g. System.Decimal in .NET) then plenty of values which can't be represented exactly in binary floating point can be exactly represented.
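For example, a small sketch using java.math.BigDecimal (the rough Java analogue of System.Decimal): adding 0.1 ten times is exact in a floating decimal type and inexact in a binary double.

import java.math.BigDecimal;

public class DecimalVsBinary {
    public static void main(String[] args) {
        double d = 0.0;
        BigDecimal bd = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            d += 0.1;                             // binary floating point
            bd = bd.add(new BigDecimal("0.1"));   // decimal floating point
        }

        System.out.println(d);                               // 0.9999999999999999
        System.out.println(d == 1.0);                        // false
        System.out.println(bd);                              // 1.0
        System.out.println(bd.compareTo(BigDecimal.ONE) == 0); // true
    }
}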

Let's look at it another way - in base 10 which you're likely to be comfortable with, you can't express 1/3 exactly. It's 0.3333333... (recurring). The reason you can't represent 0.1 as a binary floating point number is for exactly the same reason. You can represent 3, and 9, and 27 exactly - but not 1/3, 1/9 or 1/27.
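You can watch that happen in a floating decimal type: BigDecimal will only perform exact division when the expansion terminates (a small sketch; the exception message is paraphrased in the comment).

import java.math.BigDecimal;

public class ThirdsAndQuarters {
    public static void main(String[] args) {
        BigDecimal one = BigDecimal.ONE;

        // 1/4 terminates in base 10, so an exact quotient exists.
        System.out.println(one.divide(new BigDecimal(4)));   // 0.25

        // 1/3 does not terminate in base 10; with no rounding mode supplied,
        // BigDecimal throws ArithmeticException rather than truncate silently.
        System.out.println(one.divide(new BigDecimal(3)));
    }
}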

The problem is that 3 is a prime number which isn't a factor of 10. That's not an issue when you want to multiply a number by 3: you can always multiply by an integer without running into problems. But when you divide by a number which is prime and isn't a factor of your base, you can run into trouble (and will do so if you try to divide 1 by that number).

Although 0.1 is usually used as the simplest example of an exact decimal number which can't be represented exactly in binary floating point, arguably 0.2 is a simpler example as it's 1/5 - and 5 is the prime that causes problems between decimal and binary.
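That prime-factor rule is easy to turn into a check. A small sketch (the terminates helper is hypothetical, just for illustration): 1/d has a finite expansion in base b exactly when every prime factor of d also divides b.

public class TerminatesInBase {
    // Strip from d every prime factor it shares with b; 1/d terminates in
    // base b exactly when nothing is left over.
    static boolean terminates(long d, long b) {
        for (long g = gcd(d, b); g > 1; g = gcd(d, b)) {
            while (d % g == 0) d /= g;
        }
        return d == 1;
    }

    static long gcd(long a, long b) {
        return b == 0 ? a : gcd(b, a % b);
    }

    public static void main(String[] args) {
        System.out.println(terminates(10, 2));   // false: 1/10 repeats in binary
        System.out.println(terminates(10, 10));  // true:  1/10 = 0.1 in decimal
        System.out.println(terminates(5, 2));    // false: 1/5 repeats in binary
        System.out.println(terminates(5, 10));   // true:  1/5 = 0.2 in decimal
        System.out.println(terminates(3, 10));   // false: 1/3 repeats in decimal
    }
}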


Side note to deal with the problem of finite representations:

Some floating decimal point types have a fixed size like System.Decimal others like java.math.BigDecimal are "arbitrarily large" - but they'll hit a limit at some point, whether it's system memory or the theoretical maximum size of an array. This is an entirely separate point to the main one of this answer, however. Even if you had a genuinely arbitrarily large number of bits to play with, you still couldn't represent decimal 0.1 exactly in a floating binary point representation. Compare that with the other way round: given an arbitrary number of decimal digits, you can exactly represent any number which is exactly representable as a floating binary point.
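That last asymmetry is easy to demonstrate: every finite double is an integer times a power of two, and since 1/2 = 5/10, its exact value always has a finite decimal expansion, which BigDecimal's double constructor prints in full.

import java.math.BigDecimal;

public class FiniteDecimal {
    public static void main(String[] args) {
        // Exact decimal expansions of the stored binary values: long, but finite.
        System.out.println(new BigDecimal(0.5));        // 0.5
        System.out.println(new BigDecimal(1.0 / 3.0));  // many digits, but it stops
    }
}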

answered Oct 07 '22 by Jon Skeet