
C# decimal multiplication strange behavior

I noticed a strange behavior when multiplying decimal values in C#. Consider the following multiplication operations:

1.1111111111111111111111111111m * 1m = 1.1111111111111111111111111111 // OK
1.1111111111111111111111111111m * 2m = 2.2222222222222222222222222222 // OK
1.1111111111111111111111111111m * 3m = 3.3333333333333333333333333333 // OK
1.1111111111111111111111111111m * 4m = 4.4444444444444444444444444444 // OK
1.1111111111111111111111111111m * 5m = 5.5555555555555555555555555555 // OK
1.1111111111111111111111111111m * 6m = 6.6666666666666666666666666666 // OK
1.1111111111111111111111111111m * 7m = 7.7777777777777777777777777777 // OK
1.1111111111111111111111111111m * 8m = 8.888888888888888888888888889  // Why not 8.8888888888888888888888888888 ?
1.1111111111111111111111111111m * 9m = 10.000000000000000000000000000 // Why not 9.9999999999999999999999999999 ?

What I cannot understand is the last two of the above cases. How is that possible?

asked Aug 01 '13 by user1126360


2 Answers

decimal stores 28 or 29 significant digits in a 96-bit integer mantissa. Basically the mantissa is in the range ±79,228,162,514,264,337,593,543,950,335 (that is, 2^96 - 1).

That means up to about 7.9... you can get 29 significant digits accurately - but above that you can't. That's why both the 8 and the 9 cases go wrong while the earlier values don't. In general you should only rely on 28 significant digits, to avoid odd situations like this.
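
To see that cut-off concretely, here is a small sketch (added here for illustration, not part of the original answer) that compares the mantissa the 8-case would need against decimal.MaxValue:

using System;

class PrecisionLimit
{
    static void Main()
    {
        // The 96-bit integer mantissa cannot exceed this value:
        Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335

        // 8.8888888888888888888888888888 would need the 29-digit mantissa
        // 88888888888888888888888888888, which is larger than that maximum,
        // so the result is rounded back to 28 significant digits:
        Console.WriteLine(1.1111111111111111111111111111m * 8m); // 8.888888888888888888888888889

        // Likewise 9.9...9 (29 nines) is unrepresentable and rounds up:
        Console.WriteLine(1.1111111111111111111111111111m * 9m); // 10.000000000000000000000000000
    }
}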

Once you reduce your original input to 28 significant figures, you'll get the output you expect:

using System;

class Test
{
    static void Main()
    {
        // 28 significant digits: one fewer than in the question
        var input = 1.111111111111111111111111111m;
        for (int i = 1; i < 10; i++)
        {
            decimal output = input * (decimal) i;
            Console.WriteLine(output);
        }
    }
}
answered Oct 19 '22 by Jon Skeet


Mathematicians distinguish between the rational numbers and their superset, the real numbers. Arithmetic operations on rational numbers are well defined and precise. Arithmetic (addition, subtraction, multiplication, and division) on real numbers is "precise" only to the extent that irrational numbers are either left in symbolic form or, in some expressions, can be reduced to a rational number. For example, the square root of two has no decimal (or any other rational-base) representation; yet the square root of two multiplied by the square root of two is rational - 2, obviously.

Computers, and the languages running on them, generally implement only rational numbers - hidden behind names such as int, long int, float, double precision, or real (FORTRAN) that suggest real numbers. But the representable rationals are a finite subset, unlike the mathematical set of rational numbers, which is infinite.

A trivial example, not found on computers: 1/2 * 1/2 = 1/4. That works fine if you have a class of rational numbers AND the sizes of the numerators and denominators do not exceed the limits of integer arithmetic, so (1,2) * (1,2) -> (1,4). A sketch of such a type follows below.
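
For illustration, here is a minimal sketch of such a rational type in C#. The Rational struct and its members are hypothetical, written for this example only:

using System;

// Hypothetical minimal rational-number type: arithmetic is exact as
// long as numerator and denominator stay within the range of long.
readonly struct Rational
{
    public long Num { get; }
    public long Den { get; }

    public Rational(long num, long den)
    {
        if (den == 0) throw new DivideByZeroException();
        long g = Gcd(Math.Abs(num), Math.Abs(den));
        Num = num / g;
        Den = den / g;
    }

    static long Gcd(long a, long b) => b == 0 ? a : Gcd(b, a % b);

    public static Rational operator *(Rational x, Rational y)
        => new Rational(x.Num * y.Num, x.Den * y.Den);

    public override string ToString() => $"{Num}/{Den}";
}

class Demo
{
    static void Main()
    {
        var half = new Rational(1, 2);
        Console.WriteLine(half * half); // 1/4 - exact, no rounding
    }
}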

But suppose the available rational numbers were decimal AND limited to a single digit after the decimal point - impractical, but representative of the kind of choice made when implementing approximations of the rationals (float, real, etc.). Then 1/2 would convert perfectly to 0.5, and 0.5 + 0.5 would equal 1.0, but 0.5 * 0.5 would have to come out as either 0.2 or 0.3!
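
A small C# sketch (again an illustration added here, not from the answer) makes that forced choice visible by rounding the exact product to one decimal digit in both directions:

using System;

class RoundingDemo
{
    static void Main()
    {
        decimal product = 0.5m * 0.5m; // exactly 0.25

        // With only one digit allowed after the decimal point, the exact
        // value 0.25 has to be rounded one way or the other:
        Console.WriteLine(Math.Round(product, 1));                                // 0.2 (default banker's rounding)
        Console.WriteLine(Math.Round(product, 1, MidpointRounding.AwayFromZero)); // 0.3
    }
}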

answered Oct 19 '22 by Fred Mitchell