 

What's the benefit of accepting floating-point inaccuracy in C#?

I've had this problem on my mind the last few days, and I'm struggling to phrase my question. However, I think I've nailed what I want to know.

Why does C# accept the inaccuracy that comes with using floating point to store data? And what's the benefit of using it over other methods?

For example, Math.Pow(Math.Sqrt(2), 2) is not exact in C#. There are programming languages that can calculate it exactly (for example, Mathematica).
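
A minimal sketch of what I mean; the exact digits printed depend on the runtime and hardware:

    using System;

    class Program
    {
        static void Main()
        {
            double result = Math.Pow(Math.Sqrt(2), 2);

            // "R" prints a round-trippable representation of the double.
            Console.WriteLine(result.ToString("R"));  // e.g. 2.0000000000000004
            Console.WriteLine(result == 2.0);         // typically False
        }
    }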

One argument I could think of is that calculating it exactly is a lot slower than just coping with the inaccuracy, but Mathematica and MATLAB are used to calculate gigantic scientific problems, so I find it hard to believe those languages are really significantly slower than C#.

So why is it then?

PS: I'm sorry for spamming you with these questions; you've all been really helpful.

asked Jan 05 '11 by Timo Willemsen


2 Answers

Why does C# accept the inaccuracy that comes with using floating point to store data?

"C#" doesn't accept the tradeoff of performance over accuracy; users do, or do not, accept that.

C# has three floating point types - float, double and decimal - because those three types meet the vast majority of the needs of real-world programmers.

float and double are good for "scientific" calculations where an answer that is correct to three or four decimal places is always close enough, because that's the precision that the original measurement came in with. Suppose you divide 10.00 by 3 and get 3.333333333333. Since the original measurement was probably accurate to only 0.01, the fact that the computed result is off by less than 0.0000000000004 is irrelevant. In scientific calculations, you're not representing known-to-be-exact quantities. Imprecision in the fifteenth decimal place is irrelevant if the original measurement value was only precise to the second decimal place.
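
To illustrate that point, here is a minimal sketch; the exact digits shown in the comment depend on how the runtime formats the value:

    using System;

    class ScientificExample
    {
        static void Main()
        {
            double measured = 10.00;   // a measurement only known to about 0.01
            double result = measured / 3.0;

            // Prints something like 3.3333333333333335; the error in the 16th
            // significant digit is far smaller than the 0.01 measurement error,
            // so it cannot affect any conclusion drawn from the data.
            Console.WriteLine(result.ToString("G17"));
        }
    }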

This is of course not true of financial calculations. The operands to a financial calculation are usually precise to two decimal places and represent exact quantities. Decimal is good for "financial" calculations because decimal operation results are exact provided that all of the inputs and outputs can be represented exactly as decimals (and they are all in a reasonable range). Decimals still have rounding errors, of course, but the operations which are exact are precisely those that you are likely to want to be exact when doing financial calculations.
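
A small sketch of that difference; the results in the comments are what you would typically see with standard IEEE doubles and System.Decimal:

    using System;

    class FinancialExample
    {
        static void Main()
        {
            // double accumulates binary representation error for 0.1 and 0.2
            double d = 0.1 + 0.2;
            Console.WriteLine(d == 0.3);          // False on typical hardware

            // decimal represents these two-decimal-place amounts exactly
            decimal m = 0.1m + 0.2m;
            Console.WriteLine(m == 0.3m);         // True

            // but decimal still rounds when a result is not representable
            decimal third = 1m / 3m;
            Console.WriteLine(third * 3m == 1m);  // False: 0.9999999999999999999999999999
        }
    }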

And what's the benefit of using it over other methods?

You should state what other methods you'd like to compare against. There are a great many different techniques for performing calculations on computers.

For example, Math.Pow(Math.Sqrt(2), 2) is not exact in C#. There are programming languages that can calculate it exactly (for example, Mathematica).

Let's be clear on this point: Mathematica does not "calculate" root 2 exactly; the number is irrational, so it cannot be calculated exactly in any finite amount of storage. Instead, what Mathematica does is represent numbers as objects that describe how the number was produced. If you say "give me the square root of two", then Mathematica essentially allocates an object that means "the application of the square root operator to the exact number 2". If you then square that, it has special-purpose logic that says "if you square something that was the square root of something else, give back the original value". Mathematica has objects that represent various special numbers as well, like pi or e, and a huge body of rules for how various manipulations of those numbers combine together.

Basically, it is a symbolic system; it manipulates numbers the same way people do when they do pencil-and-paper math. Most computer programs manipulate numbers like a calculator: perform the calculation immediately and round it off. If that is not acceptable then you should stick to a symbolic system.
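
To make the idea concrete, here is a deliberately tiny, hypothetical sketch of that symbolic approach in C#. It is nothing like Mathematica's real engine; it encodes only the single rule described above, that squaring a square root gives back the original expression:

    using System;

    // A toy symbolic expression: either an exact integer or sqrt(expr).
    abstract class Expr
    {
        public abstract Expr Square();
    }

    class IntExpr : Expr
    {
        public readonly long Value;
        public IntExpr(long value) { Value = value; }
        public override Expr Square() => new IntExpr(Value * Value);
        public override string ToString() => Value.ToString();
    }

    class SqrtExpr : Expr
    {
        public readonly Expr Inner;
        public SqrtExpr(Expr inner) { Inner = inner; }
        // Rule: squaring a square root yields the original expression, exactly.
        public override Expr Square() => Inner;
        public override string ToString() => $"sqrt({Inner})";
    }

    class Demo
    {
        static void Main()
        {
            Expr root2 = new SqrtExpr(new IntExpr(2));
            Console.WriteLine(root2.Square());   // prints "2": no rounding ever happened
        }
    }

A real computer algebra system needs thousands of such rules and a general simplifier to apply them, which is exactly the complexity the next paragraph is talking about.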

One argument I could think of is that calculating it exactly is a lot slower than just coping with the inaccuracy, but Mathematica and MATLAB are used to calculate gigantic scientific problems, so I find it hard to believe those languages are really significantly slower than C#.

It's not that they're slower, though floating-point multiplication really is incredibly fast on modern hardware. It's that the symbolic calculation engine is immensely complex. It encodes all the rules of basic mathematics, and there are a lot of those rules! C# is not intended to be a professional-grade symbolic computation engine; it's intended to be a general-purpose programming language.

answered Oct 05 '22 by Eric Lippert


One word: performance. Floating-point arithmetic is typically implemented in hardware and is many orders of magnitude faster than other approaches.
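
A rough illustration of the gap; this is a toy microbenchmark, not a rigorous measurement, absolute timings vary by machine, and decimal here is only a stand-in for "an approach not implemented directly in hardware":

    using System;
    using System.Diagnostics;

    class Benchmark
    {
        static void Main()
        {
            const int n = 10000000;

            // Hardware-supported double multiplication.
            var sw = Stopwatch.StartNew();
            double d = 1.0000001;
            for (int i = 0; i < n; i++) d *= 1.0000001;
            sw.Stop();
            Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms (result {d})");

            // Software-implemented decimal multiplication.
            sw.Restart();
            decimal m = 1.0000001m;
            for (int i = 0; i < n; i++) m *= 1.0000001m;
            sw.Stop();
            Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms (result {m})");
        }
    }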

What's more, your example of MATLAB is bogus: MATLAB uses double-precision floating-point arithmetic, just like C#.

answered Oct 05 '22 by David Heffernan