
Why are System.Math and, for example, MathNet.Numerics based on double?

All the methods in System.Math take double as parameters and return double. The constants are also of type double. I checked out MathNet.Numerics, and the same seems to be the case there.

Why is this? Especially for constants. Isn't decimal supposed to be more exact? Wouldn't that often be kind of useful when doing calculations?
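To illustrate what I mean (a quick sketch, with placeholder values):

```csharp
using System;

class Example
{
    static void Main()
    {
        decimal price = 10.25m;

        // Math.Sqrt only accepts and returns double, so a decimal value has to be
        // cast on the way in and on the way out.
        decimal root = (decimal)Math.Sqrt((double)price);
        Console.WriteLine(root);

        // The constants are doubles as well; there is no decimal Math.PI.
        double circumference = 2 * Math.PI * 3.0;
        Console.WriteLine(circumference);
    }
}
```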

Svish asked Nov 05 '09

1 Answer

This is a classic speed-versus-accuracy trade-off.
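A rough sketch of that trade-off (the loop, counts, and timings are illustrative only and will vary by machine and runtime):

```csharp
using System;
using System.Diagnostics;

class SpeedSketch
{
    static void Main()
    {
        const int n = 1_000_000;

        // double arithmetic maps directly onto hardware floating-point instructions.
        var sw = Stopwatch.StartNew();
        double dSum = 0;
        for (int i = 1; i <= n; i++) dSum += 1.0 / i;
        sw.Stop();
        Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms ({dSum})");

        // decimal arithmetic is implemented in software and is typically much slower.
        sw = Stopwatch.StartNew();
        decimal mSum = 0;
        for (int i = 1; i <= n; i++) mSum += 1.0m / i;
        sw.Stop();
        Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms ({mSum})");
    }
}
```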

However, keep in mind that, for pi for example, the most digits you will ever need is 41:

The largest number of digits of pi that you will ever need is 41. To compute the circumference of the universe with an error less than the diameter of a proton, you need 41 digits of pi †. It seems safe to conclude that 41 digits is sufficient accuracy in pi for any circle measurement problem you're likely to encounter. Thus, in the over one trillion digits of pi computed in 2002, all digits beyond the 41st have no practical value.
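For comparison, Math.PI already carries roughly 16 of those digits, which is plenty for everyday geometry (a small sketch; the reference digits are the standard published value of pi):

```csharp
using System;

class PiPrecisionDemo
{
    static void Main()
    {
        // Pi to 30 decimal places, for reference.
        const string piReference = "3.141592653589793238462643383279";

        // Math.PI is a double, so it carries about 15-17 significant digits.
        Console.WriteLine(Math.PI.ToString("G17")); // 3.1415926535897931
        Console.WriteLine(piReference);

        // For a circle the size of the Earth (diameter ~1.2742e7 m), the rounding
        // error in Math.PI shifts the circumference by only about a nanometre.
        double circumference = Math.PI * 1.2742e7;
        Console.WriteLine(circumference);
    }
}
```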

In addition, decimal and double have different internal storage structures. Decimals are designed to store base-10 data, whereas doubles (and floats) are made to hold binary data. On a binary machine (like every computer in existence), a double will have fewer wasted bits when storing any number within its range.
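A quick way to see the difference between the two storage bases (a minimal sketch):

```csharp
using System;

class RepresentationDemo
{
    static void Main()
    {
        // double is binary floating point: 0.1 has no exact base-2 representation.
        double d = 0.1;
        Console.WriteLine(d.ToString("G17")); // 0.10000000000000001

        // decimal is base-10 floating point: 0.1 is stored exactly.
        decimal m = 0.1m;
        Console.WriteLine(m); // 0.1

        // Neither base escapes rounding for values it cannot express exactly.
        Console.WriteLine(1m / 3m); // 0.3333333333333333333333333333
    }
}
```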

Also consider:

System.Double      8 bytes    Approximately ±5.0e-324 to ±1.7e+308 with 15 or 16 significant figures
System.Decimal    16 bytes    Approximately ±1.0e-28 to ±7.9e+28 with 28 or 29 significant figures

As you can see, decimal has a smaller range, but a higher precision.
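A small sanity check of those figures (a sketch; decimal is a 128-bit type, hence 16 bytes):

```csharp
using System;

class RangeDemo
{
    static void Main()
    {
        Console.WriteLine(sizeof(double));   // 8
        Console.WriteLine(sizeof(decimal));  // 16

        Console.WriteLine(double.MaxValue);  // ~1.8e+308
        Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335 (~7.9e+28)

        // A magnitude that is trivial for a double overflows a decimal.
        try
        {
            Console.WriteLine((decimal)1e30);
        }
        catch (OverflowException)
        {
            Console.WriteLine("1e30 does not fit in a decimal.");
        }
    }
}
```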

John Gietzen answered Oct 13 '22