 

Arbitrary precision decimals in C#?

Tags:

c#

math

Is there an arbitrary-precision decimal class available for C#? I've seen a couple of arbitrary-precision integer classes, but that's not quite the same thing.

asked Mar 07 '09 by sassafrass



1 Answer

You can use the java.math.BigDecimal class from the J# class library if you have it installed; just add a reference to vjslib (a usage sketch follows below).

/me remembers one of the betas in which we had System.Numeric.BigDecimal, sigh
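
For reference, here is a minimal sketch of what that looks like from C#, assuming the Visual J# redistributable is installed and the project references vjslib.dll. The Java-style names (the java.math namespace, the lowercase add/divide methods, and the ROUND_* constants) are how the class surfaces through vjslib, though the exact interop details can vary with the J# version.

    // A minimal sketch of calling java.math.BigDecimal from C#.
    // Assumes Visual J# is installed and the project references vjslib.dll.
    using System;
    using java.math;   // BigDecimal ships in vjslib under its Java namespace

    class BigDecimalDemo
    {
        static void Main()
        {
            // Constructing from strings keeps the values exact; no binary
            // floating-point rounding happens on the way in.
            BigDecimal a = new BigDecimal("0.1");
            BigDecimal b = new BigDecimal("0.2");
            Console.WriteLine(a.add(b));   // 0.3 (exact, unlike double)

            // Division must be given a scale and a rounding mode, since the
            // exact quotient may not have a finite decimal expansion.
            BigDecimal one   = new BigDecimal("1");
            BigDecimal three = new BigDecimal("3");
            BigDecimal third = one.divide(three, 30, BigDecimal.ROUND_HALF_UP);
            Console.WriteLine(third);      // 0.333333333333333333333333333333
        }
    }

Constructing from strings matters: BigDecimal also has a constructor that takes a double, but that captures the binary rounding error already baked into the double, so the string constructor is usually the one you want for exact values.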
answered Sep 23 '22 by mmx