 

Why is decimal in C# different from other C# types?

Tags:

c#

I was told that decimal is implemented as a user-defined type, while other C# types like int have specific IL opcodes devoted to them. What's the reasoning behind this?

asked Dec 04 '08 by suhair



2 Answers

decimal isn't alone here; DateTime, TimeSpan, Guid, etc. are also custom types. I guess the main reason is that they don't map to CPU primitives. float (IEEE 754), int, etc. are pretty ubiquitous here, but decimal is bespoke to .NET.

This only really causes a problem if you want to talk to the operators directly via reflection, since they don't exist for int and the other primitives. I can't think of any other scenarios where you'd notice the difference.

(Actually, there are still structs to represent the others; they are just lacking most of what you might expect to be in them, such as operator methods.)
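For example, you can see the split via reflection. This is a quick sketch of my own (not from the original answer); it relies on the compiler's standard op_Addition naming for overloaded operators:

    using System;
    using System.Reflection;

    class OperatorReflectionDemo
    {
        static void Main()
        {
            // decimal's arithmetic is implemented as ordinary static
            // operator methods on System.Decimal, so reflection finds them.
            MethodInfo decimalAdd = typeof(decimal).GetMethod(
                "op_Addition", new[] { typeof(decimal), typeof(decimal) });
            Console.WriteLine(decimalAdd != null);   // True

            // int addition compiles straight to the 'add' IL opcode;
            // System.Int32 declares no operator method to reflect over.
            MethodInfo intAdd = typeof(int).GetMethod(
                "op_Addition", new[] { typeof(int), typeof(int) });
            Console.WriteLine(intAdd == null);       // True
        }
    }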

answered Sep 26 '22 by Marc Gravell


"What's the reasoning behind this?"

Decimal math is handled in software rather than hardware. Most current processors don't natively support decimal (base-10 "financial" decimal, as opposed to binary float) math. That's changing, though, with the adoption of IEEE 754R (since ratified as IEEE 754-2008, which adds decimal floating-point formats).
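As a quick illustration of why that software-implemented base-10 type exists at all (my own sketch, not part of the original answer): binary floating point can't store 0.1 exactly, while decimal can.

    using System;

    class DecimalVsDoubleDemo
    {
        static void Main()
        {
            // double is binary (base-2) floating point; 0.1 and 0.2 have
            // no exact base-2 representation, so the sum drifts.
            double d = 0.1 + 0.2;
            Console.WriteLine(d == 0.3);    // False

            // decimal is base-10 floating point implemented in software,
            // so 0.1m, 0.2m and 0.3m are all stored exactly.
            decimal m = 0.1m + 0.2m;
            Console.WriteLine(m == 0.3m);   // True
        }
    }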

See also:

  • Decimal vs Double Speed
  • http://software.intel.com/en-us/blogs/2008/03/06/intel-decimal-floating-point-math-library/
  • http://grouper.ieee.org/groups/754/
answered Sep 25 '22 by Corbin March