I was told that decimal is implemented as a user-defined type, while other C# types like int have specific opcodes devoted to them. What's the reasoning behind this?
An integer variable is normally used to hold whole numbers, while a float variable holds real numbers with fractional parts, for example 2.449561 or -1.0587. The fractional part follows the decimal point (.), and precision determines how accurately it is represented.
Float stores an approximate value, while decimal stores an exact value. In summary: exact values like money should use decimal, and approximate values like scientific measurements should use float. When you multiply by a non-integer and then divide by that same number, decimal can lose precision while float does not.
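A quick sketch of that difference (using double as the binary floating-point case; decimal keeps base-10 digits exactly, while binary floating point rounds):

```csharp
using System;

class FloatVsDecimalDemo
{
    static void Main()
    {
        // decimal stores base-10 digits exactly, so 0.1 + 0.2 is exactly 0.3 ...
        Console.WriteLine(0.1m + 0.2m == 0.3m);   // True
        // ... but 0.1 and 0.2 have no exact binary representation
        Console.WriteLine(0.1 + 0.2 == 0.3);      // False

        // Dividing by 3 and multiplying back shows the reverse effect:
        decimal d = 1m / 3m;                      // 0.3333333333333333333333333333
        Console.WriteLine(d * 3m);                // 0.9999999999999999999999999999

        double x = 1.0 / 3.0;                     // ~0.3333333333333333
        Console.WriteLine(x * 3.0 == 1.0);        // True (the rounding happens to land back on 1)
    }
}
```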
In C programming, the float data type is used to store floating-point values, that is, decimal and exponential numbers, with single precision.
decimal isn't alone here; DateTime, TimeSpan, Guid, etc. are also custom types. I guess the main reason is that they don't map to CPU primitives. float (IEEE 754), int, etc. are pretty ubiquitous here, but decimal is bespoke to .NET.
This only really causes a problem if you want to talk to the operators directly via reflection (since they don't exist for int etc). I can't think of any other scenarios where you'd notice the difference.
(actually, there are still structs etc to represent the others - they are just lacking most of what you might expect to be in them, such as operators)
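To illustrate the reflection point above: the CLR exposes overloaded operators as static methods with standard names such as op_Addition, and System.Decimal has them while System.Int32 does not. A minimal sketch:

```csharp
using System;
using System.Reflection;

class OperatorReflectionDemo
{
    static void Main()
    {
        // decimal's arithmetic is implemented as ordinary operator methods in the BCL,
        // so op_Addition is discoverable and invocable via reflection.
        MethodInfo decimalAdd = typeof(decimal).GetMethod(
            "op_Addition", new[] { typeof(decimal), typeof(decimal) });
        Console.WriteLine(decimalAdd != null);                                    // True
        Console.WriteLine(decimalAdd.Invoke(null, new object[] { 1.1m, 2.2m }));  // 3.3

        // int addition is a single IL opcode, so there is no op_Addition on System.Int32.
        MethodInfo intAdd = typeof(int).GetMethod(
            "op_Addition", new[] { typeof(int), typeof(int) });
        Console.WriteLine(intAdd == null);                                        // True
    }
}
```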
"What's the reasoning behind this?"
Decimal math is handled in software rather than hardware. Currently, many processors don't support native decimal (financial decimal, as opposed to float) math. That's changing, though, with the adoption of IEEE 754R.
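You can see the software-versus-hardware split in what the compiler emits: integer addition becomes a single IL add opcode that the JIT maps straight to a CPU instruction, whereas decimal addition becomes a call into the Decimal implementation in the base class library. Roughly (the IL in the comments is illustrative and may vary by compiler version and build settings):

```csharp
static class AddExamples
{
    static int AddInts(int a, int b)
    {
        // IL (release build): ldarg.0, ldarg.1, add, ret
        // 'add' maps directly to a hardware instruction.
        return a + b;
    }

    static decimal AddDecimals(decimal a, decimal b)
    {
        // IL (release build): ldarg.0, ldarg.1,
        //     call System.Decimal::op_Addition(decimal, decimal), ret
        // i.e. a call into a software routine, since there is no decimal CPU opcode.
        return a + b;
    }
}
```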