Is .NET “decimal” arithmetic independent of platform/architecture?

Tags: c#, .net, math

I asked about System.Double recently and was told that computations may differ depending on platform/architecture. Unfortunately, I cannot find any information to tell me whether the same applies to System.Decimal.

Am I guaranteed to get exactly the same result for any particular decimal computation independently of platform/architecture?

asked Feb 16 '11 by Timwi


2 Answers

Am I guaranteed to get exactly the same result for any particular decimal computation independently of platform/architecture?

The C# 4 spec is clear that the value you get will be computed the same on any platform.

As LukeH's answer notes, the ECMA version of the C# 2 spec grants leeway to conforming implementations to provide more precision, so an implementation of C# 2.0 on another platform might provide a higher-precision answer.

For the purposes of this answer I'll just discuss the C# 4.0 specified behaviour.

The C# 4.0 spec says:


The result of an operation on values of type decimal is that which would result from calculating an exact result (preserving scale, as defined for each operator) and then rounding to fit the representation. Results are rounded to the nearest representable value, and, when a result is equally close to two representable values, to the value that has an even number in the least significant digit position [...]. A zero result always has a sign of 0 and a scale of 0.


Since the calculation of the exact value of an operation should be the same on any platform, and the rounding algorithm is well-defined, the resulting value should be the same regardless of platform.
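As a concrete illustration of the "rounding to fit the representation" part of that rule (my own sketch, not an example taken from the spec), consider a quotient that has no exact representation:

decimal oneThird = 1m / 3m;        // the exact quotient is rounded to the nearest representable value
decimal backAgain = oneThird * 3m;
Console.WriteLine(oneThird);       // 0.3333333333333333333333333333
Console.WriteLine(backAgain);      // 0.9999999999999999999999999999 -- the rounding of the division is now visible

Because both the exact result and the rounding rule are fully specified, a conforming implementation should produce those same values on any platform.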

However, note the parenthetical and that last sentence about the zeroes. It might not be clear why that information is necessary.

One of the oddities of the decimal type is that almost every quantity has more than one possible representation. Consider the exact value 123.456. A decimal is the combination of a 96-bit integer, a 1-bit sign, and an eight-bit scale that selects a power of ten from 10⁰ down to 10⁻²⁸. That means that the exact value 123.456 could be represented by the decimals 123456 × 10⁻³ or 1234560 × 10⁻⁴ or 12345600 × 10⁻⁵. Scale matters.

The C# specification also mandates how information about scale is computed. The literal 123.456m would be encoded as 123456 × 10⁻³, and 123.4560m would be encoded as 1234560 × 10⁻⁴.

Observe the effects of this feature in action:

decimal d1 = 111.111000m;  // encoded as 111111000 x 10^-6 (scale 6)
decimal d2 = 111.111m;     // encoded as 111111 x 10^-3 (scale 3)
decimal d3 = d1 + d1;
decimal d4 = d2 + d2;
decimal d5 = d1 + d2;
Console.WriteLine(d1);
Console.WriteLine(d2);
Console.WriteLine(d3);
Console.WriteLine(d4);
Console.WriteLine(d5);
Console.WriteLine(d3 == d4);
Console.WriteLine(d4 == d5);
Console.WriteLine(d5 == d3);

This produces

111.111000
111.111
222.222000
222.222
222.222000
True
True
True

Notice how information about significant zero figures is preserved across operations on decimals, and that decimal.ToString knows about that and displays the preserved zeroes if it can. Notice also how decimal equality knows to make comparisons based on exact values, even if those values have different binary and string representations.

I don't think the spec actually says that decimal.ToString() must print trailing zeroes according to the scale, but it would be foolish of an implementation not to do so; I would consider that a bug.

I also note that the internal memory format of a decimal in the CLR implementation is 128 bits, subdivided into: 16 unused bits, 8 scale bits, 7 more unused bits, 1 sign bit and 96 mantissa bits. The exact layout of those bits in memory is not defined by the specification, and if another implementation wants to stuff additional information into those 23 unused bits for its own purposes, it can do so. In the CLR implementation the unused bits are supposed to always be zero.
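You can see that layout from code via decimal.GetBits (a BCL method, not something the language specification mandates), which returns the three 32-bit chunks of the 96-bit integer followed by a flags word holding the scale in bits 16-23 and the sign in bit 31. A quick sketch:

int[] bits = decimal.GetBits(123.4560m);
int scale = (bits[3] >> 16) & 0xFF;              // 4: the literal is encoded as 1234560 x 10^-4
bool isNegative = (bits[3] & int.MinValue) != 0; // false
Console.WriteLine("integer: {0}, {1}, {2}", bits[0], bits[1], bits[2]); // 1234560, 0, 0
Console.WriteLine("scale: {0}, negative: {1}", scale, isNegative);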

answered by Eric Lippert


Even though the format of floating point types is clearly defined, floating point calculations can indeed have differing results depending on architecture, as stated in section 4.1.6 of the C# specification:

Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an “extended” or “long double” floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type. Only at excessive cost in performance can such hardware architectures be made to perform floating-point operations with less precision, and rather than require an implementation to forfeit both performance and precision, C# allows a higher precision type to be used for all floating-point operations.

While the decimal type is subject to approximation when a value cannot be represented exactly within its finite range, that range is by definition suitable for financial and monetary calculations. It therefore has higher precision (and a smaller range) than float or double. It is also more tightly specified than the other floating point types, so it appears to be platform-independent (see section 4.1.7). I suspect this platform independence has more to do with the lack of standard hardware support for a type with the size and precision of decimal than with the type itself, so this may change with future specifications and hardware architectures.
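To make that contrast concrete, here is a small sketch of my own (not from the spec) showing the classic case where binary floating point surprises people and decimal does not:

double doubleSum = 0.1 + 0.2;
decimal decimalSum = 0.1m + 0.2m;
Console.WriteLine(doubleSum == 0.3);    // False: 0.1 and 0.2 have no exact binary representation
Console.WriteLine(decimalSum == 0.3m);  // True: each of these decimal literals is stored exactly

The decimal comparison follows directly from its exact-value semantics; the double comparison illustrates why binary floating point is a poor fit for monetary arithmetic.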

If you need to know if a specific implementation of the decimal type is correct, you should be able to craft some unit tests using the specification that will test the correctness.
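For instance, a couple of hypothetical tests along those lines (sketched with xUnit here; the framework choice and test names are mine, not anything the specification prescribes):

using System.Globalization;
using Xunit;

public class DecimalSpecTests
{
    [Fact]
    public void Addition_Preserves_Scale()
    {
        // The "preserving scale" rule quoted in the other answer implies that
        // adding two two-decimal-place values yields a two-decimal-place result.
        Assert.Equal("3.30", (1.10m + 2.20m).ToString(CultureInfo.InvariantCulture));
    }

    [Fact]
    public void Equality_Compares_Exact_Values_Not_Scales()
    {
        // 3.30m and 3.3m have different scales but represent the same exact value.
        Assert.True(3.30m == 3.3m);
    }
}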

answered by Jeff Yates