Mathematically determine the precision and scale of a decimal value

I have been looking for a way to determine the scale and precision of a decimal in C#, which led me to several SO questions, yet none of them seems to help: they either have incorrect answers, have misleading titles (they are really about SQL Server or some other database, not C#), or have no answers at all. The following post, I think, is the closest to what I'm after, but even this seems wrong:

Determine the decimal precision of an input number

First, there seems to be some confusion about the difference between scale and precision. Per Google (per MSDN):

Precision is the number of digits in a number. Scale is the number of digits to the right of the decimal point in a number.

With that being said, the number 12345.67890M would have a scale of 5 and a precision of 10. I have not discovered a single code example that would accurately calculate this in C#.

I want to make two helper methods, decimal.Scale() and decimal.Precision(), such that the following unit test passes:

[TestMethod]
public void ScaleAndPrecisionTest()
{
    //arrange 
    var number = 12345.67890M;

    //act
    var scale = number.Scale();
    var precision = number.Precision();

    //assert
    Assert.IsTrue(precision == 10);
    Assert.IsTrue(scale == 5);
}

but I have yet to find a snippet that will do this, though several people have suggested using decimal.GetBits(), and others have said to convert it to a string and parse it.

Converting it to a string and parsing it is, in my mind, an awful idea, even disregarding the localization issue with the decimal point. The math behind the GetBits() method, however, is like Greek to me.

Can anyone describe what the calculations would look like for determining scale and precision in a decimal value for C#?

asked Nov 03 '15 by Jeremy Holovacs


2 Answers

This is how you get the scale using the GetBits() function:

decimal x = 12345.67890M;
int[] bits = decimal.GetBits(x);
byte scale = (byte) ((bits[3] >> 16) & 0x7F); 
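As a side note (my sketch, not part of the answer's code): decimal stores its scale explicitly, so trailing zeros in a literal survive, which is why 12345.67890M reports a scale of 5 rather than 4:

decimal a = 12345.6789M;   // scale 4
decimal b = 12345.67890M;  // scale 5 -- the trailing zero is preserved
Console.WriteLine((decimal.GetBits(a)[3] >> 16) & 0x7F); // 4
Console.WriteLine((decimal.GetBits(b)[3] >> 16) & 0x7F); // 5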

And the best way I can think of to get the precision is by removing the decimal point (i.e. using the Decimal constructor to reconstruct the number without the scale obtained above) and then using the logarithm:

decimal x = 12345.67890M;
int[] bits = decimal.GetBits(x);
// Use false for the sign (false = positive), because we don't care about it,
// and 0 for the last argument instead of bits[3] to drop the decimal point.
decimal xx = new Decimal(bits[0], bits[1], bits[2], false, 0);
int precision = (int)Math.Floor(Math.Log10((double)xx)) + 1;
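One caveat worth hedging on: casting the reconstructed 96-bit integer to double can round near decimal's 28-29 digit limit, so Math.Log10 may occasionally be off by one there. If that matters, a plain digit-counting loop (a sketch of mine, not the answer's code) avoids floating point entirely:

static int CountDigits(decimal value)
{
    int[] bits = decimal.GetBits(value);
    // Rebuild the 96-bit integer part with the sign cleared and scale 0.
    decimal n = new Decimal(bits[0], bits[1], bits[2], false, 0);
    int digits = 0;
    do
    {
        n = decimal.Truncate(n / 10);
        digits++;
    } while (n > 0);
    return digits; // note: this counts 0 as having one digit
}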

Now we can put them into extension methods:

public static class Extensions
{
    public static int GetScale(this decimal value)
    {
        if (value == 0)
            return 0;
        int[] bits = decimal.GetBits(value);
        return (int)((bits[3] >> 16) & 0x7F);
    }

    public static int GetPrecision(this decimal value)
    {
        if (value == 0)
            return 0;
        int[] bits = decimal.GetBits(value);
        // Use false for the sign (false = positive), because we don't care about it,
        // and 0 for the last argument instead of bits[3] to drop the decimal point.
        decimal d = new Decimal(bits[0], bits[1], bits[2], false, 0);
        return (int)Math.Floor(Math.Log10((double)d)) + 1;
    }
}
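As a quick sanity check (using the answer's GetScale/GetPrecision names rather than the Scale()/Precision() names from the question), the values from the question's unit test come out as expected:

var number = 12345.67890M;
Console.WriteLine(number.GetScale());     // 5
Console.WriteLine(number.GetPrecision()); // 10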


answered Nov 14 '22 by Racil Hilan


First of all, solve the "physical" problem: how you're going to decide which digits are significant. The fact is, "precision" has no physical meaning unless you know or guess the absolute error.


Now, there are 2 fundamental ways to determine each digit (and thus, their number):

  • get and interpret the meaningful parts
  • calculate mathematically

The 2nd way can't detect trailing zeros in the fractional part (which may or may not be significant depending on your answer to the "physical" problem), so I won't cover it unless requested.

For the first one, in Decimal's interface I see 2 basic methods for getting the parts: ToString() (a few overloads) and GetBits().

  1. ToString(String, IFormatProvider) is actually a reliable way since you can define the format exactly.

    • E.g. use the F specifier and pass a culture-neutral NumberFormatInfo in which you have manually set all the fields that affect this particular format.
      • regarding the NumberDecimalDigits field: a test shows that it is the minimal number, so set it to 0 (the docs are unclear on this), and trailing zeros are printed all right if there are any
  2. The semantics of the GetBits() result are documented clearly in its MSDN article (so laments like "it's Greek to me" won't do ;) ). Decompiling with ILSpy shows that it's actually a tuple of the object's raw data fields:

    public static int[] GetBits(decimal d)
    {
        return new int[]
        {
            d.lo,
            d.mid,
            d.hi,
            d.flags
        };
    }
    

    And their semantics are:

    • |high|mid|low| - binary digits (96 bits), interpreted as an integer (=aligned to the right)
    • flags:
      • bits 16 to 23 - "the power of 10 to divide the integer number" (=number of fractional decimal digits)
        • (thus (flags>>16)&0xFF is the raw value of this field)
      • bit 31 - sign (doesn't concern us)

    As you can see, this is very similar to IEEE 754 floats.

    So, the number of fractional digits is the exponent value. The number of total digits is the number of digits in the decimal representation of the 96-bit integer.
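To make that layout concrete, here is a small sketch of mine decoding the raw fields by hand (it assumes a reference to System.Numerics for BigInteger, which is not part of the answer above):

decimal x = 12345.67890M;
int[] bits = decimal.GetBits(x);

int scale = (bits[3] >> 16) & 0xFF;         // bits 16 to 23 of flags
bool negative = (bits[3] & (1 << 31)) != 0; // bit 31 of flags

// |high|mid|low| interpreted as one 96-bit unsigned integer:
System.Numerics.BigInteger mantissa =
    ((System.Numerics.BigInteger)(uint)bits[2] << 64) |
    ((System.Numerics.BigInteger)(uint)bits[1] << 32) |
    (uint)bits[0];

Console.WriteLine(scale);                      // 5 fractional digits
Console.WriteLine(mantissa);                   // 1234567890
Console.WriteLine(mantissa.ToString().Length); // 10 digits in total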

answered Nov 14 '22 by ivan_pozdeev