
Determine the decimal precision of an input number

We have an interesting problem where we need to determine the decimal precision of a user's input (textbox). Essentially we need to know the number of decimal places entered and then return a precision number; this is best illustrated with examples:

4500 entered will yield a result of 1
4500.1 entered will yield a result of 0.1
4500.00 entered will yield a result of 0.01
4500.450 entered will yield a result of 0.001

We are thinking of working with the string, finding the decimal separator and then calculating the result. Just wondering if there is an easier solution to this.

Quinten asked Jul 19 '10 14:07



1 Answer

I think you should just do what you suggested - use the position of the decimal point. An obvious drawback is that you have to handle internationalization yourself.

// input holds the user's text, e.g. "4500.00" (NumberFormatInfo lives in System.Globalization).
var decimalSeparator = NumberFormatInfo.CurrentInfo.CurrencyDecimalSeparator;
var position = input.IndexOf(decimalSeparator);
var precision = (position == -1) ? 0 : input.Length - position - 1;

// This may be quite imprecise.
var result = Math.Pow(0.1, precision);

There is another thing you could try - the Decimal type stores an internal precision value. Therefore you could use Decimal.TryParse() and inspect the returned value. Maybe the parsing algorithm maintains the precision of the input.
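
For instance, a minimal sketch of that idea (assuming a using System.Globalization directive and invariant-culture input; the update below confirms that the scale survives parsing):

if (Decimal.TryParse("4500.00", NumberStyles.Number, CultureInfo.InvariantCulture, out var value))
{
    // The parser keeps the scale (count of decimal places), including trailing zeros.
    var scale = (Decimal.GetBits(value)[3] >> 16) & 0xFF;   // 2 for "4500.00"
    var step = Math.Pow(0.1, scale);                        // ≈ 0.01
}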

Finally, I would suggest not trying anything based on floating-point numbers. Just parsing the input will remove any information about trailing zeros, so you would have to add an artificial non-zero digit to preserve them, or do similar tricks, and you might run into precision issues. Determining the precision from a floating-point number is not simple either; I see some ugly math, or a loop multiplying by ten on every iteration until there is no longer a fractional part - and that loop comes with new precision issues of its own...
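
To make that pitfall concrete, here is a sketch of such a loop (hypothetical helper, shown only to illustrate the problem): by the time you have a double, the trailing zeros are already gone, and rounding noise can throw the count off.

// Counts decimal places by multiplying by ten until the fractional part is gone.
// double.Parse("4500.00") has already collapsed to 4500, so trailing zeros are lost,
// and rounding errors can make the loop run longer than expected.
static int CountDecimalsNaively(double value)
{
    var precision = 0;
    while (value != Math.Floor(value) && precision < 15)
    {
        value *= 10;
        precision++;
    }
    return precision;
}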

UPDATE

Parsing into a decimal works. See Decimal.GetBits() for details.

var input = "123.4560";
var number = Decimal.Parse(input);

// Will be 4.
var precision = (Decimal.GetBits(number)[3] >> 16) & 0x000000FF;

From here, using Math.Pow(0.1, precision) is straightforward.
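
Putting the pieces together against the question's examples (a sketch, assuming a using System.Globalization directive and invariant-culture input):

foreach (var text in new[] { "4500", "4500.1", "4500.00", "4500.450" })
{
    var number = Decimal.Parse(text, CultureInfo.InvariantCulture);
    var scale = (Decimal.GetBits(number)[3] >> 16) & 0xFF;

    // scale 0, 1, 2, 3 -> roughly 1, 0.1, 0.01, 0.001 (Math.Pow may add double rounding noise).
    Console.WriteLine($"{text} -> scale {scale}, result {Math.Pow(0.1, scale)}");
}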

UPDATE 2

Using decimal.GetBits() will allocate an int[] array. If you want to avoid the allocation, you can use the following helper method, which uses an explicit-layout struct to read the scale directly out of the decimal value:

// Requires using System.Runtime.InteropServices for StructLayout/FieldOffset.
static int GetScale(decimal d)
{
    return new DecimalScale(d).Scale;
}

[StructLayout(LayoutKind.Explicit)]
struct DecimalScale
{
    public DecimalScale(decimal value)
    {
        this = default;
        this.d = value;
    }

    // The int overlays the first four bytes of the decimal, i.e. its flags word.
    [FieldOffset(0)]
    decimal d;

    [FieldOffset(0)]
    int flags;

    // The scale (number of decimal places) lives in bits 16-23 of the flags word.
    public int Scale => (flags >> 16) & 0xff;
}
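
As a quick check (a sketch, assuming the helper above is in scope), it returns the same scale as the GetBits example:

var scale = GetScale(123.4560m);   // 4, same as Decimal.GetBits above
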
Daniel Brückner answered Oct 10 '22 14:10