I'm writing a routine that validates data before inserting it into a database, and one of the steps is to check whether numeric values fit the precision and scale of a Numeric(x,y) SQL Server type.
I already have the precision and scale from SQL Server, but what's the most efficient way in C# to get the precision and scale of a CLR value, or at least to test whether it fits a given constraint?
At the moment, I'm converting the CLR value to a string, then looking for the location of the decimal point with .IndexOf(). Is there a faster way?
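For reference, what I have now looks roughly like this (a simplified sketch of the idea rather than the exact code; the helper name is just for illustration):

private static bool FitsByStringInspection(decimal value, int precision, int scale)
{
    // Format without sign so only digits and the decimal point remain.
    var text = Math.Abs(value).ToString(System.Globalization.CultureInfo.InvariantCulture);

    var pointIndex = text.IndexOf('.');
    var digitsRightOfPoint = pointIndex < 0 ? 0 : text.Length - pointIndex - 1;
    var digitsLeftOfPoint = pointIndex < 0 ? text.Length : pointIndex;

    return digitsLeftOfPoint <= precision - scale && digitsRightOfPoint <= scale;
}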
Precision is the total number of digits in a number, and scale is the number of digits to the right of the decimal point. For example, 123.45 has a precision of 5 and a scale of 2, 1234567.89 has a precision of 9, and 123456.789 has a scale of 3. The maximum value that fits in decimal(5,2) is therefore 999.99. In SQL Server, the maximum precision of the numeric and decimal data types is 38.
For float, T-SQL lets you specify a mantissa of 24 or 53 bits, which gives a precision of roughly 7 or 15 digits respectively. As a general rule, you can't specify the number of digits after the decimal point for a floating-point number.
System.Data.SqlTypes.SqlDecimal.ConvertToPrecScale(new SqlDecimal(1234.56789), 8, 2)
gives 1234.57. It rounds away extra digits after the decimal place, but throws an exception rather than dropping digits before the decimal place (e.g. ConvertToPrecScale(new SqlDecimal(12344234), 5, 2) throws).
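If you do go this route, you can turn the exception into a simple yes/no check by wrapping the call (a sketch of one way to do it; it catches SqlTypeException, the base class of the truncation error SqlDecimal raises when digits to the left of the decimal point would be lost):

using System.Data.SqlTypes;

private static bool FitsPrecisionAndScale(decimal value, int precision, int scale)
{
    try
    {
        // Succeeds (rounding away extra decimal places) when the value fits decimal(precision, scale).
        SqlDecimal.ConvertToPrecScale(new SqlDecimal(value), precision, scale);
        return true;
    }
    catch (SqlTypeException)
    {
        // Thrown when digits to the left of the decimal point would have to be dropped.
        return false;
    }
}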
If you want to avoid triggering an exception, you could use the following method to determine whether a value fits the precision and scale constraints.

using System.Data.SqlTypes;

private static bool IsValid(decimal value, byte precision, byte scale)
{
    // Let SqlDecimal work out the actual precision and scale of the value.
    var sqlDecimal = new SqlDecimal(value);

    // The value fits if it needs no more digits on either side of the
    // decimal point than the column allows.
    var actualDigitsToLeftOfDecimal = sqlDecimal.Precision - sqlDecimal.Scale;
    var allowedDigitsToLeftOfDecimal = precision - scale;

    return actualDigitsToLeftOfDecimal <= allowedDigitsToLeftOfDecimal
        && sqlDecimal.Scale <= scale;
}
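For example, assuming a decimal(5,2) column:

IsValid(123.45m, 5, 2);   // true  - 3 digits left of the point, 2 right
IsValid(1234.5m, 5, 2);   // false - 4 digits left of the point, only 3 allowed
IsValid(1.234m, 5, 2);    // false - scale 3 exceeds the allowed 2

Note that the last case is rejected even though SQL Server would quietly round 1.234 to 1.23 on insert; the maths-based approach below allows for that rounding.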
Here's a maths-based approach.

private static bool IsValidSqlDecimal(decimal value, int precision, int scale)
{
    // The smallest absolute value that would overflow decimal(precision, scale)
    // once SQL Server has rounded it to the given scale.
    var minOverflowValue = (decimal)Math.Pow(10, precision - scale) - (decimal)Math.Pow(10, -scale) / 2;
    return Math.Abs(value) < minOverflowValue;
}
This takes into account how SQL Server rounds values, and it prevents overflow errors even when the value has more decimal places than the declared scale. For example:
DECLARE @value decimal(10,2)
SET @value = 99999999.99499 -- Works
SET @value = 99999999.995 -- Error
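The same check from C# (values chosen to mirror the T-SQL above):

IsValidSqlDecimal(99999999.99499m, 10, 2);  // true  - rounds to 99999999.99, which fits
IsValidSqlDecimal(99999999.995m, 10, 2);    // false - would round to 100000000.00 and overflow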