Suppose that we have a System.Decimal number.
For illustration, let's take one whose ToString() representation is as follows:
d.ToString() = "123.4500"
The following can be said about this Decimal:

Precision: 7
Scale: 4
EffectivePrecision: 5
EffectiveScale: 2

For our purposes here, precision is the total number of digits in the number, and scale is the number of digits to the right of the decimal point. The effective variants are similar but ignore any trailing zeros that occur in the fractional part. (In other words, these parameters are defined like SQL decimals, plus additional parameters to account for the System.Decimal concept of trailing zeros in the fractional part.)
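To make the trailing-zero behaviour concrete, here is a quick illustration of my own (not part of the original question): two decimals can be numerically equal while carrying different scales, which is exactly what the effective variants strip away.

using System;

decimal a = 123.45m;
decimal b = 123.4500m;
Console.WriteLine(a == b);       // True: the values are numerically equal
Console.WriteLine(a.ToString()); // "123.45"
Console.WriteLine(b.ToString()); // "123.4500" - the trailing zeros survive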
Given an arbitrary System.Decimal, how can I compute all four of these parameters efficiently and without converting to a String and examining the String? The solution probably requires Decimal.GetBits.
Some more examples:
Example   Precision  Scale  EffectivePrecision  EffectiveScale
0         1 (?)      0      1 (?)               0
0.0       2 (?)      1      1 (?)               0
12.45     4          2      4                   2
12.4500   6          4      4                   2
770       3          0      3                   0
(?) Alternatively interpreting these precisions as zero would be fine.
Yes, you'd need to use Decimal.GetBits. Unfortunately, you then have to work with a 96-bit integer, and there's no simple integer type in .NET which copes with 96 bits. On the other hand, it's possible that you could use Decimal itself...
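For reference, here's a minimal sketch of my own (not part of the original answer) showing what GetBits hands back for the example value:

using System;

int[] parts = decimal.GetBits(123.4500m);
// parts[0], parts[1], parts[2] hold the low, middle and high 32 bits
// of the 96-bit unsigned mantissa (1234500 here, so only parts[0] is non-zero).
// parts[3] packs the bookkeeping: bits 16-23 hold the scale (4 here),
// bit 31 is the sign bit, and the remaining bits are zero.
int scale = (parts[3] >> 16) & 0xFF; // 4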
Here's some code which produces the same numbers as your examples. Hope you find it useful :)
using System;

public class Test
{
    static public void Main(string[] x)
    {
        ShowInfo(123.4500m);
        ShowInfo(0m);
        ShowInfo(0.0m);
        ShowInfo(12.45m);
        ShowInfo(12.4500m);
        ShowInfo(770m);
    }

    static void ShowInfo(decimal dec)
    {
        // We want the integer parts as uint
        // C# doesn't permit int[] to uint[] conversion,
        // but .NET does. This is somewhat evil...
        uint[] bits = (uint[])(object)decimal.GetBits(dec);

        decimal mantissa =
            (bits[2] * 4294967296m * 4294967296m) +
            (bits[1] * 4294967296m) +
            bits[0];

        uint scale = (bits[3] >> 16) & 31;

        // Precision: number of times we can divide
        // by 10 before we get to 0
        uint precision = 0;
        if (dec != 0m)
        {
            for (decimal tmp = mantissa; tmp >= 1; tmp /= 10)
            {
                precision++;
            }
        }
        else
        {
            // Handle zero differently. It's odd.
            precision = scale + 1;
        }

        uint trailingZeros = 0;
        for (decimal tmp = mantissa;
             tmp % 10m == 0 && trailingZeros < scale;
             tmp /= 10)
        {
            trailingZeros++;
        }

        Console.WriteLine("Example: {0}", dec);
        Console.WriteLine("Precision: {0}", precision);
        Console.WriteLine("Scale: {0}", scale);
        Console.WriteLine("EffectivePrecision: {0}", precision - trailingZeros);
        Console.WriteLine("EffectiveScale: {0}", scale - trailingZeros);
        Console.WriteLine();
    }
}
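For the first call, ShowInfo(123.4500m), the output should be:

Example: 123.4500
Precision: 7
Scale: 4
EffectivePrecision: 5
EffectiveScale: 2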
I came across this article when I needed to validate precision and scale before writing a decimal value to a database. I had actually come up with a different way to achieve this using System.Data.SqlTypes.SqlDecimal, which turned out to be faster than the other two methods discussed here.
static DecimalInfo SQLInfo(decimal dec)
{
    System.Data.SqlTypes.SqlDecimal x;
    x = new System.Data.SqlTypes.SqlDecimal(dec);
    return new DecimalInfo((int)x.Precision, (int)x.Scale, (int)0);
}
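If you just want to sanity-check what SqlDecimal reports, its Precision and Scale properties can be read directly. A quick sketch of my own (note this approach doesn't account for trailing zeros, hence the hard-coded 0 above):

using System;
using System.Data.SqlTypes;

SqlDecimal x = new SqlDecimal(123.4500m);
Console.WriteLine(x.Precision); // should print 7 for the running example
Console.WriteLine(x.Scale);     // should print 4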