C# SqlDecimal flipping sign in multiplication of small numbers

It seems I'm running into more woes with 'my most favorite datatype' SqlDecimal. I'm wondering if this should be considered a bug or not.

When I multiply two small numbers in SQL I get the expected result. When I run the same numbers through a SQLCLR function, the results are, well, surprising.

C# code:

using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

namespace TestMultiplySQLDecimal
{
    public static class Multiplier
    {
        [SqlFunction(DataAccess = DataAccessKind.None, IsDeterministic = true, IsPrecise = true)]
        public static SqlDecimal Multiply(SqlDecimal a, SqlDecimal b)
        {
            if (a.IsNull || b.IsNull) return SqlDecimal.Null;
            return a * b;
        }
    }
}

SQL Code:

USE tempdb
GO
IF DB_ID('test') IS NOT NULL DROP DATABASE test
GO
CREATE DATABASE test
GO
USE test
GO

CREATE ASSEMBLY TestMultiplySQLDecimal 
FROM 'C:\Users\tralalalaa\Documents\visual studio 2015\Projects\TestMultiplySQLDecimal\TestMultiplySQLDecimal\bin\Release\TestMultiplySQLDecimal.dll'
WITH PERMISSION_SET = SAFE
GO

CREATE FUNCTION dbo.fn_multiply(@a decimal(38,8), @b decimal(18,8)) 
RETURNS decimal(38,8)
EXTERNAL NAME TestMultiplySQLDecimal.[TestMultiplySQLDecimal.Multiplier].Multiply
GO
DECLARE @a decimal(38, 8),
        @b decimal(18, 8),
        @c decimal(38, 8),
        @f decimal(38, 8)

SELECT @a = -0.00000450,
       @b = 0.193,
       @c = NULL,
       @f = NULL

SELECT @c = @a * @b,
       @f = dbo.fn_multiply(@a, @b)

SELECT multiply = null, c = @c, f = @f

The outcome of this is: c = -0.00000100, f = +0.00000100

I know the 'absolute' difference is 'minimal', and I've "played down" bigger errors before by blaming them on "rounding differences"... But it's going to be hard to explain to clients that a negative times a positive results in a positive. And after all, T-SQL supports it fine...

I could try to work around it by using decimal(28,8) instead of decimal(38,8), but then I'd run into other (totally unrelated) issues =/


The following console application exhibits the same problem, without having to get SQL Server/SQLCLR involved:

using System;
using System.Data.SqlTypes;

namespace PlayAreaCSCon
{
    class Program
    {
        static void Main(string[] args)
        {
            var dec1 = new SqlDecimal(-0.00000450d);
            var dec2 = new SqlDecimal(0.193d);
            dec1 = SqlDecimal.ConvertToPrecScale(dec1, 38, 8);
            dec2 = SqlDecimal.ConvertToPrecScale(dec2, 18, 8);
            Console.WriteLine(dec1 * dec2);
            Console.ReadLine();
        }
    }
}

Prints 0.000001 (note the missing minus sign).

asked Jan 30 '17 by deroby

2 Answers

I believe the bug is located at around line 1550 of SqlDecimal's source:

ret = new SqlDecimal(rgulRes, (byte)culRes, (byte)ResPrec, 
                     (byte)ActualScale, fResPositive);

if (ret.FZero ())
     ret.SetPositive();

ret.AssertValid();

ret.AdjustScale(lScaleAdjust, true);

return ret;

It first constructs a new decimal using the final scale parameter. It next checks whether the result is "zero", based on the passed-in constructor parameters.

Then, after asserting everything is valid, it performs a scale adjustment.

At the time that the FZero check is performed, the result is something like -0.0000008685. And we know that the final scale will be 6 because we're at the limits on resulting scale and precision. Well, the first 6 digits are all zero.
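
The scale of 6 can be confirmed directly, since SqlDecimal exposes the Scale of a multiplication result. A minimal sketch using the question's values (the class name is just for illustration); by the type rules, precision 38 + 18 + 1 = 57 is capped at 38, and scale 8 + 8 = 16 is reduced to fit, but never below 6:

using System;
using System.Data.SqlTypes;

class ScaleCheck
{
    static void Main()
    {
        // Same types as the question: DECIMAL(38, 8) and DECIMAL(18, 8).
        var a = SqlDecimal.ConvertToPrecScale(new SqlDecimal(-0.00000450m), 38, 8);
        var b = SqlDecimal.ConvertToPrecScale(new SqlDecimal(0.193m), 18, 8);

        var r = a * b;

        Console.WriteLine("Scale: {0}", r.Scale);  // 6
        Console.WriteLine(r);                      // 0.000001 <- sign flipped
    }
}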

It's only after that, when the scale is being adjusted, that it takes rounding into consideration and moves a 1 into the final decimal position.
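
Notably, the rounding step by itself handles the sign correctly, which supports the ordering being the culprit. A minimal sketch (class name is illustrative) feeding the exact product through the same kind of scale adjustment that operator* performs last:

using System;
using System.Data.SqlTypes;

class AdjustScaleDemo
{
    static void Main()
    {
        // The exact product of -0.00000450 and 0.193, at scale 10.
        var exact = new SqlDecimal(-0.0000008685m);

        // Remove 4 digits of scale with rounding, bringing it to scale 6.
        var adjusted = SqlDecimal.AdjustScale(exact, -4, true);

        Console.WriteLine(adjusted);  // -0.000001, sign preserved
    }
}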

It's a bug. Unfortunately, the source code for the SQL Server native implementation of decimal isn't publicly available, so we cannot compare it to the managed implementation of SqlDecimal to see how similar they are and how the original avoids the same bug.

answered by Damien_The_Unbeliever


While the difference in behavior between the T-SQL and .NET implementations is "troubling" and does point to a bug, and while @Damien_The_Unbeliever's fine investigative work might very well identify the cause (hard to verify at the moment, as there is a lot of code in the SqlDecimal implementation, some of which uses imprecise double arithmetic to get around .NET not supporting more than 28 digits), there is quite possibly a larger issue being ignored here: both answers (i.e. c = -0.00000100 and f = +0.00000100) are wrong! Perhaps we should not be so hasty to determine the winner between "the sky is plaid" and "the sky is polka-dotted" ;-)

This is a case where we probably need to be a bit more pragmatic about the goal, more understanding of the limitations of Decimal operations, and wider in our scope of testing.

To begin with, while it might seem like a good idea to reserve the maximum datatype space for an unknown set of inputs, using DECIMAL(38, y) is similar to using NVARCHAR(MAX) for all strings. Yes, it can usually fit anything you throw at it, but there are consequences. And there is an additional consequence in Decimal operations, given how the resulting precision and scale are calculated, especially when giving so little room to the "scale" (i.e. the 8 digits) and yet multiplying very small numbers.

Meaning: if you aren't going to be using the full range of 30 digits to the left of the decimal point (i.e. DECIMAL(38, 8)), then don't specify DECIMAL(38, 8). For the input parameters, just specify the maximum size allowed for each of those values. Given that both inputs are below 1, using something like DECIMAL(20, 18) (or even DECIMAL(18, 8)) would not only be quite flexible, but would produce the correct result. OR, if you really do need to allow for large values, then give more space to the digits to the right of the decimal point (i.e. the "scale") by specifying something like DECIMAL(38, 28), which gives 10 digits to the left of the decimal point and 28 to the right.
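
The consequence is easy to see in T-SQL itself, since the result type of a multiplication follows the documented rules: precision = p1 + p2 + 1 and scale = s1 + s2, then reduced to fit within 38 digits but never below a scale of 6. A quick check using the question's declarations:

DECLARE @a DECIMAL(38, 8),
        @b DECIMAL(18, 8);

SELECT @a = -0.00000450,
       @b = 0.193;

-- precision = 38 + 18 + 1 = 57 -> capped at 38
-- scale     =  8 +  8     = 16 -> would have to drop below zero to fit,
--                                 so it is floored at the minimum of 6
SELECT SQL_VARIANT_PROPERTY(@a * @b, 'Precision') AS [Precision],  -- 38
       SQL_VARIANT_PROPERTY(@a * @b, 'Scale')     AS [Scale];      -- 6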

Original DECIMAL(38, 8) for everything

DECLARE @a DECIMAL(38, 8),
        @b DECIMAL(38, 8),
        @c DECIMAL(38, 8);

SELECT @a = -0.00000450,
       @b = 0.193;

SELECT @c = @a * @b;

SELECT @a * @b AS [RawCalculation], @c AS [c];

Returns:

RawCalculation    c
-0.000001         -0.00000100

Using DECIMAL(18, 8)

DECLARE @a DECIMAL(18, 8),
        @b DECIMAL(18, 8),
        @c DECIMAL(38, 18),
        @d DECIMAL(20, 18),
        @e DECIMAL(38, 8);

SELECT @a = -0.00000450,
       @b = 0.193;

SELECT @c = @a * @b,
       @d = @a * @b,
       @e = @a * @b;

SELECT @a * @b AS [RawCalculation], @c AS [c], @d AS [d], @e AS [e];

Returns:

RawCalculation          c                         d                         e
-0.0000008685000000     -0.000000868500000000     -0.000000868500000000     -0.00000087

Using DECIMAL(38, 28)

DECLARE @a DECIMAL(38, 28),
        @b DECIMAL(38, 28),
        @c DECIMAL(38, 18),
        @d DECIMAL(20, 18),
        @e DECIMAL(38, 8);

SELECT @a = -0.00000450,
       @b = 0.193;

SELECT @c = @a * @b,
       @d = @a * @b,
       @e = @a * @b;

SELECT @a * @b AS [RawCalculation], @c AS [c], @d AS [d], @e AS [e];

Returns:

RawCalculation          c                         d                            e
-0.00000086850000000    -0.000000868500000000     -0.000000868500000000        -0.00000087

.NET sample code

The following code is based on the sample code that @Damien added to the question. I expanded it to run additional tests showing how variations in precision and scale affect the calculation, and to output the relevant properties at each step. Please note that a decimal literal in .NET takes the M or m suffix, not d (though it made no difference in this test): decimal (C# Reference)

using System;
using System.Data.SqlTypes;

namespace SqlDecimalMultiplication
{
    class Program
    {
        private static void DisplayStuffs(SqlDecimal Dec1, SqlDecimal Dec2)
        {
            Console.WriteLine("1 ~ {0}", Dec1.Value);
            Console.WriteLine("1 ~ Precision: {0}; Scale: {1}; IsPositive: {2}", Dec1.Precision, Dec1.Scale, Dec1.IsPositive);
            Console.WriteLine("2 ~ {0}", Dec2.Value);
            Console.WriteLine("2 ~ Precision: {0}; Scale: {1}; IsPositive: {2}", Dec2.Precision, Dec2.Scale, Dec2.IsPositive);

            Console.Write("\nRESULT:    ");
            Console.ForegroundColor = ConsoleColor.White;
            Console.WriteLine(Dec1 * Dec2);
            Console.ResetColor();

            return;
        }

        static void Main(string[] args)
        {
            var dec1 = new SqlDecimal(-0.00000450m);
            var dec2 = new SqlDecimal(0.193m);

            Console.WriteLine("=======================\n\nINITIAL:");
            DisplayStuffs(dec1, dec2);


            dec1 = SqlDecimal.ConvertToPrecScale(dec1, 38, 8);
            dec2 = SqlDecimal.ConvertToPrecScale(dec2, 18, 8);

            Console.WriteLine("=======================\n\nAFTER (38, 8) & (18, 8):");
            DisplayStuffs(dec1, dec2);


            dec1 = SqlDecimal.ConvertToPrecScale(dec1, 18, 8);

            Console.WriteLine("=======================\n\nAFTER (18, 8) & (18, 8):");
            DisplayStuffs(dec1, dec2);


            dec1 = SqlDecimal.ConvertToPrecScale(dec1, 38, 28);
            dec2 = SqlDecimal.ConvertToPrecScale(dec2, 38, 28);

            Console.WriteLine("=======================\n\nAFTER (38, 28) & (38, 28):");
            DisplayStuffs(dec1, dec2);

            Console.WriteLine("=======================");
            //Console.ReadLine();
        }
    }
}

Returns:

=======================

INITIAL:
1 ~ -0.00000450
1 ~ Precision: 8; Scale: 8; IsPositive: False
2 ~ 0.193
2 ~ Precision: 3; Scale: 3; IsPositive: True

RESULT:    -0.00000086850
=======================

AFTER (38, 8) & (18, 8):
1 ~ -0.00000450
1 ~ Precision: 38; Scale: 8; IsPositive: False
2 ~ 0.19300000
2 ~ Precision: 18; Scale: 8; IsPositive: True

RESULT:    0.000001
=======================

AFTER (18, 8) & (18, 8):
1 ~ -0.00000450
1 ~ Precision: 18; Scale: 8; IsPositive: False
2 ~ 0.19300000
2 ~ Precision: 18; Scale: 8; IsPositive: True

RESULT:    -0.0000008685000000
=======================

AFTER (38, 28) & (38, 28):
1 ~ -0.0000045000000000000000000000
1 ~ Precision: 38; Scale: 28; IsPositive: False
2 ~ 0.1930000000000000000000000000
2 ~ Precision: 38; Scale: 28; IsPositive: True

RESULT:    -0.00000086850000000
=======================

CONCLUSION

While there probably is a bug in SqlDecimal, you should likely not encounter it if you specify the Precision and Scale of the input parameters "properly". Unless, of course, you truly do need 38 digits for incoming values, but most use cases never do.

ALSO, the reason for highlighting "input parameters" in the paragraph directly above is to indicate that the return value should naturally have a larger precision (and scale) so as to accommodate the increase in precision and/or scale resulting from certain operations. So there is nothing wrong with keeping DECIMAL(38, 28) or DECIMAL(38, 18) as the datatype of the return value.
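
For example, the function from the question could be redeclared with narrower inputs and a wider return type (a sketch; size the parameters to whatever your actual data requires):

CREATE FUNCTION dbo.fn_multiply(@a DECIMAL(18, 8), @b DECIMAL(18, 8))
RETURNS DECIMAL(38, 18)
EXTERNAL NAME TestMultiplySQLDecimal.[TestMultiplySQLDecimal.Multiplier].Multiply
GO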


Related Note:

For SQLCLR UDFs (i.e. scalar functions), do not use this pattern IF it covers all input parameters:

if (a.IsNull || b.IsNull) return SqlDecimal.Null;

IF the idea is to return a NULL if any of the input params is NULL, then you should use the following option in the CREATE FUNCTION statement:

WITH RETURNS NULL ON NULL INPUT

as that will avoid calling the .NET code entirely!
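
Applied to the sketch shown just above, the full statement would look like this (again assuming the narrower parameter types):

CREATE FUNCTION dbo.fn_multiply(@a DECIMAL(18, 8), @b DECIMAL(18, 8))
RETURNS DECIMAL(38, 18)
WITH RETURNS NULL ON NULL INPUT
EXTERNAL NAME TestMultiplySQLDecimal.[TestMultiplySQLDecimal.Multiplier].Multiply
GO

With this in place, SQL Server never invokes the .NET code when either argument is NULL, so the IsNull check in the C# method becomes redundant (though harmless).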

answered by Solomon Rutzky