SQL Server SUM and UNION losing decimal precision

This is a question related to both Entity Framework and SQL Server (2016).

I've noticed some strange behaviour in SQL Server when summing a decimal column with lower precision and scale, and unioning that with another decimal column of higher precision and scale. The resulting values all have the lower precision and scale.

This doesn't seem correct, as this page suggests that when UNIONing, the precision will be adjusted to be the widest necessary to accommodate both inputs.

The problem can be easily seen with this example:

create table low_scale (val decimal(13,2))
create table high_scale (val decimal(19,8))

insert into low_scale values (10.00), (2.23)
insert into high_scale values (0.0000002), (2.02302023)

-- Query 1: Works fine - Result is at the widest precision required to accommodate the two types:
select * from low_scale
union all 
select * from high_scale

-- Query 2: Strange - result is rounded to two decimal places:
select sum(val) from low_scale
union all 
select sum(val) from high_scale

As you'd expect, the result from Query 1 is:

10.00000000
2.23000000
0.00000020
2.02302023

However, this is the result from Query 2:

12.23
2.02

It appears I can work around this by first casting the lower-precision column to a higher precision, like so:

-- Result is at the expected 8 decimal places:
select sum(CAST(val as decimal(19,8))) from low_scale
union all 
select sum(val) from high_scale

However, I'm using Entity Framework 6, and don't have much control over what SQL gets generated. Is there any way to force Entity Framework to cast to a higher precision when doing the SUM, or some other way to ensure the correct behavior?

Edit: I'm not sure why this is being flagged as a duplicate of a question about floating points. This has nothing to do with floating points - it is using decimal - a fixed-point data type.

1 Answer

Expanding on what Martin Smith mentioned in a comment, this is what is happening:

From the SQL Server documentation on SUM, the result of SUM over a decimal(p, s) expression is always decimal(38, s).

Thus the first half of the UNION produces a decimal(38, 2) and the second half a decimal(38, 8).
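
A quick way to confirm this is to inspect the type metadata of each aggregate with sql_variant_property (just a sketch, assuming you can run ad-hoc queries against the example tables above):

-- Check the type that SUM actually returns (expected: decimal(38, 2) and decimal(38, 8))
select
    sql_variant_property(sum(val), 'BaseType')  as base_type,
    sql_variant_property(sum(val), 'Precision') as [precision],
    sql_variant_property(sum(val), 'Scale')     as [scale]
from low_scale

select
    sql_variant_property(sum(val), 'BaseType')  as base_type,
    sql_variant_property(sum(val), 'Precision') as [precision],
    sql_variant_property(sum(val), 'Scale')     as [scale]
from high_scale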

This page states that when UNIONing, the resulting precision is max(s1, s2) + max(p1 - s1, p2 - s2), and the resulting scale is max(s1, s2).

So, putting s1 = 2, s2 = 8 and p1 = p2 = 38 into that, we get a precision of 8 + 36 = 44 and a scale of 8.

However, that same page states: "The result precision and scale have an absolute maximum of 38. When a result precision is greater than 38, the corresponding scale is reduced to prevent the integral part of a result from being truncated."

So SQL Server reduces the scale by 6 (bringing it from 8 down to 2) to bring the total precision back down to 38.
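
You can see the clamped type directly by wrapping Query 2 in a derived table and inspecting its column (again just a sketch against the example tables; the alias totals is arbitrary):

-- Both rows of the UNION report the clamped type (expected: precision 38, scale 2)
select
    sql_variant_property(val, 'BaseType')  as base_type,
    sql_variant_property(val, 'Precision') as [precision],
    sql_variant_property(val, 'Scale')     as [scale]
from (
    select sum(val) as val from low_scale
    union all
    select sum(val) from high_scale
) as totals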

This explains the behaviour, but doesn't offer a solution. That said, there are options:

  1. Issue the two halves of the UNION as separate queries. At that point we'll be dealing with .NET decimal values, and we can .Concat() them without having to worry about the SQL behaviour.
  2. Keep all decimal columns at the same precision and scale, and always round appropriately when storing to the DB (see the sketch after this list).
  3. (nasty) Cast the result of Sum() to a float or double before calling .Concat() or .Union(). Obviously this introduces a bunch of other issues, but in some cases it may be viable.
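
On the database side, option 2 amounts to widening the narrower column so both tables use the same decimal type. A minimal sketch, assuming the column can simply be altered in place, that the existing data fits the wider type, and that the EF model is updated to the matching precision:

-- Widen the low-precision column so both tables store decimal(19, 8)
alter table low_scale alter column val decimal(19,8)

-- Both summed halves are now decimal(38, 8), so the UNION keeps all 8 decimal places
select sum(val) from low_scale
union all
select sum(val) from high_scale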

In my case, I'll likely opt for option 2, as there will certainly be other places where these two tables are summed and unioned, and I wouldn't expect other developers to realise they'll be affected by this SQL Server behaviour (since, from EF's point of view, we're dealing with .NET decimals).
