I'm working on SQL Server 2008 with the following table:
id     user      program   seconds
----------------------------------
9999   'user01'  'pr01'    5
Column 'seconds' is defined as a decimal(14,0) and the only row in the table is the one shown.
So I try some queries:
Q1:
select seconds*0.95 from myTable
union all
select seconds from myTable
It gives the result I expect:
4.75
5.00
Q2:
select sum(seconds)*0.95 from myTable
union all
select sum(seconds) from myTable
I was expecting the same result as in Q1 but the result I get is:
5
5
Q3:
select sum(seconds)*0.95 from myTable
union all
select sum(seconds)*1.00 from myTable
In this case I get the results I expect:
4.75
5.00
So, I would like to know why Q2 is not giving the expected results. I have tried the same example with 'seconds' defined as a decimal(2,0) and it works fine, so I guess it has something to do with the precision of the decimal, but in that case I cannot figure out why Q1 and Q3 work while Q2 does not.
The result of sum(seconds) is numeric(38,0), which makes sense: you could be summing any number of numeric(14,0) rows, and the wider type avoids overflowing a smaller one. Multiplying a numeric(38,0) by 0.95 (a numeric(2,2) literal) in the top part gives a result of numeric(38,2), per SQL Server's documented rules for precision and scale: multiplication yields precision p1 + p2 + 1 and scale s1 + s2, and when the precision exceeds the maximum of 38 it is capped at 38, with the scale reduced only as far as needed (but not below min(scale, 6)). Here that is precision 38 + 2 + 1 = 41 and scale 0 + 2 = 2, which comes out as numeric(38,2).
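That precision/scale arithmetic can be sketched in a few lines of Python. This is only an illustration of SQL Server's documented rules for decimal multiplication, not code from SQL Server itself, and the function name multiply_type is made up for the example:

```python
# Sketch of SQL Server's documented precision/scale rules for
# multiplying two decimals: precision = p1 + p2 + 1, scale = s1 + s2,
# with precision capped at 38 and the scale reduced as needed
# (but not below min(scale, 6)).
MAX_PRECISION = 38

def multiply_type(p1, s1, p2, s2):
    """Return (precision, scale) of numeric(p1,s1) * numeric(p2,s2)."""
    p = p1 + p2 + 1
    s = s1 + s2
    if p > MAX_PRECISION:
        # Cap precision; shrink scale to make room for the integral part.
        s = max(s - (p - MAX_PRECISION), min(s, 6))
        p = MAX_PRECISION
    return p, s

# sum(seconds) is numeric(38,0); the literal 0.95 is numeric(2,2)
print(multiply_type(38, 0, 2, 2))   # -> (38, 2)

# In Q1, seconds itself is numeric(14,0), so no capping is needed
print(multiply_type(14, 0, 2, 2))   # -> (17, 2)
```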
But the bottom part of the union is numeric(38,0), so the result of the union as a whole is cast to numeric(38,0). That again makes sense: numeric(38,2) can always be cast to numeric(38,0) without error (albeit with a loss of the digits after the decimal point), but casting the other way won't work at all for very large numbers. You can cast the bottom part explicitly to numeric(38,2) to avoid this.
SELECT sum(seconds) * 0.95 AS S
FROM myTable
UNION ALL
SELECT CAST(sum(seconds) AS NUMERIC(38, 2))
FROM myTable
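If you want to confirm the types SQL Server actually assigns to each expression, SQL_VARIANT_PROPERTY can report their precision and scale. A quick diagnostic sketch against the same table:

```sql
-- Inspect the precision and scale SQL Server assigns to each expression.
SELECT
    SQL_VARIANT_PROPERTY(CAST(SUM(seconds) AS sql_variant), 'Precision') AS SumPrecision,
    SQL_VARIANT_PROPERTY(CAST(SUM(seconds) AS sql_variant), 'Scale')     AS SumScale,
    SQL_VARIANT_PROPERTY(CAST(SUM(seconds) * 0.95 AS sql_variant), 'Precision') AS ProductPrecision,
    SQL_VARIANT_PROPERTY(CAST(SUM(seconds) * 0.95 AS sql_variant), 'Scale')     AS ProductScale
FROM myTable;
```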