I am totally confused by the way T-SQL handles type precedence with the various numeric types used in a CASE expression. Could you please explain it using the following test:
-- Consider the query:
SELECT
CASE
WHEN 1=1 THEN CAST(1.555 AS numeric(16,3))
ELSE CEILING((1+1) * CAST(1 AS Numeric(16,2)) * CAST(1 AS int))
END AS Test
-- This returns 2 (scale = 0)
-- Now, remove the CEILING function:
SELECT
CASE
WHEN 1=1 THEN CAST(1.555 AS numeric(16,3))
ELSE (1+1) * CAST(1 AS Numeric(16,2)) * CAST(1 AS int)
END AS Test
-- and it gives 1.56 (scale = 2)
-- Now replace (1+1) with 2:
SELECT
CASE
WHEN 1=1 THEN CAST(1.555 AS numeric(16,3))
ELSE (2) * CAST(1 AS Numeric(16,2)) * CAST(1 AS int)
END AS Test
-- and it yields 1.555 (scale = 3)
It seems wrong to me because in all three queries the numeric(16,3) in the 1=1 branch should take precedence over the less precise result of the ELSE branch.
The issue is that the ELSE branch of the CASE has a different datatype in each query.
SELECT CAST(1.555 AS NUMERIC(16, 3)) AS A,
CEILING(( 1 + 1 ) * CAST(1 AS NUMERIC(16, 2)) * CAST(1 AS INT)) AS B,--NUMERIC(38,0)
CAST(1.555 AS NUMERIC(16, 3)) AS C,
( 1 + 1 ) * CAST(1 AS NUMERIC(16, 2)) * CAST(1 AS INT) AS D,--NUMERIC(38,2)
CAST(1.555 AS NUMERIC(16, 3)) AS E,
( 2 ) * CAST(1 AS NUMERIC(16, 2)) * CAST(1 AS INT) AS F --NUMERIC(29,2)
INTO T
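One way to see the types SQL Server actually chose is to read them back from the catalog after the SELECT ... INTO above has run (a quick sketch; the column names are the aliases created above):

SELECT COLUMN_NAME,
       DATA_TYPE,
       NUMERIC_PRECISION,
       NUMERIC_SCALE
FROM   INFORMATION_SCHEMA.COLUMNS
WHERE  TABLE_NAME = 'T';   -- columns B, D and F carry the inferred types noted in the comments above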
The maximum precision for numeric is 38.
The first one has an ELSE branch of NUMERIC(38,0), so that is the return type of the whole CASE as well. If the ELSE branch evaluated to 99999999999999999999999999999999999999, no type with any decimal places would be able to hold it.
The second one has an ELSE branch of NUMERIC(38,2), so again that is the final datatype, for similar reasons. To preserve three digits of scale it would need to be NUMERIC(38,3), but then 999999999999999999999999999999999999.99 wouldn't fit.
The third one has an ELSE branch of NUMERIC(29,2). This is below the maximum precision of 38, so there is room to expand a bit and preserve the scale: the final return type is NUMERIC(30,3).
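If you would rather confirm the CASE result types directly instead of going via SELECT ... INTO, sys.dm_exec_describe_first_result_set can report them (a sketch, assuming SQL Server 2012 or later; swap in the second and third queries to see NUMERIC(38,2) and NUMERIC(30,3)):

SELECT name, system_type_name
FROM   sys.dm_exec_describe_first_result_set(N'
    SELECT CASE
           WHEN 1=1 THEN CAST(1.555 AS numeric(16,3))
           ELSE CEILING((1+1) * CAST(1 AS numeric(16,2)) * CAST(1 AS int))
           END AS Test', NULL, 0);
-- expected: Test    numeric(38,0)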
Of course this just moves the question on a bit, to why the ELSE branches all evaluate to different types in the first place.
In the normal course of events the following expressions both evaluate to int (check the definition of the created table):
SELECT ( 1 + 1 ) AS A,
( 2 ) AS B
INTO T2
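You can also ask for the types of those two expressions directly with SQL_VARIANT_PROPERTY (a quick sketch; the aliases are mine):

SELECT SQL_VARIANT_PROPERTY(1 + 1, 'BaseType') AS TypeOfOnePlusOne,   -- int
       SQL_VARIANT_PROPERTY(2,     'BaseType') AS TypeOfTwo;          -- int

Both report int.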
So there is no obvious reason for the differing behaviour between those two. But I suspect that, in the numeric multiplication, the literal 2 is treated as NUMERIC(1,0) by inspecting the actual value, whereas the slightly more complicated 1 + 1 expression gets treated as any unknown int would be, i.e. as NUMERIC(10,0).
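That guess is at least consistent with what SQL_VARIANT_PROPERTY reports for the two full multiplication expressions (a sketch; the aliases are mine, and the expected values are the ones noted in the comments on D and F above):

SELECT SQL_VARIANT_PROPERTY((1 + 1) * CAST(1 AS numeric(16,2)) * CAST(1 AS int), 'Precision') AS D_Precision,  -- 38
       SQL_VARIANT_PROPERTY((1 + 1) * CAST(1 AS numeric(16,2)) * CAST(1 AS int), 'Scale')     AS D_Scale,      -- 2
       SQL_VARIANT_PROPERTY((2)     * CAST(1 AS numeric(16,2)) * CAST(1 AS int), 'Precision') AS F_Precision,  -- 29
       SQL_VARIANT_PROPERTY((2)     * CAST(1 AS numeric(16,2)) * CAST(1 AS int), 'Scale')     AS F_Scale;      -- 2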
You then get into the documented rules for how precision and scale are preserved when multiplying and dividing numerics.
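For multiplication the documented rule is: result precision = p1 + p2 + 1 and result scale = s1 + s2, capped at a maximum precision of 38. Applying that here (my own arithmetic, assuming the literal typings guessed above):

-- (1 + 1) treated as an ordinary int, i.e. NUMERIC(10,0):
--   NUMERIC(10,0) * NUMERIC(16,2) -> NUMERIC(10+16+1, 0+2) = NUMERIC(27,2)
--   NUMERIC(27,2) * NUMERIC(10,0) -> NUMERIC(27+10+1, 2+0) = NUMERIC(38,2)
-- (2) treated as NUMERIC(1,0):
--   NUMERIC(1,0)  * NUMERIC(16,2) -> NUMERIC(1+16+1, 0+2)  = NUMERIC(18,2)
--   NUMERIC(18,2) * NUMERIC(10,0) -> NUMERIC(18+10+1, 2+0) = NUMERIC(29,2)

which matches the NUMERIC(38,2) and NUMERIC(29,2) seen above.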