Four simple SELECT statements:
SELECT 33883.50 * -1;
SELECT 33883.50 / -1.05;
SELECT 33883.50 * -1 / 1.05;
SELECT (33883.50 * -1) / 1.05;
But the results are not as I would expect:
-33883.50
-32270.000000
-32269.96773000
-32270.000000
That third result is the one that seems questionable. I can see what is happening: first, SQL Server evaluates this:
SELECT -1 / 1.05;
Getting an answer of:
-0.952380
Then it takes that answer and uses it to perform this calculation:
SELECT 33883.50 * -0.952380;
To get the (wrong) answer of:
-32269.96773000
But why is it doing this?
In your example
33883.50 * -1 / 1.05
is evaluated as
33883.50 * (-1 / 1.05)
instead of
(33883.50 * -1) / 1.05
which results in a loss of precision.
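To see where the extra decimal places come from, you can ask SQL Server what type it infers for the intermediate result. A minimal sketch using SQL_VARIANT_PROPERTY (the aliases are illustrative; by the documented precision/scale rules for decimal division this should report numeric(18,6), which is why -1 / 1.05 prints as -0.952380):

SELECT
    SQL_VARIANT_PROPERTY(-1 / 1.05, 'BaseType')  AS [BaseType],   -- numeric
    SQL_VARIANT_PROPERTY(-1 / 1.05, 'Precision') AS [Precision],  -- 18
    SQL_VARIANT_PROPERTY(-1 / 1.05, 'Scale')     AS [Scale];      -- 6

Multiplying that six-decimal intermediate by 33883.50 (scale 2) then yields a result scale of 6 + 2 = 8, which matches the -32269.96773000 above.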
I played a bit with it. I used SQL Sentry Plan Explorer to see the details of how SQL Server evaluates expressions. For example,
2 * 3 * -4 * 5 * 6
is evaluated as
((2)*(3)) * ( -((4)*(5))*(6))
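One way to reproduce this observation without Plan Explorer is to stop SQL Server from constant-folding the expression, so its shape survives into the query plan. A minimal sketch, assuming local variables are enough to defeat folding (they usually are); the grouping then shows up in the Compute Scalar operator of the actual plan:

DECLARE @a int = 2, @b int = 3, @c int = 4, @d int = 5, @e int = 6;
-- Same shape as 2 * 3 * -4 * 5 * 6, but not foldable at compile time:
SELECT @a * @b * -@c * @d * @e;  -- returns -720; inspect the plan's scalar expression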
I'd explain it like this. In T-SQL, unary minus is given the same priority as subtraction, which is lower than multiplication. Yes,
"When two operators in an expression have the same operator precedence level, they are evaluated left to right based on their position in the expression."
but here we have an expression that mixes operators with different priorities, and the parser follows those priorities to the letter. Multiplication has to go first, so it evaluates 4 * 5 * 6 first and then applies unary minus to the result.
Normally (say, in C++) unary minus has a higher priority (like bitwise NOT), and such expressions are parsed and evaluated as expected. They should have given unary minus/plus the same top priority as bitwise NOT in T-SQL, but they didn't, and this is the result. So it is not a bug, but a bad design decision. It is even documented, though quite obscurely.
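Given that, a simple defensive workaround (a sketch; the parentheses just stop the unary minus from capturing the division) is to parenthesize the negated operand yourself:

-- With (-1) forced to be a plain operand, * and / have equal priority
-- and are evaluated left to right:
SELECT 33883.50 * (-1) / 1.05;  -- -32270.000000, same as (33883.50 * -1) / 1.05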
When you refer to Oracle - that the same example works differently in Oracle than in SQL Server - the difference most likely comes down to how each product determines the precision and scale of the decimal type. In SQL Server, "a constant with a decimal point is automatically converted into a numeric data value, using the minimum precision and scale necessary. For example, the constant 12.345 is converted into a numeric value with a precision of 5 and a scale of 3." The rules for decimal may be different in Oracle. Even in SQL Server, "the default maximum precision of numeric and decimal data types is 38. In earlier versions of SQL Server, the default maximum is 28."
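As a quick check of that conversion rule, the same SQL_VARIANT_PROPERTY trick shows the type SQL Server infers for a literal (a sketch; the expected values follow from the documentation quoted above):

SELECT
    SQL_VARIANT_PROPERTY(12.345, 'Precision') AS [Precision],  -- 5
    SQL_VARIANT_PROPERTY(12.345, 'Scale')     AS [Scale];      -- 3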