I have a query doing something like:
SELECT FieldX, FieldY FROM A
WHERE FieldW IN (108, 109, 113, 138, 146, 160,
307, 314, 370, 371, 441, 454, 457, 458, 479, 480,
485, 488, 490, 492, 519, 523, 525, 534, 539, 543,
546, 547, 550, 564, 573, 629, 642, 643, 649, 650,
651, 694, 698, 699, 761, 762, 768, 772, 773, 774,
775, 778, 784, 843, 844, 848, 851, 852, 853, 854,
855, 856, 857, 858, 859, 860, 861, 862, 863, 864,
865, 868, 869, 871, 872, 873, 891)
Is an IN clause with this many values bad for query performance? I'm experiencing many timeouts in my application, and I believe this could be one source of the problem. Can I optimize the query without removing the numbers, perhaps using a SQL hint?
EDIT:
@KM: these are keys in a different table. This is a forum application. Briefly: the C# code gets all forums from the database and stores them in the application cache. Before calling the stored procedure that gets the threads for these forums and this user, the C# code filters the "all forums" collection according to permissions and some business logic. The timeout happens in the database, not in the application itself. Doing all this logic in the query would require a lot of inner joins, and I'm not 100% sure I can do it all inside the procedure.
I'm using SQL Server 2000
There are several considerations when writing a query using the IN operator that can have an effect on performance.
First, IN clauses are generally rewritten internally by most databases to use the OR logical connective, so col IN ('a','b','c') becomes (col = 'a') OR (col = 'b') OR (col = 'c'). The execution plans for the two forms will likely be equivalent, assuming you have an index on col.
Second, when using either IN or OR with a variable number of arguments, you force the database to re-parse the query and rebuild an execution plan each time the arguments change. Building the execution plan for a query can be an expensive step. Most databases cache execution plans using the EXACT query text as the key, so executing a similar query with different argument values in the predicate will most likely cause the database to spend a significant amount of time parsing and building execution plans. This is why bind variables are strongly recommended as a way to ensure optimal query performance.
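On SQL Server, the bind-variable approach can be sketched with sp_executesql (available in SQL Server 2000): because the query text stays identical and only the parameter value changes, the cached plan is reused. The table and column names below are taken from the question.

```sql
-- Sketch: parameterized query whose cached plan is reused across executions.
EXEC sp_executesql
    N'SELECT FieldX, FieldY FROM A WHERE FieldW = @id',
    N'@id int',
    @id = 108;

EXEC sp_executesql
    N'SELECT FieldX, FieldY FROM A WHERE FieldW = @id',
    N'@id int',
    @id = 109;  -- same plan as above, different bind value
```

Passing a whole variable-length list this way is awkward in SQL Server 2000, which is one reason the temp-table approach below is popular.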
Third, many databases have a limit on the complexity of queries they can execute; one of those limits is the number of logical connectives that can be included in the predicate. In your case, a few dozen values are unlikely to reach the database's built-in limit, but if you expect to pass hundreds or thousands of values to an IN clause, it can definitely happen, in which case the database will simply reject the query.
Fourth, queries that include IN or OR in the predicate cannot always be optimally rewritten in a parallel environment. There are various cases where parallel server optimizations are not applied - MSDN has a decent introduction to optimizing queries for parallelism. Generally, though, queries that use the UNION ALL operator are trivially parallelizable in most databases, and are preferred to logical connectives (like OR and IN) when possible.
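As an illustration of that rewrite (not necessarily faster for this particular query), an IN over a few constants can be expressed as UNION ALL branches; since each FieldW value is a distinct constant, the branches cannot return overlapping rows, and each branch can use the index on FieldW independently:

```sql
-- Sketch: IN (108, 109, 113) rewritten as UNION ALL branches.
SELECT FieldX, FieldY FROM A WHERE FieldW = 108
UNION ALL
SELECT FieldX, FieldY FROM A WHERE FieldW = 109
UNION ALL
SELECT FieldX, FieldY FROM A WHERE FieldW = 113;
```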
You can try creating a temporary table, inserting your values into it, and using that table in the IN predicate instead. AFAIK, SQL Server 2000 cannot build a hash table from a set of constants, which deprives the optimizer of the possibility of using a HASH SEMI JOIN. This will help only if you don't have an index on FieldW (which you should have).
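A sketch of the temp-table approach, using the table and column names from the question (SQL Server 2000 has no multi-row VALUES syntax, so each value needs its own INSERT, typically generated by the C# code):

```sql
-- Sketch: collect the ids in a temp table, then join instead of IN.
CREATE TABLE #ids (id int NOT NULL PRIMARY KEY);

INSERT INTO #ids (id) VALUES (108);
INSERT INTO #ids (id) VALUES (109);
-- ... one INSERT per id in the list

SELECT a.FieldX, a.FieldY
FROM A a
INNER JOIN #ids i ON i.id = a.FieldW;

DROP TABLE #ids;
```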
You can also try including your FieldX and FieldY columns in the index:
CREATE INDEX ix_a_wxy ON a (FieldW, FieldX, FieldY)
so that the query can be served from the index alone. SQL Server 2000 lacks the INCLUDE option for CREATE INDEX, so this may degrade DML performance a little, but it will improve the query performance.
Update:
From your execution plan I see that you need a composite index on (SettingsID, SectionID).
SQL Server 2000 indeed can build a hash table out of a constant list (and does so), but a Hash Semi Join will most probably be less efficient than a Nested Loop for your query.
And just a side note: if you need to know the count of rows satisfying the WHERE condition, don't use COUNT(column); use COUNT(*) instead. COUNT(column) does not count the rows for which the column value is NULL. This means that, first, you can get results you didn't expect, and, second, the optimizer will need to do an extra Key Lookup / Bookmark Lookup if your column is not covered by the index that serves the WHERE condition. Since ThreadId seems to be a CLUSTERED PRIMARY KEY, it's all right for this particular query, but try to avoid the pattern in general.
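A quick sketch of the difference on a nullable column:

```sql
-- COUNT(*) counts all rows; COUNT(col) skips rows where col is NULL.
CREATE TABLE #t (col int NULL);
INSERT INTO #t (col) VALUES (1);
INSERT INTO #t (col) VALUES (NULL);

SELECT COUNT(*) FROM #t;    -- returns 2
SELECT COUNT(col) FROM #t;  -- returns 1

DROP TABLE #t;
```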
If you have a good index on FieldW, using IN like that is perfectly fine.
I have just tested this, and SQL Server 2000 does a Clustered Index Scan when using the IN.
Depending on your data distribution, additional predicates in your WHERE clause may improve performance. For example, if the set of ids is small relative to the total number in the table, and you know the ids are relatively close together (perhaps they will usually be recent additions, and therefore clustered at the high end of the range), you could try including the predicate "AND FieldW BETWEEN 108 AND 891" (after determining the min and max id in your set in the C# code). It may be that doing a range scan on those columns (if indexed) works faster than what is currently being used.
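A sketch of that combination, with the bounds (108 and 891 here, the min and max of the question's list) assumed to be computed in the C# code and an abbreviated id list for readability:

```sql
-- Sketch: a range predicate narrowing the scan before the IN filter.
SELECT FieldX, FieldY
FROM A
WHERE FieldW BETWEEN 108 AND 891            -- min/max of the id set
  AND FieldW IN (108, 109, 113, 138, 146);  -- abbreviated list
```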