 

Adding Conditional around query increases time by over 2400%

Tags:

sql-server

Update: I will get the query plan as soon as I can.

We had a poorly performing query that took 4 minutes for a particular organization. After the usual steps of recompiling the stored procedure and updating statistics didn't help, we rewrote the IF EXISTS(...) as a SELECT COUNT(*) into a variable, and the stored procedure went from 4 minutes to 70 milliseconds. What is it about the conditional that makes a 70 ms query take 4 minutes? See the examples:

These all take 4+ minutes:

if (
  SELECT COUNT(*)       
    FROM ObservationOrganism  omo
    JOIN Observation          om  ON  om.ObservationID  = omo.ObservationMicID
    JOIN Organism             o   ON  o.OrganismID      = omo.OrganismID
    JOIN ObservationMicDrug   omd ON  omd.ObservationOrganismID = omo.ObservationOrganismID
    JOIN SIRN                 srn ON  srn.SIRNID        = omd.SIRNID
    JOIN OrganismDrug         od  ON  od.OrganismDrugID = omd.OrganismDrugID
  WHERE
    om.StatusCode IN ('F', 'C')
    AND o.OrganismGroupID <> -1
    AND od.OrganismDrugGroupID <> -1
    AND (om.LabType <> 'screen' OR om.LabType IS NULL)) > 0

print 'records';       

-

IF (EXISTS(
  SELECT *       
    FROM ObservationOrganism  omo
    JOIN Observation          om  ON  om.ObservationID  = omo.ObservationMicID
    JOIN Organism             o   ON  o.OrganismID      = omo.OrganismID
    JOIN ObservationMicDrug   omd ON  omd.ObservationOrganismID = omo.ObservationOrganismID
    JOIN SIRN                 srn ON  srn.SIRNID        = omd.SIRNID
    JOIN OrganismDrug         od  ON  od.OrganismDrugID = omd.OrganismDrugID
  WHERE
    om.StatusCode IN ('F', 'C')
    AND o.OrganismGroupID <> -1
    AND od.OrganismDrugGroupID <> -1
    AND (om.LabType <> 'screen' OR om.LabType IS NULL)))

print 'records'

This one takes 70 milliseconds:

Declare @recordCount INT;
SELECT @recordCount = COUNT(*)       
    FROM ObservationOrganism  omo
    JOIN Observation          om  ON  om.ObservationID  = omo.ObservationMicID
    JOIN Organism             o   ON  o.OrganismID      = omo.OrganismID
    JOIN ObservationMicDrug   omd ON  omd.ObservationOrganismID = omo.ObservationOrganismID
    JOIN SIRN                 srn ON  srn.SIRNID        = omd.SIRNID
    JOIN OrganismDrug         od  ON  od.OrganismDrugID = omd.OrganismDrugID
  WHERE
    om.StatusCode IN ('F', 'C')
    AND o.OrganismGroupID <> -1
    AND od.OrganismDrugGroupID <> -1
    AND (om.LabType <> 'screen' OR om.LabType IS NULL);

IF(@recordCount > 0)
  print 'records';

It doesn't make sense to me why moving the exact same COUNT(*) query into an IF statement causes such degradation, or why EXISTS is slower than COUNT. I even tried the EXISTS() in a SELECT CASE WHEN EXISTS(...), and it still takes 4+ minutes.

Asked Sep 25 '22 by DanCaveman

1 Answer

Given that my previous answer was mentioned, I'll try to explain again because these things are pretty tricky. So yes, I think you're seeing the same problem as the other question. Namely a row goal issue.

So, to try and explain what's causing this, I'll start with the three join types at the engine's disposal: loop joins, merge joins, and hash joins. Loop joins are what they sound like: a nested loop over both sets of data. Merge joins take two sorted lists and move through them in lock-step. Hash joins throw everything in the smaller set into a filing cabinet (a hash table) and then look up items from the larger set once the filing cabinet has been filled.

Performance-wise, loop joins require almost no set-up, and if you're only looking for a small amount of data they're optimal. Merge joins are the best of the best as far as join performance for any data size, but require the data to already be sorted (which is rare). Hash joins require a fair amount of set-up but allow large data sets to be joined quickly.
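To see the three strategies for yourself, you can force each physical join type with a T-SQL join hint and compare the resulting plans. A minimal sketch; the Parent and Child tables here are hypothetical, used only for illustration:

-- Nested loops: cheap start-up, good when few rows are needed
SELECT p.ID
  FROM Parent p
  INNER LOOP JOIN Child c ON c.ParentID = p.ID;

-- Merge: walks two sorted inputs in lock-step (may add a Sort operator)
SELECT p.ID
  FROM Parent p
  INNER MERGE JOIN Child c ON c.ParentID = p.ID;

-- Hash: builds a hash table on the smaller input, then probes it
SELECT p.ID
  FROM Parent p
  INNER HASH JOIN Child c ON c.ParentID = p.ID;

Running these with the actual execution plan enabled shows how differently the engine iterates the same two tables under each hint.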

Now we get to your query and the difference between COUNT(*) and EXISTS/TOP 1. The behavior you're seeing is that the optimizer thinks rows from this query are really likely (you can confirm this by looking at the estimated plan for the query without the conditional and seeing how many records it expects in the last step). In particular, it probably thinks that for some table in the query, every record in that table will appear in the output.

"Eureka!" it says, "if every row in this table ends up in the output, then to find whether one exists I can use the loop join with its really cheap start-up, because even though it's slow for large data sets, I only need one row." But then it doesn't find that row. And doesn't find it again. And now it's iterating through a vast set of data using the least efficient means at its disposal for weeding through large sets.

By comparison, if you ask for the full count of data, it has to find every record by definition. It sees a vast set of data and picks the choices that are best for iterating through that entire set of data instead of just a tiny sliver of it.

If, on the other hand, it really was correct and the records were very well correlated it would have found your record with the smallest possible amount of server resources and maximized its overall throughput.
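If you want to keep a short-circuiting existence check rather than the full COUNT(*), one workaround (a sketch, not part of the original answer) is to keep TOP (1) but tell the optimizer not to apply the row goal, so it costs the plan as if all rows were needed. The USE HINT syntax requires SQL Server 2016 SP1 or later; on older builds, trace flag 4138 has the same effect:

DECLARE @found bit = 0;

-- TOP (1) still stops at the first matching row at run time, while
-- DISABLE_OPTIMIZER_ROWGOAL makes the optimizer pick the plan it
-- would choose for the full result set (e.g. hash joins).
SELECT TOP (1) @found = 1
    FROM ObservationOrganism  omo
    JOIN Observation          om  ON  om.ObservationID  = omo.ObservationMicID
    JOIN Organism             o   ON  o.OrganismID      = omo.OrganismID
    JOIN ObservationMicDrug   omd ON  omd.ObservationOrganismID = omo.ObservationOrganismID
    JOIN SIRN                 srn ON  srn.SIRNID        = omd.SIRNID
    JOIN OrganismDrug         od  ON  od.OrganismDrugID = omd.OrganismDrugID
  WHERE
    om.StatusCode IN ('F', 'C')
    AND o.OrganismGroupID <> -1
    AND od.OrganismDrugGroupID <> -1
    AND (om.LabType <> 'screen' OR om.LabType IS NULL)
OPTION (USE HINT ('DISABLE_OPTIMIZER_ROWGOAL'));

IF (@found = 1)
  print 'records';

This keeps the early-exit semantics of EXISTS while avoiding the optimistic loop-join plan described above.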

Answered Oct 11 '22 by Tim Tom