How Can I Log and Find the Most Expensive Queries?

The Activity Monitor in SQL Server 2008 allows us to see the most expensive queries. OK, that's cool, but is there a way I can log this info or get it via a query? I don't really want to have SQL Server Management Studio open, staring at the Activity Monitor dashboard.

I want to figure out which queries are poorly written, where the schema is poorly designed, and so on.

Thanks heaps for any help!

Pure.Krome asked Nov 03 '08

People also ask

How do you find expensive queries in SQL?

The Active Expensive Queries and Recent Expensive Queries panes show the queries with high CPU, logical reads, or elapsed time. Open either pane, sort by Elapsed Time, Logical Reads, and CPU Time in turn, and check the execution plans of the top offenders.
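
If you would rather not keep Activity Monitor open, the same kind of information can be pulled from the plan cache with a query. A minimal sketch against sys.dm_exec_query_stats (SQL Server 2005 and later; ranking by logical reads is just one reasonable choice, and the TOP 20 cutoff is arbitrary):

    -- Top 20 cached statements by total logical reads since they were cached
    SELECT TOP 20
           qs.execution_count,
           qs.total_logical_reads,
           qs.total_worker_time AS total_cpu,
           qs.total_elapsed_time,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
               ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                 END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_logical_reads DESC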

What is the most costly operation in SQL?

In summary, I would expect updates on average to be the most expensive operation, since an UPDATE has to locate the rows (a read) and then write them, maintaining any indexes on the changed columns as it goes.

What makes a query expensive?

Causes of expensive queries:

- A lack of relevant indexes, causing slow lookups on large tables.
- Unused indexes, causing slow INSERT, UPDATE, and DELETE operations.
- An inefficient schema leading to bad queries.
- Inefficiently designed queries.
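
For the first cause, the missing-index DMVs give a quick (if rough) way to check: the optimizer records indexes it wished it had while compiling plans. A sketch; note these counters reset on instance restart, and the suggestions are hints rather than a design:

    SELECT TOP 20
           d.statement AS table_name,
           d.equality_columns,
           d.inequality_columns,
           d.included_columns,
           s.user_seeks + s.user_scans AS potential_uses,
           s.avg_total_user_cost * s.avg_user_impact
               * (s.user_seeks + s.user_scans) AS rough_benefit_score
    FROM sys.dm_db_missing_index_details AS d
    JOIN sys.dm_db_missing_index_groups AS g
        ON g.index_handle = d.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS s
        ON s.group_handle = g.index_group_handle
    ORDER BY rough_benefit_score DESC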


1 Answer

  1. Use SQL Server Profiler (on the Tools menu in SSMS) to create a trace that logs these events:

     - RPC:Completed
     - SP:Completed
     - SP:StmtCompleted
     - SQL:BatchCompleted
     - SQL:StmtCompleted
  2. You can start with the standard trace template and prune it. You didn't specify whether this is for a specific database or the whole server; if it is for specific DBs, include the DatabaseID column and set a filter to your DB (get the ID with SELECT DB_ID('dbname')). Make sure the Reads data column (logical reads) is included for each event. Set the trace to log to a file. If you are leaving this trace to run unattended in the background, it is a good idea to set a maximum trace file size, say 500MB, or 1GB if you have plenty of room (it all depends on how much activity there is on the server, so you will have to suck it and see).

  3. Briefly start the trace and then pause it. Go to File -> Export -> Script Trace Definition, pick your DB version, and save to a file. You now have a SQL script that creates a trace with much less overhead than running through the Profiler GUI (a trimmed-down sketch of such a script follows this list). When you run this script it will output the trace ID (usually @ID = 2); note this down.

  4. Let the trace run. It finishes on its own when it reaches the maximum file size, or you can stop it manually:

     EXEC sp_trace_setstatus @ID, 0  -- stop the trace
     EXEC sp_trace_setstatus @ID, 2  -- close the trace and delete its definition from the server
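
For reference, the exported script is built from the sp_trace_* stored procedures. Here is a trimmed-down sketch of the shape it takes, assuming a single event (the real exported script sets many more event/column pairs, and the file path and maximum size below are placeholders):

    DECLARE @ID INT
    DECLARE @on BIT
    DECLARE @maxsize BIGINT
    SET @on = 1
    SET @maxsize = 500  -- maximum file size in MB

    -- option 2 = TRACE_FILE_ROLLOVER; the .trc extension is appended automatically
    EXEC sp_trace_create @ID OUTPUT, 2, N'C:\Traces\ExpensiveQueries', @maxsize, NULL

    -- event 12 = SQL:BatchCompleted; columns: 1 = TextData, 13 = Duration,
    -- 16 = Reads, 17 = Writes, 18 = CPU
    EXEC sp_trace_setevent @ID, 12, 1,  @on
    EXEC sp_trace_setevent @ID, 12, 13, @on
    EXEC sp_trace_setevent @ID, 12, 16, @on
    EXEC sp_trace_setevent @ID, 12, 17, @on
    EXEC sp_trace_setevent @ID, 12, 18, @on

    EXEC sp_trace_setstatus @ID, 1  -- start the trace
    SELECT @ID AS TraceID           -- note this down for stopping the trace later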

Once you have a trace file (.trc), either because the trace reached its maximum size or because you stopped it, you can load it back into Profiler, use ClearTrace (very handy), or load it into a table like so:

SELECT * INTO TraceTable FROM ::fn_trace_gettable('C:\location of your trace output.trc', default) 

Then you can run a query to aggregate the data such as this one:

SELECT COUNT(*) AS TotalExecutions,
       EventClass,
       CAST(TextData AS nvarchar(2000)),
       SUM(Duration) AS DurationTotal,
       SUM(CPU) AS CPUTotal,
       SUM(Reads) AS ReadsTotal,
       SUM(Writes) AS WritesTotal
FROM TraceTable
GROUP BY EventClass, CAST(TextData AS nvarchar(2000))
ORDER BY ReadsTotal DESC

Once you have identified the costly queries, you can generate and examine the actual execution plans.
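
For example, in SSMS you can enable the actual plan (Query -> Include Actual Execution Plan, or Ctrl+M) and turn on I/O and timing statistics before re-running a suspect statement. A minimal illustration; the query itself is a placeholder:

    SET STATISTICS IO ON    -- per-table logical/physical read counts
    SET STATISTICS TIME ON  -- parse/compile and execution CPU/elapsed times

    -- placeholder: substitute one of the costly statements found in the trace
    SELECT * FROM dbo.SomeLargeTable WHERE SomeColumn = 42

    SET STATISTICS IO OFF
    SET STATISTICS TIME OFF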

Mitch Wheat answered Oct 08 '22