The Activity Monitor in SQL 2008 lets us see the most expensive queries. OK, that's cool, but is there a way I can log this info or get it via a query? I don't really want to keep SQL Server Management Studio open just to stare at the Activity Monitor dashboard.
I want to figure out which queries are poorly written/schema is poorly designed, etc.
Thanks heaps for any help!
The Active Expensive Queries and Recent Expensive Queries panes show the queries with high CPU, logical reads, or elapsed time. Go to either pane, sort by Elapsed Time, Logical Reads, and CPU Time one by one, and check the execution plan of each offender.
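If you would rather pull this information with a query instead of watching Activity Monitor, the `sys.dm_exec_query_stats` DMV exposes similar numbers for every cached plan. A sketch (ordered by logical reads here; swap the ORDER BY column for CPU or elapsed time as needed):

```sql
-- Top 20 most expensive cached statements by logical reads.
-- Times are in microseconds; figures reset when a plan leaves the cache.
SELECT TOP 20
    qs.execution_count,
    qs.total_logical_reads,
    qs.total_worker_time  AS total_cpu_us,
    qs.total_elapsed_time AS total_elapsed_us,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;
```

Because this reads the plan cache rather than a trace, it only covers queries whose plans are still cached, but it needs no setup and can be logged to a table on a schedule.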
As a rule of thumb, I would expect updates to be the most expensive operations on average, since they involve both reads and writes plus index maintenance.
Common causes of expensive queries:

- A lack of relevant indexes, causing slow lookups on large tables.
- Unused indexes, causing slow INSERT, UPDATE, and DELETE operations.
- An inefficient schema leading to bad queries.
- Inefficiently designed queries.
Use SQL Server Profiler (on the tools menu in SSMS) to create a trace that logs these events:
- RPC:Completed
- SP:Completed
- SP:StmtCompleted
- SQL:BatchCompleted
- SQL:StmtCompleted
You can start with the standard trace template and prune it. You didn't specify whether this is for a specific database or the whole server; if it is for specific databases, include the DatabaseID column and set a filter on it to your DB (use SELECT DB_ID('dbname') to find the ID). Make sure the Reads data column is included for each event. Set the trace to log to a file. If you are leaving this trace to run unattended in the background, it is a good idea to set a maximum trace file size, say 500MB, or 1GB if you have plenty of room (it all depends on how much activity there is on the server, so you will have to try it and see).
Briefly start the trace and then pause it. Go to File -> Export -> Script Trace Definition, pick your DB version, and save to a file. You now have a SQL script that creates a trace with much less overhead than running it through the Profiler GUI. When you run this script it will output the trace ID (usually @ID = 2); note this down.
Once you have a trace file (.trc), either because the trace reached its maximum file size or because you stopped and closed the running trace with:

EXEC sp_trace_setstatus @ID, 0  -- stop the trace
EXEC sp_trace_setstatus @ID, 2  -- close it and delete its definition
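If you lose track of the trace ID, the `sys.traces` catalog view lists every trace defined on the server (the default trace is usually ID 1):

```sql
-- Find server-side traces, their state, and where they are writing.
SELECT id, status, path, max_size, event_count
FROM sys.traces;
```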
You can load the trace file into Profiler, use ClearTrace (very handy), or load it into a table like so:
SELECT * INTO TraceTable FROM ::fn_trace_gettable('C:\location of your trace output.trc', default)
Then you can run a query to aggregate the data such as this one:
SELECT COUNT(*) AS TotalExecutions,
       EventClass,
       CAST(TextData AS nvarchar(2000)) AS QueryText,
       SUM(Duration) AS DurationTotal,
       SUM(CPU) AS CPUTotal,
       SUM(Reads) AS ReadsTotal,
       SUM(Writes) AS WritesTotal
FROM TraceTable
GROUP BY EventClass, CAST(TextData AS nvarchar(2000))
ORDER BY ReadsTotal DESC
Once you have identified the costly queries, you can generate and examine the actual execution plans.
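One way to get those plans without re-running the queries is to look them up in the plan cache. The LIKE pattern below is a placeholder; substitute a distinctive fragment of the text captured in your trace:

```sql
-- Fetch cached plans for statements matching a query of interest.
-- Clicking the query_plan XML in SSMS opens the graphical plan.
SELECT st.text, qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE '%your query text here%';
```

This only works while the plan is still cached; for queries that have aged out, re-run them in SSMS with "Include Actual Execution Plan" enabled instead.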