We are trying to diagnose slowness in a complex stored procedure (it has a couple of huge queries).
When we call the SP from ASP.NET, it takes 5 seconds.
When we call it from SQL Management Studio (just using EXEC), it takes 0.05 seconds.
We have tested this behavior consistently in many different ways and circumstances.
This is with C#.NET. The database is MS SQL Server 2012.
The problem is with a web app, but we wrote a small console app as a test harness and the behavior is the same.
1) We calculate the elapsed time in the C#.NET console app like this:
var stopwatch = new Stopwatch();  // System.Diagnostics
stopwatch.Start();
rdr = cmd.ExecuteReader();
stopwatch.Stop();
2) We compute the elapsed time within the SQL procedure by calling GETDATE() before and after the query, and then storing those times in a little table. We can query that table in SQL Mgmt Studio to see how long the queries inside the SP took.
This way we can see how much time is spent in the SQL versus the whole, and 99% of it is spent in SQL.
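The in-procedure timing described in step 2 can be sketched like this (the table and procedure names are hypothetical, not from the actual system):

```sql
-- Hypothetical timing table; names are illustrative only.
CREATE TABLE dbo.ProcTimings (
    StepName  varchar(50),
    StartTime datetime,
    EndTime   datetime
);
GO
ALTER PROCEDURE dbo.MyBigProc
AS
BEGIN
    DECLARE @t0 datetime = GETDATE();

    -- ... the huge query being measured ...

    DECLARE @t1 datetime = GETDATE();
    INSERT INTO dbo.ProcTimings (StepName, StartTime, EndTime)
    VALUES ('HugeQuery1', @t0, @t1);
END;
```

Querying `dbo.ProcTimings` from SSMS afterwards shows how long each step inside the procedure took, independent of the client.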
But it's hard to debug and improve if it is not also slow in SQL Mgmt Studio.
So my question is: why the difference? Could it be that SQL Mgmt Studio is acquiring locks differently from the console application?
Storage of Execution Plan – One of the most common reasons for a stored procedure that is slow from one client but not another is the cached execution plan: SQL Server reuses a plan that was compiled for earlier parameter values, and that plan may be a poor fit for the current call (parameter sniffing). To investigate, look the procedure up in the plan cache and inspect the plan it is actually reusing.
Note that plan caching is normally a performance win: the compiled plan is kept in server memory and reused, which makes repeated executions faster than recompiling ad hoc SQL each time. It only becomes a problem when the cached plan does not suit the current parameters or session settings.
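A sketch of how to look for the procedure's plan in the cache using the standard DMVs (requires VIEW SERVER STATE; the procedure name is a placeholder):

```sql
-- Find cached plans for a stored procedure and their aggregate stats.
SELECT TOP (10)
    OBJECT_NAME(st.objectid, st.dbid) AS proc_name,
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)   AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE OBJECT_NAME(st.objectid, st.dbid) = 'MyBigProc'  -- hypothetical name
ORDER BY qs.total_elapsed_time DESC;
```

The `query_plan` column returns the XML plan, which SSMS can open graphically to see which operators dominate.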
That kind of behaviour often stems from the fact that you get different execution plans from ADO.NET and SSMS. That's because an execution plan must factor in not only the SQL itself but also the session context in the form of ANSI_NULLS, ARITHABORT, and several other SET options. If these settings are not the same, a plan compiled in one environment cannot be reused in the other.
With the default settings, everything is the same in SSMS and ADO.NET, except ARITHABORT. This is set to OFF in ADO.NET and ON in SSMS, so to get the same cached query plan as your application you need to set ARITHABORT OFF in SSMS. Now you should see the same performance in SSMS as in the call from the application. See more background info in this nice blog post: http://www.sommarskog.se/query-plan-mysteries.html
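To reproduce the application's plan in SSMS, run the procedure under the application's setting (the procedure name and parameter are placeholders):

```sql
SET ARITHABORT OFF;  -- match ADO.NET's default so SSMS hits the same cached plan
EXEC dbo.MyBigProc @SomeParam = 42;  -- hypothetical call
SET ARITHABORT ON;   -- restore the SSMS default afterwards
```

If the procedure now takes the same 5 seconds in SSMS, you have confirmed it is a plan issue (typically parameter sniffing) rather than anything ADO.NET-specific.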
If your query returns a lot of data, there is another factor: by default SSMS reads all of that data and displays it before reporting the total query time. How fast it reads depends on where you run SSMS, locally on the server or remotely. In the remote case the data has to cross the network, which is usually slower than a local call. Measuring that transfer time is normally fine, because your application pays the same cost. However, SSMS also renders the data, and that can take far longer than actually loading it. To exclude rendering time, disable the data display in SSMS via Tools -> Options -> Query Results -> SQL Server -> Results to Grid -> "Discard results after execution".
If you still get different behavior, capture the execution plans via SQL Profiler and compare them.
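Besides Profiler, the SET options each cached plan was compiled under can be compared directly via the plan-attribute DMF (again DMV-based; no names assumed beyond the system objects):

```sql
-- Show the set_options each cached plan was compiled with.
SELECT qs.plan_handle, pa.attribute, pa.value
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_plan_attributes(qs.plan_handle) AS pa
WHERE pa.attribute = 'set_options';
```

`set_options` is a bitmask of the session SET options; two plans for the same query with different values were compiled under different settings, which is exactly the SSMS-vs-ADO.NET situation described above.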
I had a similar issue last year. Try to enable arithabort in your stored procedure: SET ARITHABORT ON
Excerpt from MSDN:
The default ARITHABORT setting for SQL Server Management Studio is ON. Client applications setting ARITHABORT to OFF can receive different query plans making it difficult to troubleshoot poorly performing queries. That is, the same query can execute fast in management studio but slow in the application. When troubleshooting queries with Management Studio always match the client ARITHABORT setting.
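One way to apply that suggestion, sketched against a hypothetical procedure (note this forces the setting for every caller; diagnosing the underlying parameter-sniffing problem is often the better long-term fix):

```sql
ALTER PROCEDURE dbo.MyBigProc @SomeParam int
AS
BEGIN
    SET ARITHABORT ON;  -- align with the SSMS default
    -- ... original body of the procedure ...
END;
```

Alternatives worth considering instead of pinning a SET option include adding `OPTION (RECOMPILE)` to the offending query or using `OPTIMIZE FOR` hints, both of which address a bad cached plan more directly.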