This is a question I asked on another forum which received some decent answers, but I wanted to see if anyone here has more insight.
The problem: one of the pages in your web application times out when it gets to a stored procedure call, so you use Sql Profiler, or your application trace logs, to find the query, and you paste it into Management Studio to figure out why it's running slow. But you run it from there and it just blazes along, returning in less than a second each time.
My particular case was ASP.NET 2.0 and Sql Server 2005, but I think the problem could apply to any RDBMS.
If the query doesn't return any data within the configured command timeout (30 seconds by default in ADO.NET), the application cancels the query and raises the familiar error: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding."
When a query first hits the server, a plan has to be compiled. To save time and resources later, that execution plan is cached, and it is built around the estimated number of rows the parameter values seen at compile time will cause your code to process and return.
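To make that concrete, here is a minimal sketch of parameter sniffing (the table and procedure names are made up for illustration):

create procedure dbo.GetOrdersByCustomer
    @CustomerId int
as
begin
    select OrderId, OrderDate, Total
    from dbo.Orders
    where CustomerId = @CustomerId;
end
go

-- The first call compiles and caches a plan sniffed for @CustomerId = 42.
exec dbo.GetOrdersByCustomer @CustomerId = 42;

-- Later calls reuse that cached plan, even if this value would
-- warrant a very different plan (e.g. a scan instead of a seek).
exec dbo.GetOrdersByCustomer @CustomerId = 9999;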
This is what I've learned so far from my research.
.NET sends connection settings that are not the same as what you get when you log in to Management Studio. Here is what you see if you sniff the connection with Sql Profiler:
-- network protocol: TCP/IP
set quoted_identifier off
set arithabort off
set numeric_roundabort off
set ansi_warnings on
set ansi_padding on
set ansi_nulls off
set concat_null_yields_null on
set cursor_close_on_commit off
set implicit_transactions off
set language us_english
set dateformat mdy
set datefirst 7
set transaction isolation level read committed
I now paste those settings in above every query I run when logged in to Sql Server, to make sure the settings are the same.
In this case, I tried each setting individually, disconnecting and reconnecting between tests, and found that changing arithabort from off to on cut the problem query from 90 seconds to 1 second.
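In other words, with everything else held constant, the difference boiled down to something like this (the procedure name and parameter are stand-ins for my real report):

set arithabort off;  -- matches the settings the .NET connection sends
exec dbo.GetMonsterReport @StartDate = '20080101';  -- ~90 seconds

set arithabort on;   -- matches Management Studio's default
exec dbo.GetMonsterReport @StartDate = '20080101';  -- ~1 second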
The most probable explanation is related to parameter sniffing, the technique Sql Server uses to pick what it thinks is the most efficient query plan based on the parameter values it sees at compile time. When you change one of the connection settings, the query optimizer can end up with a different plan, and in this case it apparently chose a bad one.
But I'm not totally convinced of this. I have tried comparing the actual query plans after changing this setting and I have yet to see the diff show any changes.
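One detail that may explain it, though I haven't fully verified this: the plan cache keys on the session's SET options, so the 'off' and 'on' connections should each get their own cache entry, sniffed from whatever parameters their first caller happened to pass. You can inspect the set_options attribute on cached plans with something like this (the procedure name is again a stand-in):

-- List cached plans for the procedure together with the SET options
-- they were compiled under ('set_options' is part of the cache key).
select st.text, cp.usecounts, pa.value as set_options
from sys.dm_exec_cached_plans cp
cross apply sys.dm_exec_sql_text(cp.plan_handle) st
cross apply sys.dm_exec_plan_attributes(cp.plan_handle) pa
where pa.attribute = 'set_options'
  and st.text like '%GetMonsterReport%';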
Is there something else about the arithabort setting that might cause a query to run slowly in some cases?
The solution seemed simple: just put set arithabort on at the top of the stored procedure. But this could lead to the opposite problem: change the query parameters and suddenly it runs faster with 'off' than 'on'.
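For reference, that simple fix would look something like this (names are placeholders again):

create procedure dbo.GetMonsterReport
    @StartDate datetime
as
begin
    set arithabort on;  -- force the setting regardless of what the client sends
    -- ... the original report query ...
end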
For the time being I am running the procedure 'with recompile' to make sure the plan gets regenerated each time. That's OK for this particular report: it takes maybe a second to recompile, which isn't too noticeable on a report that takes 1-10 seconds to return (it's a monster).
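'with recompile' can go either on the procedure definition or on an individual call, e.g.:

-- Recompile on every execution (what I'm doing now):
create procedure dbo.GetMonsterReport
    @StartDate datetime
with recompile
as
begin
    -- ... report query ...
end
go

-- Or recompile just one call, leaving any cached plan alone:
exec dbo.GetMonsterReport @StartDate = '20080101' with recompile;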
But it's not an option for other queries that run much more frequently and need to return as quickly as possible, in just a few milliseconds.