My first post here, please be gentle. =)
I work for a company that inherited the maintenance of a bespoke system used by one of our customers. The previous developer (no longer with us) encrypted all the database objects (WITH ENCRYPTION).
The system has been plagued with various timeout issues well before we took ownership of it, and we want to get to the bottom of these.
The database is on SQL Server 2005 Express in production. We want to run SQL Server Profiler against it, but because the various objects are encrypted, most stored procedure calls etc. show up as '-- Encrypted Text'.
Not very useful. I've written a little console app in C# to decrypt all the database objects, which works perfectly as far as I can tell. It finds all the encrypted objects in the database and, for each one, decrypts it, removes the WITH ENCRYPTION clause, drops the original and recreates it using the new 'without encryption' text.
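For reference, the detection side of the tool boils down to a query like this (on 2005, sys.sql_modules returns a NULL definition for anything created WITH ENCRYPTION; this is just to illustrate the idea):

-- List encrypted modules: definition is NULL for WITH ENCRYPTION objects
SELECT o.name, o.type_desc
FROM sys.objects AS o
JOIN sys.sql_modules AS m ON m.object_id = o.object_id
WHERE m.definition IS NULL
  AND OBJECTPROPERTY(o.object_id, 'IsEncrypted') = 1;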
Some computed columns have to be dropped before the functions used in their definitions can be decrypted; the tool recreates those columns afterwards.
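In case anyone wants to reproduce that step, the dependent computed columns can be listed with something like the following (the LIKE filter is a crude dependency check, and dbo.MyFunction is a made-up name):

-- Find computed columns whose definitions reference a given function
SELECT OBJECT_NAME(cc.object_id) AS table_name, cc.name, cc.definition, cc.is_persisted
FROM sys.computed_columns AS cc
WHERE cc.definition LIKE '%dbo.MyFunction%';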
What I'm finding is that once everything is decrypted, I can't get into the system because the stored procedures etc. take far too long to run on their first call. Execution plans are being compiled for the first time, so some delay is understandable, but we're talking over a minute. After 30 seconds the command timeout is hit, so the plans never finish compiling.
I also have the same issue if I drop and recreate the database objects using their original scripts (keeping the WITH ENCRYPTION clause in).
So there's some consistency there. However, what absolutely mystifies me is that if I drop the execution plans from the original copy of the database (which was restored from a backup of the production database), the same stored procedures are much faster: around 10 seconds for the first call. As far as I can tell, the stored procedures, functions etc. are identical.
From my testing, I don't think it's any particular procedure or function that is causing the problem. The delay seems cumulative: the more objects I drop and recreate, the slower things get.
I've taken a few random stabs in the dark, rebuilding indexes and updating stats - this has had no effect at all.
We could write something to execute all 540 functions, triggers, stored procedures etc. to pre-empt the first real call from a user, but once SQL Server is restarted (and our client does restart their server from time to time) the execution plans will be dropped and we'd need to run the same tool again. That doesn't seem a viable option to me (neither does increasing the CommandTimeout property); I want to know why I'm seeing this behaviour.
I've been using sys.dm_exec_query_plan and sys.dm_exec_sql_text to look at the execution plans, and using DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE as part of my testing.
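For anyone following along, the plan inspection is roughly this (standard 2005 DMVs, nothing exotic):

-- Cached plans with their source text and XML showplan
SELECT st.text, cp.objtype, cp.usecounts, qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp;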
I'm totally stumped, please help me before I jump out the office window.
Thanks in advance,
Andy.
--EDIT--
I don't quite know how I missed it, but Activity Monitor is showing a session being blocked by a recompile of a table-valued function. The compile takes far too long and the blocked query hits the timeout.
I don't understand why, in the original version of the database (restored from a backup taken at the customer site), the compilation takes around 10 seconds, but after dropping and recreating these objects in the same database, the same table-valued function takes almost a minute to compile.
I've tried truncating the log, which didn't have any effect. I still need to have a look at the file sizes.
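For completeness, this is the sort of query that surfaced the blocking (sys.dm_exec_requests is available on 2005):

-- Requests that are currently blocked, and what they're waiting on
SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, r.command
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;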
-- Another edit --
The TVF returns a temporary table, and has 12 outer joins in the query, all on either sys.server_principals or sys.database_role_members.
I seem to remember reading something about recompiles and temporary tables, which I'll have to check again...
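One thing I can try in the meantime is to quantify the compile cost by snapshotting the optimizer counters around the first call (sys.dm_exec_query_optimizer_info exists on 2005; run this before and after and diff the numbers):

-- Server-wide optimizer counters; 'elapsed time' is the average
-- optimization time per occurrence
SELECT counter, occurrence, value
FROM sys.dm_exec_query_optimizer_info
WHERE counter IN ('optimizations', 'elapsed time');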
Encrypting the objects makes testing much more difficult. It also makes the code harder to maintain, because SQL tends to get less readable as it grows more complex. Versioning can be a problem too: even if the script is stored in the Visual Studio project using Git, anyone can alter the procedure at any time, and no one may notice.
Use of stored procedures can reduce network traffic between clients and servers, because the commands are executed as a single batch of code: only the call to execute the procedure is sent over the network, instead of every line of code being sent individually.
DROP PROCEDURE removes the definition of one or more existing procedures. In SQL Server you drop a procedure by name alone (there is no overloading by argument list), and the caller needs ALTER permission on the schema the procedure belongs to, or CONTROL permission on the procedure itself.
You said yourself that (computed) columns were dropped. Is it possible that other stuff was manipulated in the tables? If so, you will probably want to reindex your tables (which will update the tables' statistics as well) using a command such as:
EXEC sp_MSforeachtable @command1 = 'DBCC DBREINDEX(''?'')'
...though it sounds like you've already done something like this. Either way, I recommend doing it after making such a big change to all of those objects.
Third recommendation: while you are waiting for your procs to execute, run sp_who2 against the database to make sure nothing is blocking your queries. It's quite possible you have some sort of long-lived transaction happening that you haven't accounted for; a minimal sketch of that check follows.
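EXEC sp_who2;   -- look at the BlkBy column for blocking chains
DBCC OPENTRAN;  -- reports the oldest active transaction in the current database, if any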
Fourth recommendation: make sure your server has enough memory, and make sure your transaction log files and data files aren't auto-growing after all of those big index and object updates. Autogrow can take FOREVER, especially on the under-specced hardware that often runs SQL Express.
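A quick way to sanity-check the growth settings (sys.database_files reports size in 8 KB pages):

-- Percent growth on a big file, or a tiny fixed increment, can both
-- cause long autogrow stalls
SELECT name, type_desc,
       size * 8 / 1024 AS size_mb,
       growth, is_percent_growth
FROM sys.database_files;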
Fifth recommendation: run a SQL Server Profiler trace against the database and look at which statements are starting and which are timing out. "Zoom in" on those and analyze them piece by piece to see what's going on. This will likely just take a lot of good ol' hard work to fully understand.
In summary, the act of dropping and recreating procs itself shouldn't cause this slowdown if the statistics and indexes they were initially built against are sufficiently similar to what they are now. It's likely that you will find that there's Something Else happening which isn't necessarily directly related to changing the proc definitions themselves.
Another shot in the dark: were the computed columns you had to drop originally persisted (and not persisted after recreation), or vice versa?
If the functions called in the computation are complex or expensive, persisted columns are very advantageous and might be responsible for the behavior you are seeing.
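That's easy to verify by comparing sys.computed_columns.is_persisted before and after the recreation, e.g.:

-- Check whether the recreated computed columns are still PERSISTED
SELECT OBJECT_NAME(object_id) AS table_name, name, is_persisted
FROM sys.computed_columns
ORDER BY table_name, name;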