8/19/2014 - This question is still open. It now seems this is happening even though I have not run the stored procedure. I don't have enough experience yet to know what kinds of things will cause a SQL Server to have slowdowns like this.
I have a problem that seems weird to me. Two things that seem unrelated have correlated twice now.
First, I have a stored procedure in SQL Server 2008 Express R2 that is wrapped in a transaction (code below). Inside the transaction, records older than 2 months are copied from the ErrorLogs table into the ErrorLogsArchive table and then deleted from ErrorLogs. The database is on a dedicated hosting server.
Second, I have a website page that loads three HTML tables via AJAX. The code-behind reads from the same database that the log tables are on, but none of the DB tables overlap. The queries are executed via LINQ-to-Entities using Entity Framework 6. They are somewhat complex queries, but under normal circumstances each of the HTML tables takes less than 5 seconds to load once the AJAX call is kicked off. The website is on a shared hosting server.
NO OTHER QUERIES ON THE SITE ARE SLOW. Only these 2.
For some reason, after I execute the stored procedure, the AJAX queries on the webpage run so long that they time out and no data loads. I had to increase the timeout to 180 seconds for them to even have a chance to load before timing out.
The only way I am able to get the queries back to "normal" speed is to STOP the SQL Server service and START it again. Afterwards, the webpage loads quickly.
My concern is that the stored procedure I created is causing something to get hung up in SQL Server, or a memory issue, or something like that.
Here is the stored procedure:
USE [Main]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author:      Jason Watts
-- Create date: 7/30/2014
-- Description: Archives every record in ErrorLogs
--              older than 2 months into the
--              ErrorLogsArchive table.
-- =============================================
ALTER PROCEDURE [dbo].[ArchiveErrorLogs]
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Init variables and error code
    DECLARE @TransactionCountOnEntry int;
    DECLARE @ErrorCode int;
    SELECT @ErrorCode = @@ERROR;

    -- Begin transaction processing for rollback option
    IF @ErrorCode = 0
    BEGIN
        SELECT @TransactionCountOnEntry = @@TRANCOUNT;
        BEGIN TRANSACTION;
    END

    -- Get the archive cutoff date/time (2 months ago)
    DECLARE @ArchiveDT datetime;
    SELECT @ArchiveDT = DATEADD(m, -2, GETDATE());

    -- Copy the records from ErrorLogs to ErrorLogsArchive
    IF @ErrorCode = 0
    BEGIN
        INSERT INTO ErrorLogsArchive (Area, Library, Method, ErrorLevel, [Message], Stacktrace, LogDate)
        SELECT Area, Library, Method, ErrorLevel, [Message], Stacktrace, LogDate
        FROM ErrorLogs
        WHERE LogDate <= @ArchiveDT;
        SELECT @ErrorCode = @@ERROR;
    END

    -- Delete the copied records from ErrorLogs
    IF @ErrorCode = 0
    BEGIN
        DELETE FROM ErrorLogs
        WHERE LogDate <= @ArchiveDT;
        SELECT @ErrorCode = @@ERROR;
    END

    -- Commit or roll back the transaction
    IF @@TRANCOUNT > @TransactionCountOnEntry
    BEGIN
        IF @ErrorCode = 0
            COMMIT TRANSACTION;
        ELSE
            ROLLBACK TRANSACTION;
    END

    RETURN @ErrorCode;
END
The original log table had 1.5 million records before I ran the procedure this morning. On the first run I archived records older than 3 months, and it moved 700,000 records successfully, though it took about 5 1/2 minutes. Then I lowered the cutoff to 2 months, and it moved another 200,000 records in about 1 1/2 minutes.
After each run, the webpage table loads slowed to a crawl, then returned to normal after restarting the SQL Server service. I don't understand what could be causing the slowdown. If it had not happened twice, I would not have thought the two were related.
Update:
As I mentioned above, the same two queries are now running slow even though I have not run the archive procedure. I feel there is some simple issue here; I just don't know what it could be.
Read up on ghost cleanup in depth.
In short, DELETE operations do NOT physically delete/remove rows. The "deleted" rows are marked for deletion (ghosted), and then the space is gradually reclaimed, 1-10 pages at a time, by a background cleanup task. This is an "optimization" of SQL Server.
This deferred garbage collection is complicated further by the presence of cached execution plans (read up), which try to "hold on" to the deleted rows. The result is a problematic mess that resembles, but isn't quite, a deadlock.
You can clear cached execution plans using DBCC FREEPROCCACHE.
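For example, a minimal sketch (requires ALTER SERVER STATE permission; clearing the whole plan cache forces every query on the instance to recompile, so use it sparingly in production — the LIKE filter below is a hypothetical way to find the slow query's plan):

```sql
-- Clear the entire plan cache for the instance
DBCC FREEPROCCACHE;

-- Or find the cached plan for a specific query via the plan-cache DMVs...
SELECT cp.plan_handle, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%ErrorLogs%';  -- hypothetical filter on the query text

-- ...and remove only that plan by passing its handle:
-- DBCC FREEPROCCACHE (0x06000500A1B2C3...);  -- plan_handle from the query above
```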
To make matters worse, there is a known bug where SQL Server fails to properly handle ghosted records numbering in the millions:
ghost_record_count values keep increasing in SQL Server 2008 R2 or in SQL Server 2008
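To see whether ghost records are actually piling up on the ErrorLogs table, you can check the ghost_record_count column returned by sys.dm_db_index_physical_stats. A sketch, assuming the database and table names from the question:

```sql
-- Ghost record counts per index on ErrorLogs. The 'DETAILED' scan mode is
-- needed to populate ghost_record_count (it reads every page, so it is not free).
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.index_id,
       ips.ghost_record_count,
       ips.version_ghost_record_count
FROM sys.dm_db_index_physical_stats(
         DB_ID('Main'), OBJECT_ID('dbo.ErrorLogs'), NULL, NULL, 'DETAILED') AS ips;
```

A large, non-shrinking ghost_record_count after the archive run would be consistent with the cleanup-lag explanation above.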
I suspect that, in addition to flushing all execution plans, a freshly restarted SQL Server places higher priority on immediate cleanup of ghost records.
From SQLMag, "When a Delete Isn't Really a Delete":
So when do the deleted records get removed? The act of deleting a record doesn’t put that record on the queue of work for the ghost cleanup task. It’s not until the database page on which the deleted record resides is read again by the SQL Server Storage Engine that the page will be queued up for definite ghost record removal. The ghost cleanup task will look at its queue of things to do, and if it is empty, it will process part of a database on an instance to see if the database has ghost records to be removed. The only way to make sure that the space used by a ghost record is freed up quickly is to do something such as a table scan or an index rebuild so that all pages in the table are accessed by the Storage Engine and ghost records are queued for removal.
It seems the tables affected were on the same pages as the log table.
I gather from this that there are options. A table scan may help.
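Per the excerpt above, touching every page queues the ghost records for removal. A sketch of the two options it mentions (table name from the question; the rebuild takes locks and should be run off-peak):

```sql
-- Option 1: force a full scan so the Storage Engine reads every page
-- (INDEX(0) forces a clustered index or heap scan)
SELECT COUNT(*) FROM dbo.ErrorLogs WITH (INDEX(0));

-- Option 2: rebuild the table's indexes, which rewrites the pages outright
ALTER INDEX ALL ON dbo.ErrorLogs REBUILD;
```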