I have a LOGGING
database and it is quite big - 400 GB. It has millions of rows.
I just ran a delete
statement which took 2.5 hours and deleted probably millions of rows.
delete FROM [DB].[dbo].[table]
where [Level] not in ('info','error')
This is a simple recovery model database. But when I ran the above statement the log files grew to 800 GB and crashed the server. Why does the LOG file grow for a simple recovery model database?
How can I avoid this in the future?
Thanks for your time - RM
I bet you tried to run the whole delete in one transaction. Correct?
Once a transaction is complete, the log space can be reclaimed at the next checkpoint, even in SIMPLE recovery. But a single DELETE statement runs as one transaction, and every row it removes must stay fully logged until it commits so the statement can be rolled back. Since your transaction never completed, the log file grew until it crashed the server.
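If you want to watch this happen, the commands below (standard SQL Server views, not part of my blog script) show how full each log file is and why it cannot be truncated; the database name is just the one from my sample further down.
-- How full is each transaction log?
DBCC SQLPERF (LOGSPACE);
GO
-- Why can the log not be truncated right now?
-- ACTIVE_TRANSACTION means an open transaction, such as one giant DELETE,
-- is pinning the log even though the database is in SIMPLE recovery.
SELECT [name], recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE [name] = N'MATH';
GO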
Check out my blog entry on How to Delete Large Data.
http://craftydba.com/?p=3079
The key to the solution is the following: use SIMPLE recovery mode, DELETE in small batches, and take a FULL backup at the end of the purge. Then set whatever recovery model you want going forward.
Here is some sample code to help you on your way.
--
-- Delete in batches in SIMPLE recovery mode
--
-- Select correct db
USE [MATH]
GO
-- Set to simple mode
ALTER DATABASE [MATH] SET RECOVERY SIMPLE;
GO
-- Get count of records
SELECT COUNT(*) AS Total FROM [MATH].[dbo].[TBL_PRIMES];
GO
-- Delete in batches
DECLARE @VAR_ROWS INT = 1;
WHILE (@VAR_ROWS > 0)
BEGIN
    -- Each small batch is its own short transaction
    DELETE TOP (10000) FROM [MATH].[dbo].[TBL_PRIMES];
    SET @VAR_ROWS = @@ROWCOUNT;

    -- In SIMPLE recovery, the checkpoint lets the log space
    -- used by this batch be reused instead of piling up
    CHECKPOINT;
END;
GO
-- Set to full mode
ALTER DATABASE [MATH] SET RECOVERY FULL;
GO
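The one step the script above leaves out is the FULL backup at the end of the purge; something along these lines would do it (the disk path is just a placeholder for your own backup location).
-- Take a FULL backup after the purge so the new log chain starts clean
-- (the disk path below is only a placeholder)
BACKUP DATABASE [MATH]
TO DISK = N'C:\BACKUPS\MATH_AFTER_PURGE.BAK'
WITH INIT, STATS = 10;
GO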
Last but not least, if the amount of data remaining after the delete is really small, it might be quicker to do the following.
1 - SELECT * INTO [Temp Table] WHERE (clause = small data).
2 - DROP [Original Table].
3 - Rename [Temp Table] to [Original Table].
4 - Add any constraints or missing objects.
The DROP TABLE action deallocates the table's pages rather than logging every row being removed, so it generates very little log.
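Here is a rough sketch of that approach using the table and [Level] filter from your question; the _keep name is just a placeholder, and you would re-create your own indexes and constraints afterward.
-- 1 - Copy only the rows worth keeping (SELECT INTO is minimally logged
--     under the SIMPLE recovery model)
SELECT *
INTO [dbo].[table_keep]
FROM [dbo].[table]
WHERE [Level] IN ('info', 'error');
GO
-- 2 - Drop the original table (deallocates pages, does not log each row)
DROP TABLE [dbo].[table];
GO
-- 3 - Rename the copy back to the original name
EXEC sp_rename 'dbo.table_keep', 'table';
GO
-- 4 - Re-add any indexes, constraints, triggers, and permissions by hand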
Sincerely,
John