 

How do I speed up deletes from a large database table?

Here's the problem I am trying to solve: I recently completed a data-layer redesign that allows me to load-balance my database across multiple shards. To keep the shards balanced, I need to be able to migrate data from one shard to another, which involves copying from shard A to shard B and then deleting the records from shard A. But I have several tables that are very big and have many foreign keys pointing to them, so deleting a single record from one of those tables can take more than a second.

In some cases I need to delete millions of records from the tables, and it just takes too long to be practical.

Disabling foreign keys is not an option. Deleting rows in large batches is also not an option, because this is a production application and large deletes lock too many resources, causing failures. I'm using SQL Server, and I know about partitioned tables, but the restrictions on partitioning (and the license fees for Enterprise Edition) make it a non-starter.

When I began working on this problem I thought the hard part would be writing the algorithm that figures out how to delete rows from the leaf level up to the top of the data model, so that no foreign key constraints get violated along the way. But solving that problem did me no good since it takes weeks to delete records that need to disappear overnight.
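
(For illustration only: a leaf-first delete over a made-up parent/child pair, where OrderItems has a foreign key to Orders; none of these names come from my actual schema.)

DECLARE @ShardKey int
SET @ShardKey = 42  -- hypothetical shard being drained

BEGIN TRANSACTION

-- Child rows first, so the parent delete violates no foreign keys.
DELETE oi
FROM dbo.OrderItems AS oi
JOIN dbo.Orders AS o ON o.OrderId = oi.OrderId
WHERE o.ShardKey = @ShardKey

DELETE FROM dbo.Orders
WHERE ShardKey = @ShardKey

COMMIT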

I already built in a way to mark data as virtually deleted, so as far as the application is concerned, the data is gone, but I'm still dealing with large data files, large backups, and slower queries because of the sheer size of the tables.

Any ideas? I have already read older related posts here and found nothing that would help.

asked Jul 21 '09 by Eric Z Beard


People also ask

What is the fastest way to delete the data of a table?

TRUNCATE TABLE is functionally identical to a DELETE statement with no WHERE clause: both remove all rows in the table. But TRUNCATE TABLE is faster and uses fewer system and transaction log resources than DELETE.
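
As a quick illustration of the difference (the table name below is a placeholder, not from the question):

DELETE FROM dbo.StagingRows      -- fully logged, removes rows one at a time
TRUNCATE TABLE dbo.StagingRows   -- deallocates data pages, minimally logged

Note, however, that TRUNCATE TABLE cannot be used on a table that is referenced by a foreign key constraint, which rules it out for the heavily-referenced tables described in the question.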

What is the best method to delete from a table with a huge amount of data, say 100k records?

If you want to delete most of the records in a table with a large number of rows but keep some of them, you can copy the rows you need into a similar table, truncate the main table, and then insert the saved rows back into the main table.
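
A minimal sketch of that approach, assuming a made-up table dbo.Events with no IDENTITY column and no incoming foreign keys (either would need extra handling):

SELECT * INTO #KeepRows
FROM dbo.Events
WHERE EventDate >= '20090101'   -- the rows you want to keep

TRUNCATE TABLE dbo.Events       -- fast, minimally logged

INSERT INTO dbo.Events
SELECT * FROM #KeepRows         -- put the kept rows back

DROP TABLE #KeepRows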

How do you delete 10000 records in SQL?

If you need to remove 10 million rows and have 1 GB of log space available, use DELETE TOP (10000) FROM dbo.myTable (with your WHERE clause) and keep running it until there are no more rows to delete.
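
A minimal sketch of that loop (dbo.myTable and the date filter are placeholders for your own table and WHERE clause):

WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM dbo.myTable
    WHERE CreatedDate < '20020101'

    IF @@ROWCOUNT = 0 BREAK   -- nothing left to delete
END

Each iteration commits as its own transaction, so the log only has to hold one batch at a time (assuming the log is being truncated by regular log backups or the simple recovery model).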


1 Answer

Please see: Optimizing Delete on SQL Server

This MS support article might be of interest: How to resolve blocking problems that are caused by lock escalation in SQL Server:

Break up large batch operations into several smaller operations. For example, suppose you ran the following query to remove several hundred thousand old records from an audit table, and then you found that it caused a lock escalation that blocked other users:

DELETE FROM LogMessages WHERE LogDate < '2/1/2002'     

By removing these records a few hundred at a time, you can dramatically reduce the number of locks that accumulate per transaction and prevent lock escalation. For example:

SET ROWCOUNT 500
delete_more:
     DELETE FROM LogMessages WHERE LogDate < '2/1/2002'
IF @@ROWCOUNT > 0 GOTO delete_more
SET ROWCOUNT 0
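
One caveat: SET ROWCOUNT is deprecated for affecting INSERT, UPDATE and DELETE statements in newer versions of SQL Server; on SQL Server 2005 and later, the same batching is usually written as a DELETE TOP (500) inside a WHILE loop, as in the sketch earlier on this page.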

Reduce the query's lock footprint by making the query as efficient as possible. Large scans or large numbers of Bookmark Lookups may increase the chance of lock escalation; they also increase the chance of deadlocks, and generally hurt concurrency and performance.
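
For the audit-table example above, a supporting index on the filter column would let each batch do a range seek instead of a scan; a minimal sketch (the index name is made up):

CREATE INDEX IX_LogMessages_LogDate
ON LogMessages (LogDate)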

answered Sep 19 '22 by Mitch Wheat