
Minimally Logged Insert Into

I have an INSERT statement that is eating a hell of a lot of log space, so much so that the hard drive is actually filling up before the statement completes.

The thing is, I really don't need this to be logged as it is only an intermediate data upload step.

For argument's sake, let's say I have:

  • Table A: Initial upload table (populated using bcp, so no logging problems)
  • Table B: Populated using INSERT INTO B from A (roughly as sketched below)
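
Roughly, the two steps look like this (database name, file path, and column names are just placeholders):

    -- Step 1: bulk load the flat file into staging table A (bcp, minimally logged)
    bcp MyDb.dbo.A in "C:\uploads\data.dat" -T -c

    -- Step 2: the INSERT that is filling up the log
    INSERT INTO dbo.B (col1, col2, col3)
    SELECT col1, col2, col3
    FROM dbo.A;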

Is there a way that I can copy between A and B without anything being written to the log?

P.S. I'm using SQL Server 2008 with simple recovery model.

asked Jun 07 '11 by Karl

People also ask

Is select into minimally logged?

There are no minimally logged operations in the FULL recovery model. Because 'SELECT … INTO' creates the destination table itself, it exclusively owns that table and is quicker than the equivalent 'INSERT … SELECT' into an existing table.
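
As a rough sketch (table and column names are illustrative), the minimally logged form creates its own destination table:

    -- SELECT ... INTO creates dbo.B_new itself and, under the SIMPLE or
    -- BULK_LOGGED recovery model, logs mainly extent allocations rather
    -- than every inserted row
    SELECT col1, col2, col3
    INTO dbo.B_new
    FROM dbo.A;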

What is minimally logged in SQL Server?

Minimal logging means logging only the limited information required to roll back the transaction. Minimal logging doesn't support point-in-time recovery (PITR): you cannot restore a transaction-log backup to a specific point in time if that backup contains any bulk-logged transactions.
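
For example (database name is a placeholder), the recovery model can be checked and, if the database is in FULL, switched for the duration of a load:

    -- Check the current recovery model
    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE name = 'MyDb';

    -- Switch to BULK_LOGGED for the load, then back again afterwards
    ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;
    -- ... perform the bulk load here ...
    ALTER DATABASE MyDb SET RECOVERY FULL;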

What is the difference between insert and bulk insert?

With BULK INSERT, only extent allocations are logged instead of every row being inserted, which gives much better performance than a plain INSERT. The real advantage is the reduced amount of data written to the transaction log.
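
A minimal BULK INSERT sketch, assuming a comma-delimited file (table name, path, and options are placeholders):

    -- Reads the file server-side; with TABLOCK on a heap this can be
    -- minimally logged under the SIMPLE or BULK_LOGGED recovery model
    BULK INSERT dbo.A
    FROM 'C:\uploads\data.dat'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);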

What is TABLOCK in SQL Server?

Using TABLOCK will reduce concurrency but will immediately take a table lock on the target table. As long as you can guarantee that just one session will insert into the table this will avoid unnecessary row or page locks and will prevent lock escalation.
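
For example, the hint goes on the target of the INSERT (table and column names are illustrative); on SQL Server 2008, TABLOCK on a heap target is also one of the prerequisites for a minimally logged INSERT ... SELECT:

    -- Take a table lock on the target up front instead of escalating
    -- from row/page locks
    INSERT INTO dbo.B WITH (TABLOCK) (col1, col2, col3)
    SELECT col1, col2, col3
    FROM dbo.A;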


1 Answer

From Louis Davidson, Microsoft MVP:

There is no way to insert without logging at all. SELECT INTO is the best way to minimize logging in T-SQL; with SSIS you can get the same sort of light logging by using Bulk Insert.

Given your requirements, I would probably use SSIS: drop all constraints, especially unique and primary key ones, load the data in, then add the constraints back. I load about 100 GB in just over an hour this way, with fairly minimal overhead. I use the BULK_LOGGED recovery model, which just logs the existence of new extents during the load, and you can remove them later.

The key is to start with bare-bones tables, and it just screams. Building each index once afterwards leaves you with no indexes to maintain during the load, just the one index build per index.

If you don't want to use SSIS, the point still applies: drop all of your constraints and use the BULK_LOGGED recovery model. This greatly reduces the logging done by INSERT INTO statements and should solve your issue.
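
A rough T-SQL outline of that approach (database, constraint, index, and column names are placeholders):

    -- 1. Only needed when starting from FULL; SIMPLE and BULK_LOGGED
    --    both allow minimal logging
    ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;

    -- 2. Drop constraints and indexes so the load goes into a bare heap
    ALTER TABLE dbo.B DROP CONSTRAINT PK_B;
    DROP INDEX IX_B_col2 ON dbo.B;

    -- 3. Load with a table lock to qualify for minimal logging
    INSERT INTO dbo.B WITH (TABLOCK)
    SELECT col1, col2, col3 FROM dbo.A;

    -- 4. Build each index and constraint once, after the data is in
    ALTER TABLE dbo.B ADD CONSTRAINT PK_B PRIMARY KEY (col1);
    CREATE INDEX IX_B_col2 ON dbo.B (col2);

    -- 5. Return to the original recovery model (SIMPLE in the question)
    ALTER DATABASE MyDb SET RECOVERY SIMPLE;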

http://msdn.microsoft.com/en-us/library/ms191244.aspx

answered Oct 21 '22 by IAmTimCorey