
Huge transaction log with SQL Server database in simple recovery mode

Tags:

sql-server

People also ask

Can you take a transaction log backup in the simple recovery model?

With the simple recovery model, you can only perform full and differential backups. Because the simple recovery model doesn't support using transaction log backups, you can only restore a database to the point-in-time when a full or differential backup has completed.
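
For reference, a minimal sketch of what the simple recovery model does and does not allow (the database name and backup paths are placeholders):

BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase_full.bak';
BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase_diff.bak' WITH DIFFERENTIAL;
-- BACKUP LOG MyDatabase TO DISK = ... would fail here, because the simple
-- recovery model does not support transaction log backups.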

Why is my SQL transaction log so big?

Large database transactions, such as importing large amounts of data, can lead to a large transaction log file. Transaction log backups not happening fast enough causes the SQL log file to become huge. SQL log files also enlarge due to incomplete replication or availability group synchronization.

How do I reduce the size of my SQL Server transaction log?

To shrink a data or log file: in Object Explorer, connect to an instance of the SQL Server Database Engine and expand that instance. Expand Databases, right-click the database that you want to shrink, point to Tasks, point to Shrink, and then select Files.


It means you once had a single transaction that lasted so long that it forced the log to grow to 410GB. The log cannot be reused while there is an active transaction, because the rollback information cannot be erased. An example would be if someone opens a query window in SSMS, starts a transaction, updates a record, and then goes on vacation: the transaction stays active and forces the log to grow until it is eventually committed or rolled back. When the transaction finally ends, the used space can be reclaimed, leaving a huge, mostly empty log file.

Another scenario is if you had about 200GB of data updated in a single transaction. The log stores the before and after images of the changes, thus consuming twice the space, and it cannot be reused, again because it is all one single transaction.
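
To see whether an open transaction is currently pinning the log, a quick check (a sketch; MyDatabase is a placeholder name):

USE MyDatabase;
DBCC OPENTRAN;   -- reports the oldest active transaction in the current database, with its SPID and start time

-- Or list all sessions that have an open transaction, oldest first:
SELECT st.session_id, at.transaction_id, at.name, at.transaction_begin_time
FROM sys.dm_tran_active_transactions AS at
JOIN sys.dm_tran_session_transactions AS st ON st.transaction_id = at.transaction_id
ORDER BY at.transaction_begin_time;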

Update

I neglected to mention Replication, which is another factor that can prevent log truncation. So are Mirroring and distributed transactions (technically a distributed transaction is the same as an 'active transaction', but the DTC implication makes it a distinct case). The complete list, with explanations, is at Factors That Can Delay Log Truncation.
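
While investigating these factors, a quick way to see how much of each log is actually in use is DBCC SQLPERF (no database-specific assumptions needed):

DBCC SQLPERF (LOGSPACE);
-- returns one row per database with Log Size (MB) and Log Space Used (%)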


You're missing an argument in dbcc shrinkfile:

dbcc shrinkfile (MyDatabase_log, 20000, TRUNCATEONLY)

NOTRUNCATE moves allocated pages to the front of the file without releasing the freed space to the operating system; TRUNCATEONLY releases the unallocated space at the end of the file without moving any pages. So if you do a NOTRUNCATE pass followed by a TRUNCATEONLY pass, you get one slimmed-down log.
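
A sketch of that two-pass shrink, assuming the log's logical name is MyDatabase_log and a 20000 MB target (check the logical name first with SELECT name, type_desc FROM sys.database_files):

USE MyDatabase;
DBCC SHRINKFILE (MyDatabase_log, 20000, NOTRUNCATE);   -- compact the used portion toward the front of the file
DBCC SHRINKFILE (MyDatabase_log, 20000, TRUNCATEONLY); -- release the free space at the end of the file to the OS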


If you have only one mdf file and one log file, perhaps the simplest way is to detach the database, rename the log file, and reattach the database. SQL Server will create a new log file, and after that your huge old log file can be safely deleted.

This will not work, though, if you have multiple data files.
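
A hedged sketch of that sequence, assuming a single data file at C:\Data\MyDatabase.mdf (names and paths are placeholders):

USE master;
EXEC sp_detach_db @dbname = N'MyDatabase';
-- rename or move the old .ldf file in the file system here, then:
CREATE DATABASE MyDatabase
ON (FILENAME = N'C:\Data\MyDatabase.mdf')
FOR ATTACH_REBUILD_LOG;   -- SQL Server builds a fresh, minimally sized log file

Note that ATTACH_REBUILD_LOG requires the database to have been shut down cleanly before the detach.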


Replication Publisher? Could this be the reason for the huge transaction log?
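
One quick check, assuming the database is named my_db: sys.databases exposes flags that show whether a database is published for replication.

SELECT name, is_published, is_merge_published, is_subscribed
FROM sys.databases
WHERE name = 'my_db';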


As said in other answers, active transactions and replication are typical causes of this problem.
Another, less visible one is Change Data Capture (CDC).

I had a similar problem recently, and the procedure that allowed me to free the log was as follows (a combined sketch of the commands appears after this list):

  • Disable CDC: EXEC sys.sp_cdc_disable_db
  • Create a publication on an arbitrary table within the database in question
  • Delete this/all publication(s). EXEC sp_removedbreplication 'my_db' is a convenient way to do so.
  • Shrink the log as desired
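
For reference, a combined sketch of those steps (my_db and the log's logical name are placeholders; the throwaway publication in the second step can be created with the SSMS New Publication wizard):

USE my_db;
EXEC sys.sp_cdc_disable_db;            -- disable CDC
-- create a publication on any table here (e.g. via the SSMS New Publication wizard)
EXEC sp_removedbreplication 'my_db';   -- drop all publications and replication metadata
DBCC SHRINKFILE (my_db_log, 1000);     -- shrink the log to the desired size in MB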

I'm unsure why the creation and deletion of this dummy, never-used publication was necessary, but it was. Presumably the database had previous publications that were not disposed of properly (this is said to happen frequently with databases restored from a previous backup).

Another useful diagnostic is to check the log_reuse_wait_desc column in sys.databases for the offending database. This column read REPLICATION until I completed the above procedure.

SELECT log_reuse_wait_desc, * FROM sys.databases WHERE name = 'my_db'