
SQL Server - Dirty Reads Pros & Cons

Tags:

sql

sql-server

Why should or shouldn't I use dirty reads:

set transaction isolation level read uncommitted

in SQL Server?

Seibar asked Aug 21 '08


People also ask

What is a dirty read, and under what circumstances does it become useful?

A dirty read occurs when a transaction reads data that has not yet been committed. For example, suppose transaction 1 updates a row. Transaction 2 reads the updated row before transaction 1 commits the update.
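To make that sequence concrete, here is a minimal T-SQL sketch of the two sessions (the dbo.Accounts table and its columns are hypothetical):

    -- Session 1: modify a row but leave the transaction open
    BEGIN TRANSACTION;
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
    -- no COMMIT yet

    -- Session 2: under READ UNCOMMITTED this sees the uncommitted balance
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;  -- dirty read

    -- Session 1: roll back; session 2 has read a value that never existed
    ROLLBACK TRANSACTION;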

Why is a dirty read a problem?

One of the most common problems that occurs when running concurrent transactions is the dirty read problem. A dirty read occurs when one transaction is permitted to read data that is being modified by another, concurrently running transaction that has not yet committed.

How do you handle a dirty SQL read?

A dirty read occurs when a transaction is allowed to read a row that has been modified by another transaction that has not yet committed. It arises mainly when multiple transactions run at the same time against data that is not yet committed. The usual way to handle it is to keep (or restore) the default READ COMMITTED isolation level, as sketched below.
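A minimal sketch, assuming you want to prevent dirty reads rather than tolerate them (the database name is hypothetical):

    -- Restore the default isolation level; readers are then blocked by
    -- (rather than allowed past) locks held by uncommitted writers
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

    -- Or enable row versioning database-wide, so readers see the last
    -- committed version of a row instead of blocking
    ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;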

What do you understand by a dirty read in concurrency control?

A dirty read is a read of uncommitted data. If a particular row has been modified by another running application but not yet committed, and our application reads that same row with its uncommitted data, that state is what we call a dirty read.


2 Answers

From MSDN:

When this option is set, it is possible to read uncommitted or dirty data; values in the data can be changed and rows can appear or disappear in the data set before the end of the transaction.

Simply put, when you are using this isolation level and performing multiple queries on an active table as part of one transaction, there is no guarantee that the information returned to you in different parts of the transaction will remain the same. You could query the same data twice within one transaction and get different results (for instance, if another user updates that data in the midst of your transaction). This can obviously have severe ramifications for parts of your application that rely on data integrity.
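A small T-SQL sketch of that behaviour (dbo.Orders is a hypothetical table):

    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    BEGIN TRANSACTION;

    SELECT COUNT(*) FROM dbo.Orders;  -- say this returns 100

    -- meanwhile, another session inserts rows or rolls back an insert

    SELECT COUNT(*) FROM dbo.Orders;  -- may return a different count
                                      -- within the same transaction
    COMMIT TRANSACTION;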

Yaakov Ellis answered Oct 06 '22


Generally, when you need to run sizeable (or frequent) queries against busy tables, where READ COMMITTED would likely be blocked by locks from uncommitted transactions, but ONLY when you can live with inaccurate data.

As an example, on a gaming web site I worked on recently, there was a summary display of some stats about recent games, all based on dirty reads. It was more important for us to include than to exclude the transactional data not yet committed (we knew that few, if any, transactions would be backed out), and we felt that on average the data would be more accurate that way.
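A sketch of that kind of summary query; the NOLOCK table hint gives read-uncommitted behaviour for a single table without changing the session's isolation level (dbo.Games and its columns are hypothetical):

    SELECT COUNT(*)   AS GamesPlayed,
           AVG(Score) AS AverageScore
    FROM dbo.Games WITH (NOLOCK)
    WHERE PlayedAt >= DATEADD(DAY, -1, SYSUTCDATETIME());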

Tokabi answered Oct 06 '22