
Minimum transaction isolation level to avoid "Lost Updates"

With SQL Server's transaction isolation levels, you can avoid certain unwanted concurrency issues, like dirty reads and so forth.

The one I'm interested in right now is lost updates - the fact that two transactions can overwrite one another's updates without anyone noticing. I see and hear conflicting statements as to which isolation level, at a minimum, I have to choose to avoid this.

Kalen Delaney in her "SQL Server Internals" book says (Chapter 10 - Transactions and Concurrency - Page 592):

In Read Uncommitted isolation, all the behaviors described previously, except lost updates, are possible.

On the other hand, an independent SQL Server trainer giving us a class told us that we need at least "Repeatable Read" to avoid lost updates.

So who's right?? And why??

asked Nov 20 '11 by marc_s



3 Answers

I don't know if it is too late to answer, but I am just learning about transaction isolation levels in college, and as part of my research I came across this link:

Microsoft Technet

Specifically the paragraph in question is:

Lost Update

A lost update can be interpreted in one of two ways. In the first scenario, a lost update is considered to have taken place when data that has been updated by one transaction is overwritten by another transaction, before the first transaction is either committed or rolled back. This type of lost update cannot occur in SQL Server 2005 because it is not allowed under any transaction isolation level.

The other interpretation of a lost update is when one transaction (Transaction #1) reads data into its local memory, and then another transaction (Transaction #2) changes this data and commits its change. After this, Transaction #1 updates the same data based on what it read into memory before Transaction #2 was executed. In this case, the update performed by Transaction #2 can be considered a lost update.
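To make that second interpretation concrete, the interleaving might look something like the sketch below (I am borrowing the widget numbers from the answer further down; the table, column, and WidgetID filter are just for illustration):

-- Transaction #1 reads the current value into local memory
SELECT StockLevel FROM Widgets WHERE WidgetID = 1;   -- returns 25

-- Transaction #2 changes the same row and commits
UPDATE Widgets SET StockLevel = StockLevel + 50 WHERE WidgetID = 1;   -- now 75

-- Transaction #1 writes a value computed from its stale read (25 + 20)
UPDATE Widgets SET StockLevel = 45 WHERE WidgetID = 1;   -- Transaction #2's update is lost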

So in essence both people are right.

Personally (and I am open to being wrong, so please correct me as I am just learning this) I take from this the following two points:

  1. The whole point of a transaction environment is to prevent lost updates as described in the first paragraph. So if even the most basic isolation level couldn't do that, why bother using it?

  2. When people talk about lost updates, they know the first paragraph applies, and so, generally speaking, they mean the second type of lost update.

Again, please correct me if anything here is wrong as I would like to understand this too.

answered Oct 04 '22 by Francis Rodgers


The example in the book is of Clerk A and Clerk B receiving shipments of Widgets.

They both check the current inventory and see that 25 are in stock. Clerk A receives 50 widgets and so updates the stock level to 75; Clerk B receives 20 widgets and so updates it to 45, overwriting the previous update.

I assume she meant that this phenomenon can be avoided at all isolation levels by Clerk A doing

UPDATE Widgets
SET StockLevel = StockLevel + 50
WHERE ...

and Clerk B doing

UPDATE Widgets
SET StockLevel = StockLevel + 20
WHERE ...

Certainly, if the SELECT and UPDATE are done as separate operations, you would need repeatable read to avoid this, so that the S lock on the row is held for the duration of the transaction (which would lead to a deadlock in this scenario).
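
As a rough sketch of that separate SELECT/UPDATE pattern (the table, column, and WidgetID filter are assumed, following the example above), each clerk would run something like:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRANSACTION;

-- Under REPEATABLE READ the shared (S) lock taken here is held until COMMIT,
-- so no other transaction can change the row between the read and the write
SELECT StockLevel FROM Widgets WHERE WidgetID = 1;

-- the application adds its shipment to the value it just read
UPDATE Widgets SET StockLevel = 75 WHERE WidgetID = 1;

COMMIT TRANSACTION;

If both clerks get past the SELECT before either issues the UPDATE, each one's S lock blocks the other's UPDATE, which is the deadlock mentioned above; one transaction is chosen as the victim, rolled back, and can retry.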

answered Oct 04 '22 by Martin Smith


Lost updates may occur even if reads and writes are in separate transactions, such as when users read data into a Web page and then update it later. In such cases no isolation level can protect you, especially when connections are reused from a connection pool. We should use other approaches, such as a rowversion column. Here is my canned answer.
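
A minimal sketch of that rowversion approach, assuming a hypothetical Widgets table with a rowversion column called RowVer (all names made up for illustration): read the row and its RowVer, then make the UPDATE conditional on that value being unchanged.

DECLARE @WidgetID int = 1;
DECLARE @OriginalRowVer binary(8);
DECLARE @NewStockLevel int;

-- read the row and remember its rowversion
-- (in a Web app this value would round-trip through the page between requests)
SELECT @NewStockLevel = StockLevel + 20, @OriginalRowVer = RowVer
FROM Widgets
WHERE WidgetID = @WidgetID;

-- write only if nobody has changed the row in the meantime
UPDATE Widgets
SET StockLevel = @NewStockLevel
WHERE WidgetID = @WidgetID
  AND RowVer = @OriginalRowVer;

IF @@ROWCOUNT = 0
    RAISERROR('The row was changed by another user; reload and try again.', 16, 1);

Because SQL Server bumps the rowversion value automatically on every update, a concurrent change makes the conditional UPDATE affect zero rows, and the application can tell the user to reload instead of silently overwriting the other change.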

answered Oct 04 '22 by A-K