Locking to handle concurrency-- a good idea? [closed]

In order to handle concurrency issues, is locking -- any form of locking, whether row, table, or database locking -- a good solution?

If not, how should concurrency issues be handled?

asked Oct 22 '08 by Graviton



1 Answer

If you believe Oracle, no, not at all. That's because Oracle went to great lengths to avoid it.

The problem is that readers can block writers and writers can block readers: a writer has to wait until all readers have finished with a row before it can write, which delays the writing process and its caller. Exclusive locks (for writing) are held to the end of the transaction in case the transaction has to be rolled back; this stops other transactions seeing the new value until the transaction commits.

In practice locking is generally fine if there isn't too much contention, the same as with any concurrent programming. If there's too much contention for a row/page/table (not many database servers do whole-DB locking), it will cause the transactions to execute in turn rather than concurrently.

Oracle uses row-versioning, where instead of locking a row to write it, a new version of the row is created instead. Readers that need to repeat their reads remember which version of the row they read. However, an error will occur if a reader that's remembering its reads tries to update a row that has been updated by another writer since this transaction read it; this is to stop lost updates. To ensure you can update a row, you have to say that the SELECT is FOR UPDATE; if you do that, it takes a lock - only one transaction can hold a row FOR UPDATE at a time, and a conflicting transaction has to wait.
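A minimal sketch of the FOR UPDATE pattern described above (the `accounts` table and its columns are hypothetical, used only for illustration):

```sql
-- Session 1: read the row and take a row lock so no other
-- transaction can update it before we do.
SELECT balance FROM accounts WHERE id = 42 FOR UPDATE;

-- Session 2 issuing the same SELECT ... FOR UPDATE on id = 42
-- now blocks until session 1 commits or rolls back.

UPDATE accounts SET balance = balance - 100 WHERE id = 42;
COMMIT;  -- releases the lock; session 2 can proceed
```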

SQL Server 2005 and later support Snapshot Isolation, which is their name for row-versioning. Again, you should ask for update locks if you need to update some data you just read - in SQL Server, use WITH (UPDLOCK).
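The equivalent SQL Server pattern might look like this (again, the `accounts` table is a hypothetical example):

```sql
-- Under snapshot isolation, reads don't block, so take an
-- explicit update lock on the row you intend to modify:
BEGIN TRANSACTION;
SELECT balance FROM accounts WITH (UPDLOCK) WHERE id = 42;
UPDATE accounts SET balance = balance - 100 WHERE id = 42;
COMMIT;
```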

A further problem with locking is the likelihood of deadlocks. This is simply where two transactions each hold a lock on a resource the other needs -- or, more generally, where a cycle of transactions hold locks that each other need to progress. The database server will generally detect the deadlock and kill one of the transactions, rolling it back; you then need to retry the operation. Any situation where multiple concurrent transactions modify the same rows has the potential for deadlock. It occurs when the rows are touched in different orders, and it's very hard to enforce the order the database server will use (generally you want the optimizer to pick the fastest order, which won't necessarily be consistent across different queries).
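The retry-on-deadlock loop described above can be sketched in Python. `DeadlockError` here is a stand-in for whatever exception your database driver raises when the server kills a deadlock victim (e.g. an error-code check in a real driver); the `transfer` function simulates a transaction that deadlocks twice before succeeding:

```python
import random
import time

class DeadlockError(Exception):
    """Stand-in for the driver-specific deadlock/victim error."""

def run_with_retry(txn, attempts=5, base_delay=0.01):
    """Re-run a transaction that was rolled back as a deadlock victim."""
    for attempt in range(attempts):
        try:
            return txn()
        except DeadlockError:
            if attempt == attempts - 1:
                raise
            # Back off with jitter so the retrying transactions
            # don't collide in lock-step again.
            time.sleep(base_delay * (2 ** attempt) * random.random())

# Simulated transaction: deadlocks twice, then succeeds.
calls = {"n": 0}
def transfer():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockError("chosen as deadlock victim")
    return "committed"

result = run_with_retry(transfer)
print(result)  # -> committed
```

The jittered backoff matters: if both retrying transactions sleep for the same fixed interval, they can deadlock against each other again on the retry.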

Generally I would suggest the same as with threading - go with locks until you can prove that they're causing a scalability problem, then work out how to make the most critical sections lock-free.

answered Oct 23 '22 by Mike Dimmick