I am writing a web application where two different users can update a list of things (a to-do list, for example). I have come to realize that an optimistic locking mechanism works best, since I don't expect high contention.
I was looking at transaction isolation levels and now I am a little confused. It looks like different transaction isolation levels also solve similar problems.
How are these two different concepts related to each other? If possible, with a simple example.
There are two models for locking data in a database:

Optimistic locking, where a record is locked only when changes are committed to the database.
Pessimistic locking, where a record is locked while it is edited.
Pessimistic locking: when a user accesses an object to update it, the database locks the object until the update is completed. No other user can read or update the object until the first user releases the lock. The database itself provides this locking type.

No locking: the application does not verify that the data is current.
Optimistic locking is a technique for SQL database applications that does not hold row locks between selecting and updating or deleting a row. The application is written to optimistically assume that unlocked rows are unlikely to change before the update or delete operation.
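As a concrete illustration, here is a minimal sketch of the common version-column approach (the todo_item table, its columns, and the version numbers are assumptions made up for this example):

    -- Read the row and remember its version.
    SELECT id, title, version FROM todo_item WHERE id = 42;  -- say it returns version = 7

    -- Later, update only if the version is still the one we read.
    UPDATE todo_item
    SET title = 'Buy milk', version = version + 1
    WHERE id = 42 AND version = 7;

If another session changed the row in the meantime, the version no longer matches, the UPDATE affects zero rows, and the application can retry or report a conflict.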
Optimistic concurrency control is based on the idea of detecting conflicts and restarting transactions, while pessimistic concurrency control uses locking as the basic serialization mechanism. Analytic and simulation models of both mechanisms have been developed in order to compare their transaction response times.
Both of these things are related to data consistency and concurrent access, but they are two different mechanisms.
Locking prevents concurrent access to some object. For example, when you attempt to update a to-do list item with pessimistic locking, the database places a row lock on the record until you either commit or roll back the transaction, so that no other transaction is allowed to update the same record. Optimistic locking, in contrast, is an application-side check of whether the timestamp/version of a record has changed between fetching it and attempting to update it. This works regardless of the transaction isolation level.
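For contrast, a pessimistic version of the same update might look like the sketch below, using SELECT ... FOR UPDATE (supported by PostgreSQL, MySQL/InnoDB, and Oracle; the todo_item table is again an assumption for the example):

    BEGIN;
    -- The row lock is taken here and held until COMMIT or ROLLBACK;
    -- other sessions trying to update this row will block.
    SELECT * FROM todo_item WHERE id = 42 FOR UPDATE;
    UPDATE todo_item SET title = 'Buy milk' WHERE id = 42;
    COMMIT;  -- lock released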
Transaction isolation, on the other hand, is about read consistency: it determines which changes made by other, concurrent transactions are visible to yours.
Take a look at the example below; I have indicated where the query results differ between transaction isolation levels.
    SESSION 1                                   SESSION 2
    ------------------------------------------  --------------------------------
    SELECT count(*) FROM test;  => 10
                                                INSERT INTO test VALUES ('x');
    SELECT count(*) FROM test;
      => 10 with read committed/serializable
      => 11 with read uncommitted (dirty read)
                                                COMMIT;
    SELECT count(*) FROM test;
      => 10 with serializable
      => 11 with read uncommitted/read committed
There are four ANSI-specified transaction isolation levels (the one not exercised in the example above is repeatable read), and all of them except serializable are subject to some anomalies. Note that this has nothing to do with locking.
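If you want to reproduce the example yourself, the isolation level can be set per transaction. A minimal sketch in PostgreSQL-style syntax (the exact statement and the levels actually supported vary by database):

    BEGIN;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT count(*) FROM test;
    -- ... run the other session's INSERT and COMMIT here ...
    SELECT count(*) FROM test;  -- still the original count under serializable
    COMMIT;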
You can take a look at the Oracle documentation on transaction isolation; the concepts are quite universal.
Finally, your approach of using optimistic locking seems sensible for a web application. Most probably you fetch a list item and update it in two different HTTP requests. It is impossible (or at least unwise) to keep a transaction open with an explicit lock on the record after the fetch: how do you know whether the second request will ever arrive? Optimistic locking handles this gracefully.