What does transaction atomicity in SQL/Spring mean, and what does it not mean?
I'm thinking of the following case. Correct me if I am wrong:
This code is incorrect:
@Transactional
public void voteUp(long fooId) {
    Foo foo = fooMapper.select(fooId); // SELECT * FROM foo WHERE fooId = #{fooId}
    foo.setVotes(foo.getVotes() + 1);
    fooMapper.update(foo);             // UPDATE foo SET votes = #{votes} (...) WHERE fooId = #{fooId}
}
Even though it is transactional, that doesn't mean the value of "votes" will always be incremented by exactly one per call if voteUp is invoked concurrently on many machines/in many threads? If it did mean that, only one transaction could execute at a time, hurting efficiency (especially if the code of voteUp did more work inside the transaction)?
The only correct way to do it is like this (?):
/* not necessarily */ @Transactional
public void voteUp(long fooId) {
    fooMapper.voteUp(fooId); // UPDATE foo SET votes = votes + 1 WHERE fooId = #{fooId}
}
In the examples I used MyBatis to access the database, but I think the question stays the same if I used Hibernate or plain SQL statements.
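For completeness, the voteUp mapper method used in the second example could be declared like this (a minimal sketch, assuming MyBatis annotation-based mapping; the int return value is MyBatis's count of updated rows):

import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Update;

@Mapper
public interface FooMapper {
    // The whole increment runs as a single UPDATE statement inside the
    // database, so there is no read-modify-write window between statements.
    @Update("UPDATE foo SET votes = votes + 1 WHERE fooId = #{fooId}")
    int voteUp(long fooId);
}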
The @Transactional annotation is metadata that specifies that an interface, class, or method must have transactional semantics; for example, "start a brand new read-only transaction when this method is invoked, suspending any existing transaction".
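That quoted behavior maps directly onto annotation attributes; a minimal sketch (FooService and getVotes are illustrative names, reusing the question's FooMapper):

import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

public class FooService {

    private FooMapper fooMapper;

    // Starts a brand new read-only transaction whenever this method is
    // invoked, suspending any transaction that is already in progress.
    @Transactional(propagation = Propagation.REQUIRES_NEW, readOnly = true)
    public int getVotes(long fooId) {
        return fooMapper.select(fooId).getVotes();
    }
}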
The @Transactional annotation uses the attributes rollbackFor and rollbackForClassName to force a rollback on the listed exceptions, and the attributes noRollbackFor and noRollbackForClassName to suppress rollback for the listed exceptions. The default behavior in the declarative approach is to roll back on runtime (unchecked) exceptions only.
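For example (a sketch; VoteException and VoteQuotaWarning are hypothetical exception types):

import org.springframework.transaction.annotation.Transactional;

public class FooService {

    private FooMapper fooMapper;

    // By default only unchecked exceptions trigger a rollback; rollbackFor
    // extends that to the listed checked exception, while noRollbackFor
    // lets the listed exception commit the transaction anyway.
    @Transactional(rollbackFor = VoteException.class,
                   noRollbackFor = VoteQuotaWarning.class)
    public void voteUp(long fooId) throws VoteException {
        fooMapper.voteUp(fooId);
    }
}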
An atomic transaction is an indivisible and irreducible series of database operations such that either all occur, or nothing occurs. A guarantee of atomicity prevents updates to the database from occurring only partially, which can cause greater problems than rejecting the whole series outright.
The simplest way to enforce atomicity is for the DBMS to refuse to start any transaction until the previous one has committed. Unfortunately, this can be too restrictive, especially if the transaction needs to interact with a user. While one user is dithering, several other users could be served.
Isolation level determines how reliable the view of the data is within the transaction. The most reliable isolation level is serializable (which does impact the database's performance), but the usual default is read-committed:
In this isolation level, a lock-based concurrency control DBMS implementation keeps write locks (acquired on selected data) until the end of the transaction, but read locks are released as soon as the SELECT operation is performed, so the non-repeatable reads phenomenon can occur at this isolation level. Range locks are not managed.
Putting it in simpler words, read committed is an isolation level that guarantees that any data read is committed at the moment it is read. It simply restricts the reader from seeing any intermediate, uncommitted, 'dirty' read. It makes no promise whatsoever that if the transaction re-issues the read, it will find the same data; data is free to change after it is read.
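In Spring you can request the isolation level per method; this sketch (the method name is illustrative) shows the non-repeatable read that read-committed permits:

import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

public class FooService {

    private FooMapper fooMapper;

    // Each SELECT sees only committed data, but another transaction may
    // commit an update between the two reads, so they can disagree
    // (a non-repeatable read).
    @Transactional(isolation = Isolation.READ_COMMITTED)
    public boolean votesChangedMidTransaction(long fooId) {
        int first = fooMapper.select(fooId).getVotes();
        int second = fooMapper.select(fooId).getVotes(); // may differ from first
        return first != second;
    }
}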
In the first example, another process can change the counter between the select and the update: your select reads the old value, the other process commits an increment, and then your update overwrites the row with a value computed from the stale read, so one vote is silently lost.
Changing the isolation level to repeatable-read should make the increment in the first example behave correctly; note that depending on the database, one of two conflicting transactions may be aborted and need to be retried rather than silently losing an update. Of course the second example is correct as it stands and is the better solution.
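A sketch of that isolation-level fix applied to the first example (keeping the question's names; exact behavior under contention still depends on the database, as noted above):

import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

public class FooService {

    private FooMapper fooMapper;

    // The same read-modify-write, but at repeatable-read: a concurrent
    // increment can no longer slip in unnoticed between the select and
    // the update; some databases instead abort one of the conflicting
    // transactions, which the caller must then retry.
    @Transactional(isolation = Isolation.REPEATABLE_READ)
    public void voteUp(long fooId) {
        Foo foo = fooMapper.select(fooId);
        foo.setVotes(foo.getVotes() + 1);
        fooMapper.update(foo);
    }
}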