I read an article on the Oracle blog here about JPA and locking modes.
I don't entirely understand the difference between the OPTIMISTIC and OPTIMISTIC_FORCE_INCREMENT lock mode types.
OPTIMISTIC mode:
When a user locks an entity with this mode, a check on the entity's version field (@Version) is done at the beginning of the transaction, and the version field is checked again at the end of the transaction. If the versions are different, the transaction rolls back.
OPTIMISTIC_FORCE_INCREMENT mode:
When a user chooses this mode, they have to flush() the state of the EntityManager to the database to increment the version field manually. Thus, all other optimistic transactions will be invalidated (rolled back). A check on the version is also done at the end of the transaction to commit or roll back the transaction.
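To be sure we are talking about the same thing, this is roughly how I would request each mode (just a sketch; SomeEntity is only a placeholder with an @Id and an @Version field):

// just a sketch - SomeEntity is a placeholder entity with @Id and @Version fields
SomeEntity e = em.find(SomeEntity.class, id, LockModeType.OPTIMISTIC);
// or, for an entity that is already managed:
em.lock(e, LockModeType.OPTIMISTIC_FORCE_INCREMENT);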
It seems clear, but when should I use OPTIMISTIC versus OPTIMISTIC_FORCE_INCREMENT mode? The only criterion I see is to apply OPTIMISTIC_FORCE_INCREMENT mode when I want my transaction to take precedence over the others, because choosing this mode will roll back all other running optimistic transactions (if I understand the mechanism correctly).
Is there any other reason to choose this mode rather than OPTIMISTIC mode?
Thanks
Don't be scared by this long answer. This topic is not simple.
By default, JPA imposes the Read Committed isolation level if you don't specify any locking (the same behaviour as using LockModeType.NONE).
Read Committed requires the absence of the dirty read phenomenon: put simply, T1 can only see changes made by T2 after T2 commits.
Using optimistic locking in JPA raises the isolation level to Repeatable Read.
If T1 reads some data at the beginning and at the end of the transaction, Repeatable Read ensures that T1 sees the same data even if T2 changed the data and committed in the middle of T1.
And here comes the tricky part. JPA achieves Repeatable Read in the simplest way possible: by preventing the non-repeatable read phenomenon. JPA is not sophisticated enough to keep snapshots of your reads. It simply prevents the second read from happening by raising an exception (if the data has changed since the first read).
You can choose from two optimistic locking options:
LockModeType.OPTIMISTIC (LockModeType.READ in JPA 1.0)
LockModeType.OPTIMISTIC_FORCE_INCREMENT (LockModeType.WRITE in JPA 1.0)
What's the difference between the two?
Let me illustrate with examples on this Person entity.
@Entity
public class Person {

    @Id
    int id;

    @Version
    int version;

    String name;
    String label;

    @OneToMany(mappedBy = "person", fetch = FetchType.EAGER)
    List<Car> cars;

    // getters & setters
}
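The Car entity is not shown in the original; assuming a minimal owning side of the relationship, it could look something like this:

@Entity
public class Car {

    @Id
    @GeneratedValue
    int id;

    @Version
    int version;

    // owning side of Person.cars (mappedBy = "person")
    @ManyToOne
    Person person;

    public void setPerson(Person person) { this.person = person; }

    // other getters & setters
}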
Now let's assume we have one Person named John stored in the database. We read this Person in T1 but change his name to Mike in a second transaction, T2.
Without any locking
Person person1 = em1.find(Person.class, id, LockModeType.NONE); // T1 reads Person("John")
Person person2 = em2.find(Person.class, id); // T2 reads Person("John")

person2.setName("Mike"); // changing name to "Mike" within T2
em2.getTransaction().commit(); // T2 commits

System.out.println(em1.find(Person.class, id).getName());
// prints "John" - the entity is already in the persistence context

System.out.println(
    em1.createQuery("SELECT count(p) From Person p where p.name='John'")
       .getSingleResult());
// prints 0 - oops! doesn't know about any John (non-repeatable read)
Optimistic read lock
Person person1 = em1.find(Person.class, id, LockModeType.OPTIMISTIC); // T1 reads Person("John")
Person person2 = em2.find(Person.class, id); // T2 reads Person("John")

person2.setName("Mike"); // changing name to "Mike" within T2
em2.getTransaction().commit(); // T2 commits

System.out.println(
    em1.createQuery("SELECT count(p) From Person p where p.name='John'")
       .getSingleResult());
// OptimisticLockException - The object [Person@2ac6f054] cannot be updated
// because it has changed or been deleted since it was last read.
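In real code you would typically catch this exception and retry the whole unit of work; a rough sketch (assuming a simple retry loop and the javax.persistence API) could look like this:

// rough sketch: retry the whole unit of work when the optimistic check fails
for (int attempt = 0; attempt < 3; attempt++) {
    EntityManager em = emf.createEntityManager();
    try {
        em.getTransaction().begin();
        Person p = em.find(Person.class, id, LockModeType.OPTIMISTIC);
        // ... read and update work goes here ...
        em.getTransaction().commit();
        break; // success, stop retrying
    } catch (OptimisticLockException e) {
        if (em.getTransaction().isActive()) {
            em.getTransaction().rollback();
        }
        // fall through and retry with fresh state
    } finally {
        em.close();
    }
}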
LockModeType.OPTIMISTIC_FORCE_INCREMENT is used when the change is made to another entity (perhaps through a non-owned relationship) and we want to preserve integrity. Let me illustrate with John acquiring a new car.
Optimistic read lock
Person john1 = em1.find(Person.class, id); // T1 reads Person("John")
Person john2 = em2.find(Person.class, id, LockModeType.OPTIMISTIC); // T2 reads Person("John")

// John gets a Mercedes
Car mercedes = new Car();
mercedes.setPerson(john2);
em2.persist(mercedes);
john2.getCars().add(mercedes);
em2.getTransaction().commit(); // T2 commits

// T1 doesn't know about John's new car. john1 is in a stale state.
// We'll end up with wrong info about John.
if (john1.getCars().size() > 0) {
    john1.setLabel("John has a car");
} else {
    john1.setLabel("John doesn't have a car");
}
em1.flush();
Optimistic write lock
Person john1 = em1.find(Person.class, id); // T1 reads Person("John")
Person john2 = em2.find(Person.class, id, LockModeType.OPTIMISTIC_FORCE_INCREMENT); // T2 reads Person("John")

// John gets a Mercedes
Car mercedes = new Car();
mercedes.setPerson(john2);
em2.persist(mercedes);
john2.getCars().add(mercedes);
em2.getTransaction().commit(); // T2 commits

// T1 doesn't know about John's new car. john1 is in a stale state.
// That's ok, though, because proper locking won't let us save wrong information about John.
if (john1.getCars().size() > 0) {
    john1.setLabel("John has a car");
} else {
    john1.setLabel("John doesn't have a car");
}
em1.flush(); // OptimisticLockException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect)
Although there is the following remark in the JPA spec, Hibernate and EclipseLink behave nicely and don't make use of it.
For versioned objects, it is permissible for an implementation to use LockModeType.OPTIMISTIC_FORCE_INCREMENT where LockModeType.OPTIMISTIC was requested, but not vice versa.
Normally you would never use the lock() API for optimistic locking. JPA will automatically check any version columns on any update or delete.
The only purpose of the lock() API for optimistic locking is when your update depends on another object that is not changed/updated. This allows your transaction to still fail if the other object changes.
When to do this depends on the application and the use case. OPTIMISTIC will ensure the other object has not been updated at the time of your commit. OPTIMISTIC_FORCE_INCREMENT will ensure the other object has not been updated, and will increment its version on commit.
Optimistic locking is always verified on commit, and there is no guarantee of success until commit. You can use flush() to force the database locks ahead of time, or trigger an earlier error.
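For example, reusing the Person and Car entities from the other answer (just a sketch of the pattern, not code from the original post): the Person row itself is not modified, but locking it makes the transaction fail if John is changed concurrently while his new Car is inserted:

em.getTransaction().begin();
Person john = em.find(Person.class, johnId);
// John is not updated here, but the new Car depends on his current state,
// so ask JPA to verify (and increment) his version at commit time.
em.lock(john, LockModeType.OPTIMISTIC_FORCE_INCREMENT);

Car car = new Car();
car.setPerson(john);
em.persist(car);
em.getTransaction().commit(); // fails if John's version changed in the meantime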