I have run into a concurrency problem even when using the SERIALIZABLE isolation level with Spring transactions. My use case is that the user provides a config to be saved to the database in the following format:
{A1: [B1, B2, B3]}
I have to save this using the entities below:
@Entity
class A {
    @Id
    Long id;

    @OneToMany(mappedBy = "a")
    List<B> bList;
}

@Entity
class B {
    @Id
    Long id;

    @ManyToOne
    A a;

    Boolean isDeleted;
}
When there are concurrent requests to save config, more B rows are inserted than expected. Consider the scenario below.
Initial entities in database: A1 -> []
Transaction 1 - given config {A1: [B2]}:
    Reads A1 -> []
    Inserts B2
Transaction 2 - given config {A1: [B3]}:
    Reads A1 -> []
    Inserts B3
Final state in database: A1 -> [B2, B3], when the expected result is either A1 -> [B2, B3-deleted] or A1 -> [B2-deleted, B3].
I have not been able to find a proper solution to this problem even after a lot of research. According to this article (https://sqlperformance.com/2014/04/t-sql-queries/the-serializable-isolation-level), this outcome is always possible when using SQL Server, because that order of operations is itself one of the valid serializations.
This is best handled by introducing a version column for optimistic locking. There is no need for the SERIALIZABLE isolation level. Just use:
@Entity
class A {
    @Version
    long version;

    @OneToMany(mappedBy = "a")
    List<B> bList;
}
and make sure you use LockModeType.OPTIMISTIC_FORCE_INCREMENT when loading the A. This way, the "serialization" is based on a lock on your so-called "aggregate root", which is A.
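To make the mechanism concrete, here is a small self-contained sketch (plain Java, no JPA; all names are invented for illustration) of what the version check boils down to. At the SQL level Hibernate issues an update of the form UPDATE a SET version = version + 1 WHERE id = ? AND version = ?, so a transaction that read a stale version matches zero rows and its commit fails:

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy model of an optimistic version check. The stored counter plays
// the role of A's @Version column; commit() mimics Hibernate's
// "UPDATE a SET version = ? WHERE id = ? AND version = ?" update.
class VersionedRow {
    private final AtomicLong version = new AtomicLong(0);

    // Transaction start: remember the version that was read.
    long read() {
        return version.get();
    }

    // Commit succeeds only if nobody else committed in the meantime,
    // i.e. the version is still the one this transaction read.
    boolean commit(long readVersion) {
        return version.compareAndSet(readVersion, readVersion + 1);
    }
}
```

In the scenario from the question, both transactions read version 0; whichever commits first bumps the version to 1, and the other's commit then fails because it still expects 0.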
By doing so, one transaction will succeed and the other will fail, because at the end of each transaction the version column is incremented only if its value has not changed in the meantime. If it has changed, one of the two transactions is rolled back and you will see an OptimisticLockException.
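Since one of the two requests will now fail deliberately, the caller usually retries it so the second config still gets applied (reading the fresh state on the retry). A minimal sketch of such a retry wrapper; the helper name and attempt count are assumptions, and in a real Spring app you would catch jakarta.persistence.OptimisticLockException (or Spring's ObjectOptimisticLockingFailureException) at the service boundary, or use spring-retry:

```java
import java.util.function.Supplier;

// Hypothetical retry helper: re-runs a transactional operation a few
// times when it fails. In production code, catch the specific
// optimistic-lock exception type rather than RuntimeException.
class OptimisticRetry {
    static <T> T withRetry(int maxAttempts, Supplier<T> txOperation) {
        RuntimeException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return txOperation.get();
            } catch (RuntimeException e) {
                last = e; // conflict: re-read fresh state and try again
            }
        }
        throw last; // all attempts conflicted
    }
}
```

Each retry must start a new transaction so that it re-reads A (and its current B list) at the latest version; re-running the same stale entity state would just fail again.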