I have a problem with two separate transactions being flushed to the database in the reverse of the order in which they are actually executed.
Here's the business case: there's a RemoteJob-RemoteJobEvent one-to-many relation. Every time a new event is created, a timestamp is obtained and set in the lastModified field of both RemoteJob and RemoteJobEvent, and two records are persisted (one update + one insert).
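For context, here is a minimal sketch of how the two entities might be mapped; the field and column names are guessed from the SQL in the log further down, so the real mapping may well differ:

import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import javax.persistence.*;

// RemoteJob.java -- hypothetical mapping, reconstructed from the logged SQL.
@Entity
public class RemoteJob {
    @Id
    private long id;                 // id generation omitted for brevity

    @Version
    private int version;             // optimistic-locking column seen in the UPDATE statements

    @Temporal(TemporalType.TIMESTAMP)
    private Date lastModified;

    @OneToMany(mappedBy = "remoteJob")
    private List<RemoteJobEvent> events = new ArrayList<RemoteJobEvent>();

    // getters and setters omitted
}

// RemoteJobEvent.java -- hypothetical mapping.
@Entity
public class RemoteJobEvent {
    @Id
    private long id;

    @Version
    private int version;

    private short eventCode;

    @Temporal(TemporalType.TIMESTAMP)
    private Date lastModified;

    @ManyToOne
    @JoinColumn(name = "remotejobid")
    private RemoteJob remoteJob;

    // getters and setters omitted
}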
Here's what it looks like in the code:
class Main {

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void mainMethod(...) {
        RemoteJob job = remoteJobDAO.findById(...);
        // ...
        addEvent(job, EVENT_CODE_10);
        // Here the separate transaction should have ended and its results should be
        // permanently visible in the database. We then refresh the job to pick up
        // the added event:
        remoteJobDAO.refresh(job); // calls EntityManager.refresh()
        // ...
        boolean result = helper.addEventIfNotThere(job);
    }

    // REQUIRES_NEW here to enforce a new transaction;
    // RemoteJobDAO.newEvent() has REQUIRED.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void addEvent(RemoteJob job, RemoteJobEvent event) {
        remoteJobDAO.newEvent(job, event);
    }
}
class Helper {

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public boolean addEventIfNotThere(RemoteJob job) {
        // This loads the job into the persistence context associated with the new transaction.
        job = remoteJobDAO.findById(job.getId());
        // Locking the job record: this method is used as a semaphore by two threads,
        // and we need to make sure only one of them completes it.
        remoteJobDAO.lockJob(job, LockModeType.WRITE);
        // Refreshing after locking to be certain that we have current data.
        remoteJobDAO.refresh(job);
        // ... here comes the logic that checks whether EVENT_CODE_11 is already there
        if (/* not yet present */) {
            remoteJobDAO.newEvent(job, EVENT_CODE_11);
        }
        return ...; // true: event 11 was already there, false: this execution added it.
    }
}
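The RemoteJobDAO itself is not shown; for completeness, here is a rough sketch of what the methods used above could look like, assuming a container-managed EntityManager joined to the JTA transaction. The newEvent() signature is ambiguous in the snippets above (sometimes an event object, sometimes an event code is passed), so treat this purely as an illustration:

import java.util.Date;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;
import javax.persistence.PersistenceContext;

// Hypothetical DAO sketch: only the methods referenced above, signatures inferred from the calls.
public class RemoteJobDAO {

    @PersistenceContext
    private EntityManager em;

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public RemoteJob findById(long id) {
        return em.find(RemoteJob.class, id);
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void refresh(RemoteJob job) {
        // Re-reads the row, discarding any stale state held in the current persistence context.
        em.refresh(job);
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void lockJob(RemoteJob job, LockModeType mode) {
        em.lock(job, mode);
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void newEvent(RemoteJob job, RemoteJobEvent event) {
        Date now = new Date();
        job.setLastModified(now);    // the UPDATE on RemoteJob
        event.setLastModified(now);
        event.setRemoteJob(job);
        em.persist(event);           // the INSERT on RemoteJobEvent
    }
}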
To sum up: in mainMethod() we are already in a transaction context. We then suspend it and spawn a new transaction that creates EVENT_CODE_10 in addEvent(). After that method returns, its results should be committed and visible to everyone (though the persistence context of mainMethod() still needs to be refreshed). Finally, we step into addEventIfNotThere() (a new transaction again); it turns out nobody has added EVENT_CODE_11 yet, so we add it and return. As a result, two events should end up in the database.
Here's the trouble: OpenJPA seems to flush both event-adding transactions only after addEventIfNotThere() completes! What's more, it does so in the wrong order, and the version column values clearly show that the second transaction knows nothing about the results of the preceding one, even though the first one should already have been committed (note the log order, the lastModified values and the event codes):
2011-07-08T10:45:51.386 [WorkManager.DefaultWorkManager : 7] TRACE [openjpa.jdbc.SQL] - <t 2080472065, conn 1753966731> executing prepstmnt 1859546838 INSERT INTO RemoteJobEvent (id, eventCode, lastModified, version, remotejobid) VALUES (?, ?, ?, ?, ?) [params=(long) 252, (short) 11, (Timestamp) 2011-07-08 10:45:51.381, (int) 1, (long) 111]
2011-07-08T10:45:51.390 [WorkManager.DefaultWorkManager : 7] TRACE [openjpa.jdbc.SQL] - <t 2080472065, conn 1753966731> executing prepstmnt 60425114 UPDATE RemoteJob SET lastModified = ?, version = ? WHERE id = ? AND version = ? [params=(Timestamp) 2011-07-08 10:45:51.381, (int) 3, (long) 111, (int) 2]
2011-07-08T10:45:51.401 [WorkManager.DefaultWorkManager : 7] TRACE [openjpa.jdbc.SQL] - <t 2080472065, conn 815411354> executing prepstmnt 923940626 INSERT INTO RemoteJobEvent (id, eventCode, lastModified, version, remotejobid) VALUES (?, ?, ?, ?, ?) [params=(long) 253, (short) 10, (Timestamp) 2011-07-08 10:45:51.35, (int) 1, (long) 111]
2011-07-08T10:45:51.403 [WorkManager.DefaultWorkManager : 7] TRACE [openjpa.jdbc.SQL] - <t 2080472065, conn 815411354> executing prepstmnt 1215645813 UPDATE RemoteJob SET lastModified = ?, version = ? WHERE id = ? AND version = ? [params=(Timestamp) 2011-07-08 10:45:51.35, (int) 3, (long) 111, (int) 2]
This, of course, produces an OptimisticLockException. It behaves the same way in both environments: the test environment with Apache Derby/Tomcat/Atomikos Transaction Essentials, and the target environment with WebSphere 7.0/Oracle 11.
My question is: how is it possible that transaction boundaries are not respected? I understand that a JPA provider is free to choose the SQL statement order within a single transaction, but it cannot reorder whole transactions, can it?
Some more info about our environment: the presented code is part of a Spring 3.0.5 JMS message handler (DefaultMessageListenerContainer); Spring is also used for bean injection, but the annotation-based transaction management delegates to the system transaction manager (WebSphere's or Atomikos, as above), which is why EJB3 rather than Spring transactional annotations are used.
I hope this raises some interest, in which case I'll gladly supply more info, if needed.
I fell victim to not having read up on how the Spring proxies responsible for annotation-based transaction support actually work.
It turns out that addEvent()'s REQUIRES_NEW annotation is ignored when the method is called from within the same class. The Spring transactional proxy never comes into play for such a self-invocation, so the code runs in the current transaction, which is exactly wrong: that transaction ends (long) after the call to helper.addEventIfNotThere() completes. The latter method, on the other hand, is called from another class, so its REQUIRES_NEW really does start and commit a separate transaction.
I moved the addEvent() method to a separate class and the problem disappeared. Another solution could be changing the way the <tx:annotation-driven/> configuration works; more info here: Spring Transaction Management reference.
Another option would be to weave Spring's AnnotationTransactionAspect using AspectJ, as described in section 11.5.9 of the Spring documentation.
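For illustration, the first fix ends up looking roughly like this: the REQUIRES_NEW method lives on its own Spring-managed bean, so a call to it from Main is no longer a self-invocation and goes through the transactional proxy (EventCreator is just an example name, not the actual class from our code):

import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

// Example only: with the method on a separate bean, the call from Main passes through
// Spring's transactional proxy, so REQUIRES_NEW is honoured and the new transaction
// starts and commits inside this call.
public class EventCreator {

    private RemoteJobDAO remoteJobDAO; // injected by Spring

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void addEvent(RemoteJob job, RemoteJobEvent event) {
        remoteJobDAO.newEvent(job, event);
    }

    public void setRemoteJobDAO(RemoteJobDAO remoteJobDAO) {
        this.remoteJobDAO = remoteJobDAO;
    }
}

In mainMethod(), the call then becomes eventCreator.addEvent(job, EVENT_CODE_10) on the injected bean rather than a plain this.addEvent(...), and the refresh that follows it sees the already committed EVENT_CODE_10 row.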