I am using Hibernate to access an Oracle database.
I think I am having some trouble with Hibernate's first-level (or session) cache. I have tables representing accounts: the ACCOUNT table, the INVOICE table, and the PAYMENTS table. There are procedures defined in the Oracle database so that adding a PAYMENT will automatically update columns in the associated INVOICE and ACCOUNT tables.
The problem I have is when I use Hibernate to do something like the following:
Account account = accountDao.get(accountId);
assertEquals(0.00, account.getBalance());
// Saving a payment will trigger stored procedures that
// will update the account balance.
paymentDao.save(createPaymentForAccount(accountId, 20.00));
account = accountDao.get(accountId);
assertEquals(20.00, account.getBalance());
The final assertion fails because account.getBalance() returns 0.00 rather than 20.00.
I want the second call to accountDao.get(...) to hit the database and return the newly updated ACCOUNT object. But Hibernate appears to return the account object already in its cache (when I inspect the debug output for the lookup call, I see "number of objects hydrated: 0").
I assume that Hibernate is unaware that the database has changed because of the stored procedure call, which is why it uses the object in its cache.
So I began thinking about solutions. One is to remove any ACCOUNT and INVOICE objects from the Hibernate session cache whenever a PAYMENT is saved. This will force a database fetch (with the newly updated values) for any subsequent ACCOUNT or INVOICE operation.
I tried the following:
public void save(Payment payment) {
    getSession().persist(payment);
    getSessionFactory().evict(Invoice.class);
    getSessionFactory().evict(Account.class);
}
But the Hibernate trace log showed that nothing happened. I think that sessionFactory.evict(...) operates on the second-level cache, which is not enabled here, so there is nothing to evict.
Next I tried evicting all ACCOUNT and INVOICE objects from the session cache by evicting each instance I could find:
public void save(Payment payment) {
    getSession().persist(payment);
    for (Invoice invoice : lookupInvoices()) {    // e.g. "from Invoice" query
        getSession().evict(invoice);
    }
    for (Account account : lookupAccounts()) {    // e.g. "from Account" query
        getSession().evict(account);
    }
}
This seems to work, but it is horribly inefficient: it loads every instance into the Hibernate session cache just to evict it, when all I really want to do is evict whatever instances are already in the session.
I cannot see any way of clearing the first-level cache of all objects of a specified type, so what other solutions are available?
You can use the session.refresh() method. See "11.3. Loading an object" in the Hibernate documentation.
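For illustration, a minimal sketch of how refresh() would fit into the test from the question (assuming, as in the answer below, that accountDao.getSession() exposes the current Hibernate Session):
Account account = accountDao.get(accountId);
assertEquals(0.00, account.getBalance());
// Saving a payment will trigger stored procedures that
// will update the account balance.
paymentDao.save(createPaymentForAccount(accountId, 20.00));
// Re-read the row for the already-loaded instance, discarding its cached state.
accountDao.getSession().refresh(account);
assertEquals(20.00, account.getBalance());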
Why don't you just evict the account object before reading it again from the database? I would do this:
Account account = accountDao.get(accountId);
assertEquals(0.00, account.getBalance());
// Saving a payment will trigger stored procedures that
// will update the account balance.
paymentDao.save(createPaymentForAccount(accountId, 20.00));
accountDao.getSession().evict(account);
account = accountDao.get(accountId);
assertEquals(20.00, account.getBalance());
Also, you have to make sure you are not in a REPEATABLE_READ isolation level. In that isolation mode the database guarantees that two reads within the same transaction always return the same result, and because the isolation level takes precedence over your cache eviction, the evict will not have any visible effect.
There is a way to work around this behavior: declare your transaction isolation level as READ_UNCOMMITTED or READ_COMMITTED. You might run into the famous "Standard JPA does not support custom isolation levels" exception if you are using a standard dialect, in which case you can apply one of the following workarounds:
Spring 3.3: http://amitstechblog.wordpress.com/2011/05/31/supporting-custom-isolation-levels-with-jpa/
Spring 3.2: http://shahzad-mughal.blogspot.com/2012/04/spring-jpa-hibernate-support-for-custom.html
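As a minimal sketch of the first option, assuming Spring's annotation-driven transactions are in use (PaymentService and savePaymentAndRefreshAccount are hypothetical names, not from the question):
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical service wrapping the DAOs, shown only to illustrate where
// the isolation level is declared.
public class PaymentService {

    private PaymentDao paymentDao;   // assumed to be injected
    private AccountDao accountDao;   // assumed to be injected

    // With READ_COMMITTED, a re-read after the evict is not pinned to the
    // snapshot taken when the transaction started, so it can see the balance
    // updated by the stored procedures.
    @Transactional(isolation = Isolation.READ_COMMITTED)
    public void savePaymentAndRefreshAccount(Payment payment, Account account) {
        paymentDao.save(payment);
        accountDao.getSession().evict(account);
        // a subsequent accountDao.get(...) in this transaction now hits the database
    }
}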