I have an application I'd like to harden against possible problems related to Hibernate and/or persistence.
What other problems should I expect? How do I reproduce them (literally)? And how do you recover from them?
To be clear: I'm talking about a multi-threaded, clustered environment (the most complex case).
The one I've hit so far:
org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect)
Reproduce:
Handle: Not sure...
This is the N+1 selects problem: it occurs when Hibernate performs one query to select n entities and then has to perform an additional query for each of them to initialize a lazily fetched association. Hibernate fetches lazy relationships transparently, which makes this kind of problem hard to spot in your code.
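A hedged sketch of what that looks like, assuming a hypothetical Order entity with a lazy customer association and Hibernate's typed query API:

import java.util.List;
import org.hibernate.Session;

public class NPlusOneExample {
    // "Order" and its lazy "customer" association are illustrative names.
    static void loadOrders(Session session) {
        List<Order> orders = session
                .createQuery("from Order", Order.class)
                .list();                            // 1 query for the orders
        for (Order order : orders) {
            order.getCustomer().getName();          // +1 query per order to initialize the lazy association
        }

        // One common fix: fetch the association in the same query.
        List<Order> withCustomers = session
                .createQuery("select o from Order o join fetch o.customer", Order.class)
                .list();                            // a single query with a join
    }
}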
Java classes whose objects or instances will be stored in database tables are called persistent classes in Hibernate. Hibernate works best if these classes follow some simple rules, also known as the Plain Old Java Object (POJO) programming model.
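As a hedged illustration, a minimal persistent class following those rules might look like this (the entity and its fields are made up):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Customer {
    @Id
    @GeneratedValue
    private Long id;

    private String name;

    public Customer() { }                 // no-arg constructor, required by Hibernate

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}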
Lazy loading is one of the big issues you'll encounter, especially if you follow a standard DAO pattern. You'll end up with lazily loaded collections, but upon coming out of your DAO layer, Spring (or something else, if you're not using Spring) might close the session.
import org.springframework.transaction.annotation.Transactional;

public class MyDaoImpl implements MyDao {
    @Override
    @Transactional  // Spring opens a session/transaction for this call and closes it when it returns
    public void save(MyObject object) { ... }
}
In this case, when the call to "save" completes, Spring will close your session if you are not within another transaction. As a result, any calls to lazily loaded objects will throw a LazyInitializationException.
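For example (findById is a hypothetical finder on the DAO above, called outside any transaction):

MyObject loaded = myDao.findById(42L);   // the session is opened and closed inside the DAO call
loaded.getChildren().size();             // touches a lazy collection after the session has closed
                                         // -> org.hibernate.LazyInitializationException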
The typical way to handle this is to bind a session to the current thread. In webapps, you can do this easily with the OpenSessionInViewFilter. For command-line applications, you'll probably need to write a utility method which creates a session, binds it to the current thread, and then unbinds it when you're done. You can find examples of this all over the web.
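A rough sketch of such a utility, assuming the session factory is configured with current_session_context_class=managed (note that ManagedSessionContext's package has moved between Hibernate versions):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.context.internal.ManagedSessionContext;

public final class SessionUtil {
    // Runs the given work with a session bound to the current thread.
    public static void doInSession(SessionFactory factory, Runnable work) {
        Session session = factory.openSession();
        ManagedSessionContext.bind(session);        // bind to the current thread
        try {
            work.run();                             // factory.getCurrentSession() works in here
        } finally {
            ManagedSessionContext.unbind(factory);  // unbind before closing
            session.close();
        }
    }
}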
And on the subject of collections: if you use the "update" method (again, something you'd typically do with a standard DAO pattern), you have to be careful not to replace collection instances; instead, you should manipulate the collection that's already in place. Otherwise, Hibernate will have a hard time figuring out what needs to be added/removed/updated.
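A hedged sketch of what that means in practice (the parent/child names are illustrative):

import java.util.List;

public class CollectionUpdateExample {
    // Keep the collection instance Hibernate is already tracking; mutate it in place.
    static void replaceChildren(Parent parent, List<Child> newChildren) {
        parent.getChildren().clear();
        parent.getChildren().addAll(newChildren);
        // Avoid: parent.setChildren(newChildren);
        // Swapping in a brand-new collection makes dirty checking much harder for Hibernate.
    }
}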
The problem you have observed is one of concurrent modification of data. Hibernate has many possible solutions for dealing with this.
Essentially, the problem is that two threads (or two machines in your cluster) are acting on the same piece of data at the same time. Consider this example:
machine 1: reads the data and returns it for editing somewhere else
machine 2: also reads the data for modification
machine 1: updates the data and commits.
machine 2: tries to do an update and commit.
What will happen when the second machine tries to commit its changes? Hibernate will see that the data has changed while machine 2 was working on it. That is, machine 2's update is based on stale data. Hibernate can't always merge the two changes (nor is that always desired behaviour), so it rejects the second update by throwing an org.hibernate.StaleObjectStateException.
As I mentioned above, Hibernate gives you many options to solve this problem. The simplest, perhaps, is to add a version field to your data objects using @Version. Hibernate will automatically maintain the "version" of the data: whenever an update takes place, the version is changed automatically by Hibernate. Your job is to check that the version hasn't changed between when you read the data and when you update it. If the versions don't match, you can do something to handle the problem (e.g. tell the user). There are more sophisticated techniques for preventing concurrent updates, but this is the simplest.
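A minimal sketch of the version field, on a hypothetical Account entity:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Account {
    @Id
    private Long id;

    @Version
    private long version;    // Hibernate bumps this on every update and checks it in the update's WHERE clause

    private String owner;
    // getters/setters omitted
}

With the version column in place, a losing commit fails with StaleObjectStateException (typically wrapped in an OptimisticLockException when you go through the JPA API); catching it and then reloading the entity, telling the user, or retrying the operation are the usual ways to recover.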
Fetching too much data is probably the biggest problem you can have when you use an ORM tool, because the tool makes it very easy to load far more data than necessary. This problem does not show up in dev/test scenarios where the amount of test data is small, but once data starts to accumulate in production, the data access layer can slow down dramatically.
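A hedged sketch of keeping result sets bounded, assuming a hypothetical Order entity and Hibernate 5's typed query API:

import java.util.List;
import org.hibernate.Session;

public class BoundedFetchExample {
    // Fetch one page instead of the whole table.
    static List<Order> firstPage(Session session) {
        return session
                .createQuery("from Order o order by o.id", Order.class)
                .setFirstResult(0)       // offset
                .setMaxResults(50)       // page size
                .list();
    }

    // When only a couple of columns are needed, select a projection instead of full entities.
    static List<Object[]> idAndTotal(Session session) {
        return session
                .createQuery("select o.id, o.total from Order o", Object[].class)
                .setMaxResults(50)
                .list();
    }
}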
There are many other problems that may occur beyond the ones described here.