There has been a lot written about this topic:
I find the accepted answer particularly interesting:
If you are using InnoDB or any row-level transactional RDBMS, then it is possible that any write transaction can cause a deadlock, even in perfectly normal situations. Larger tables, larger writes, and long transaction blocks will often increase the likelihood of deadlocks occurring. In your situation, it's probably a combination of these.
That would mean we can never prevent deadlocks, only deal with them. Is that true? I wonder whether you can ever prevent deadlocks on a website with 1,000 people online, all invoking write operations against the database.
Googling the topic does not turn up many interesting results. The only one I found was this (http://www.coderanch.com/t/415119/ORM/databases/Deadlock-problems-Hibernate-Spring-MS):
import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;
import org.apache.commons.lang.StringUtils;
import org.apache.commons.lang.exception.ExceptionUtils;
import org.apache.commons.lang.math.RandomUtils;
import org.apache.log4j.Logger;

public class RestartTransactionAdviser implements MethodInterceptor {

    private static Logger log = Logger.getLogger(RestartTransactionAdviser.class);

    public Object invoke(MethodInvocation invocation) throws Throwable {
        return restart(invocation, 1);
    }

    private Object restart(MethodInvocation invocation, int attempt) throws Throwable {
        Object rval = null;
        try {
            rval = invocation.proceed();
        } catch (Exception e) {
            Throwable thr = ExceptionUtils.getRootCause(e);
            if (thr == null) {
                throw e;
            }
            if (StringUtils.contains(thr.getMessage(), "deadlock")
                    || StringUtils.contains(thr.getMessage(), "try restarting transaction")
                    || StringUtils.contains(thr.getMessage(), "failed to resume the transaction")) {
                if (attempt > 300) {
                    throw e;
                }
                int timeout = RandomUtils.nextInt(2000);
                log.warn("Transaction rolled back. Restarting transaction.");
                log.debug("Sleep for " + timeout);
                log.debug("Restarting transaction: invocation=[" + invocation + "], attempt=[" + attempt + "]");
                Thread.sleep(timeout);
                attempt++;
                return restart(invocation, attempt);
            } else {
                throw e;
            }
        }
        return rval;
    }
}
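The same retry idea can be expressed as a plain-Java iterative loop, without Spring AOP (a minimal sketch; the class name, the message matching, and the retry bounds are illustrative, not a drop-in replacement for the interceptor above):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ThreadLocalRandom;

// Illustrative sketch: bounded retries with a randomized backoff between
// attempts, so competing transactions are less likely to collide again.
public class RetryOnDeadlock {

    public static <T> T withRetry(Callable<T> work, int maxAttempts) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return work.call();
            } catch (Exception e) {
                boolean deadlock = e.getMessage() != null
                        && e.getMessage().contains("deadlock");
                if (!deadlock || attempt >= maxAttempts) {
                    throw e; // give up: not a deadlock, or out of attempts
                }
                // randomized sleep (the quoted code uses up to 2000 ms;
                // kept short here so the example runs quickly)
                Thread.sleep(ThreadLocalRandom.current().nextInt(100));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // simulated operation: fails twice with a "deadlock" message, then succeeds
        String result = withRetry(() -> {
            if (++calls[0] < 3) {
                throw new RuntimeException("deadlock found when trying to get lock");
            }
            return "committed";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```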
On the other hand, I seriously doubt the quality of such a solution. Can you please elaborate and explain what the best way of handling deadlocks would be? How do banks and enterprise applications deal with deadlocks?
The Hibernate Session acts as a transactional write-behind first-level cache. This lets you postpone changes until the last responsible moment, thereby reducing lock acquisition intervals (which occur even at the READ_COMMITTED isolation level).
This means you have to minimize transaction time, and I can recommend FlexyPool for that endeavor. You need to make sure all transactions are as short as possible, to reduce locking intervals and improve scalability.
Locking introduces serial operations, and according to Amdahl's law, scalability is inversely proportional to the fraction of total execution time that is serial.
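To put numbers on Amdahl's law: with a serial fraction s, the maximum speedup on N processors is 1 / (s + (1 - s) / N). A small sketch with illustrative values:

```java
// Amdahl's law: speedup(N) = 1 / (s + (1 - s) / N),
// where s is the fraction of work that must run serially (e.g. under a lock).
public class Amdahl {

    static double speedup(double serialFraction, int processors) {
        return 1.0 / (serialFraction + (1.0 - serialFraction) / processors);
    }

    public static void main(String[] args) {
        // even if only 10% of the time is spent holding locks,
        // 32 cores yield less than an 8x speedup
        System.out.printf("s=0.10, N=32 -> %.2fx%n", speedup(0.10, 32));
        // and the ceiling as N grows is 1/s: at most 10x, no matter the hardware
        System.out.printf("s=0.10, N=1M -> %.2fx%n", speedup(0.10, 1_000_000));
    }
}
```

This is why shortening the locked (serial) portion of each transaction pays off more than adding hardware.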
My advice is to first work on reducing transaction intervals. Indexing will reduce query times. ORMs can generate awful queries, so make sure your integration tests verify the expected queries against the ones actually executed.
A tool like p6spy is very handy for timing your queries, so make sure you use one as well.
When all transactions are as short as possible and you still need more concurrency, you can move to horizontal scalability. You can first start with a synchronous master-slave replication strategy, redirecting read transactions to the slave nodes while keeping the master for write transactions.
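The read/write split can be sketched in plain Java (a minimal illustration; in a Spring application you would typically plug a real routing DataSource into the stack instead, and the Supplier here merely stands in for a connection source):

```java
import java.util.function.Supplier;

// Illustrative sketch of read/write splitting: route read-only work to a
// replica and writes to the master. Supplier<String> is a hypothetical
// stand-in for a real connection source; names here are illustrative.
public class ReadWriteRouter {

    private final Supplier<String> master;
    private final Supplier<String> replica;

    public ReadWriteRouter(Supplier<String> master, Supplier<String> replica) {
        this.master = master;
        this.replica = replica;
    }

    /** Pick the connection source based on whether the transaction only reads. */
    public String route(boolean readOnly) {
        return (readOnly ? replica : master).get();
    }

    public static void main(String[] args) {
        ReadWriteRouter router = new ReadWriteRouter(() -> "master", () -> "replica");
        System.out.println(router.route(true));   // read-only transaction -> replica
        System.out.println(router.route(false));  // write transaction     -> master
    }
}
```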