
NHibernate deadlocks

I'm using the following code in an ASP.NET page to create a record, then count the records to make sure I haven't exceeded a set limit, and roll back the transaction if I have.

using (var session = NhibernateHelper.OpenSession())
using (var transaction = session.BeginTransaction())
{
    // Reassociate the detached mall entity with this session without acquiring a database lock
    session.Lock(mall, LockMode.None);

    var voucher = new Voucher();
    voucher.FirstName = firstName ?? string.Empty;
    voucher.LastName = lastName ?? string.Empty;
    voucher.Address = address ?? string.Empty;
    voucher.Address2 = address2 ?? string.Empty;
    voucher.City = city ?? string.Empty;
    voucher.State = state ?? string.Empty;
    voucher.Zip = zip ?? string.Empty;
    voucher.Email = email ?? string.Empty;
    voucher.Mall = mall;
    session.Save(voucher);

    // Count how many vouchers have been issued for this mall, including the one just saved
    var issued = session.CreateCriteria<Voucher>()
        .Add(Restrictions.Eq("Mall", mall))
        .SetProjection(Projections.Count("ID"))
        .UniqueResult<int>();

    if (issued >= mall.TotalVouchers)
    {
        transaction.Rollback();
        throw new VoucherLimitException();
    }

    transaction.Commit();
    return voucher;
}

However, I'm getting a ton of deadlocks. I presume this happens because I'm trying to count the records in a table I just performed an insert on and a lock is still held on the inserted row, causing the deadlock.

  • Can anyone confirm this?
  • Can anyone suggest a fix?

I've tried calling SetLockMode(LockMode.None) on the final query, but that results in a NullReferenceException that I cannot figure out.
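Roughly like this (the same criteria query as above, with the lock mode call added):

var issued = session.CreateCriteria<Voucher>()
    .Add(Restrictions.Eq("Mall", mall))
    .SetProjection(Projections.Count("ID"))
    .SetLockMode(LockMode.None)   // this call is what triggers the NullReferenceException
    .UniqueResult<int>();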

Edit: If I run the query before I save the object, it works, but then I'm not accomplishing the goal of verifying that my insert didn't somehow go over the limit (in the case of concurrent inserts).

Edit: I found that passing IsolationLevel.ReadUncommitted to the session.BeginTransaction call solves the problem, but I'm no database expert. Is this the appropriate solution to the problem, or should I adjust my logic somehow?
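For reference, that change looks like this (IsolationLevel comes from System.Data):

// Dirty reads: the count query no longer blocks on other transactions' locks,
// but it can also see rows those transactions may later roll back.
using (var session = NhibernateHelper.OpenSession())
using (var transaction = session.BeginTransaction(IsolationLevel.ReadUncommitted))
{
    // ... same body as above ...
}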

asked Aug 04 '09 by Chris

1 Answer

That design would be deadlock prone. Typically (though not always) a single connection is unlikely to deadlock itself, but multiple connections doing inserts and aggregates against the same table are very likely to deadlock. That's because while all the activity in one transaction looks complete from the point of view of the connection doing the work (the db won't lock a transaction out of "its own" records), the aggregate queries from OTHER transactions try to lock the whole table, or large portions of it, at the same time, and those deadlock.

Read Uncommitted is not your friend in this case, because it basically says "ignore locks," which at some point will mean violating the rules you've set up around the data, i.e. the count of records in the table will be inaccurate and you'll act on that inaccurate count. Your count might return 10 or 13 when the real answer is 11.

The best advice I have is to rearrange your insert logic such that you capture the idea of the count, without literally counting the rows. You could go a couple of directions. One idea I have is this: literally number the inserted vouchers with a sequence and enforce a limit on the sequence itself.

  1. Make a sequence table with columns (I am guessing) MallID, nextVoucher, maxVouchers
  2. Seed that table with the mallids, 1, and whatever the limit is for each mall
  3. Change the insert logic to this pseudocode:

Begin transaction
  Sanity-check nextVoucher for the mall in the sequence table; if too many have already been issued, abort
  If nextVoucher is less than maxVouchers for the mall:
    check, fetch, lock and increment nextVoucher
    if the increment was successful, use the value of nextVoucher to perform your insert
    and include it in the target table
Error? Rollback
No error? Commit

A sequence table like this hurts concurrency some, but I think not as much as constantly counting the rows in the voucher table. Be sure to perf test. Also, the [check, fetch, lock and increment] step is important: you have to exclusively lock the row in the sequence table to prevent some other connection from using the same value in the split second before you increment it. I know the SQL syntax for this, but I'm afraid I am no NHibernate expert.
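To make that concrete, here is an untested sketch of what the locked fetch-and-increment could look like through NHibernate's raw SQL interface. The MallSequence table and its MallID/nextVoucher/maxVouchers columns are the hypothetical ones from the list above, mall.ID and voucher.Number are assumed properties, and the UPDLOCK/HOLDLOCK hints are SQL Server specific:

using (var session = NhibernateHelper.OpenSession())
using (var transaction = session.BeginTransaction())
{
    // UPDLOCK + HOLDLOCK takes an exclusive row lock held until commit, so
    // concurrent requests for the same mall serialize here instead of deadlocking
    var next = session.CreateSQLQuery(
            @"SELECT nextVoucher FROM MallSequence WITH (UPDLOCK, HOLDLOCK)
              WHERE MallID = :mallId AND nextVoucher <= maxVouchers")
        .AddScalar("nextVoucher", NHibernateUtil.Int32)
        .SetParameter("mallId", mall.ID)
        .UniqueResult();   // null when the mall is already at its limit

    if (next == null)
    {
        transaction.Rollback();
        throw new VoucherLimitException();
    }

    // Claim the number we just read; the sequence row is still locked by this transaction
    session.CreateSQLQuery(
            "UPDATE MallSequence SET nextVoucher = nextVoucher + 1 WHERE MallID = :mallId")
        .SetParameter("mallId", mall.ID)
        .ExecuteUpdate();

    var voucher = new Voucher();
    // ... set the name/address fields as in the question ...
    voucher.Mall = mall;
    voucher.Number = (int)next;   // hypothetical property storing the sequence value
    session.Save(voucher);

    transaction.Commit();
    return voucher;
}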

For read uncommitted data errors, check this out: http://sqlblog.com/blogs/merrill_aldrich/archive/2009/07/29/transaction-isolation-dirty-reads-deadlocks-demo.aspx (disclaimer: Merrill Aldrich is me :-)

answered Oct 21 '22 by onupdatecascade