
What SQL Server 2005/2008 locking approach should I use to process individual table rows in multiple server application instances?

I need to develop a server application (in C#) that will read rows from a simple table (in SQL Server 2005 or 2008), do some work, such as calling a web service, and then update the rows with the resulting status (success, error).

It looks quite simple, but things get tougher when I add the following application requirements:

  • Multiple application instances must be running at the same time, for Load Balancing and Fault Tolerance purposes. Typically, the application will be deployed on two or more servers, and will concurrently access the same database table. Each table row must be processed only once, so a common synchronization/locking mechanism must be used between multiple application instances.

  • When an application instance is processing a set of rows, other application instances shouldn't have to wait for it to end in order to read a different set of rows waiting to be processed.

  • If an application instance crashes, no manual intervention should need to take place on the table rows that were being processed (such as removing temporary status used for application locking on rows that the crashing instance was processing).

  • The rows should be processed in a queue-like fashion, i.e., the oldest rows should be processed first.

Although these requirements don't look too complex, I'm having trouble coming up with a solution.

I've seen locking hint suggestions, such as XLOCK, UPDLOCK, ROWLOCK, READPAST, etc., but I see no combination of locking hints that will allow me to implement all of these requirements.
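
For example, a combination like the following (a rough sketch; dbo.WorkTable and its columns are just placeholders) lets concurrent instances skip each other's locked rows, but it forces each instance to hold its transaction open for the whole duration of the work, which conflicts with the requirements above:

BEGIN TRAN;

-- Claim the oldest ready row; READPAST makes other instances skip it
-- instead of blocking, and UPDLOCK stops two readers claiming the same row.
SELECT TOP 1 WorkID, Payload
FROM dbo.WorkTable WITH (UPDLOCK, ROWLOCK, READPAST)
WHERE StatusID = 1 -- ready
ORDER BY QueuedDate;

-- ...call the web service here, with the transaction still open...

-- @WorkID: the id returned by the SELECT above.
UPDATE dbo.WorkTable SET StatusID = 3 WHERE WorkID = @WorkID; -- success/error

COMMIT TRAN;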

Thanks for any help.

Regards,

Nuno Guerreiro





2 Answers

This is the typical table-as-queue pattern, as described in Using tables as Queues. You would use a Pending Queue, and the dequeue transaction should also schedule a retry within a reasonable timeout. It is not realistically possible to hold locks for the duration of the web calls. On success, you would remove the pending item.

You also need to be able to dequeue in batches; dequeuing one by one is too slow under serious load (hundreds or thousands of operations per second). So, taking the Pending Queue example from the article linked:

create table PendingQueue (
  id int not null identity(1,1), -- identity, since usp_enqueuePending does not supply it
  DueTime datetime not null,     -- when the item becomes due (or is due for retry)
  Payload varbinary(max),
  constraint pk_pending_id nonclustered primary key(id));

-- Clustered on DueTime so the dequeue scan reads items in due order.
create clustered index cdxPendingQueue on PendingQueue (DueTime);
go

create procedure usp_enqueuePending
  @dueTime datetime,
  @payload varbinary(max)
as
  set nocount on;
  insert into PendingQueue (DueTime, Payload)
    values (@dueTime, @payload);
go

create procedure usp_dequeuePending
  @batchsize int = 100,
  @retryseconds int = 600
as
  set nocount on;
  declare @now datetime;
  set @now = getutcdate();
  -- Atomically claim a batch: push each claimed item's DueTime into the
  -- future so other instances skip it, and return the claimed rows.
  -- READPAST skips rows already locked by concurrent dequeues.
  with cte as (
    select top(@batchsize)
      id,
      DueTime,
      Payload
    from PendingQueue with (rowlock, readpast)
    where DueTime < @now
    order by DueTime)
  update cte
    set DueTime = dateadd(second, @retryseconds, DueTime)
    output deleted.Payload, deleted.id;
go

On successful processing you would remove the item from the queue using the ID. On failure, or on a crash, it will be retried automatically in 10 minutes. One thing you must internalize is that, as long as HTTP does not offer transactional semantics, you will never be able to do this with 100% consistent semantics (e.g. guarantee that no item is processed twice). You can make the margin of error very small, but there will always be a moment when the system can crash after the HTTP call succeeds but before the database is updated, which will cause the same item to be retried, since you cannot distinguish this case from one where the system crashed before the HTTP call.
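
For completeness, the matching completion step (this procedure is not in the linked article; the name usp_completePending is just illustrative) is a single DELETE by id:

create procedure usp_completePending
  @id int
as
  set nocount on;
  -- Remove the item once the web call has succeeded. Until this runs,
  -- the DueTime pushed forward by usp_dequeuePending acts as the
  -- automatic crash-recovery retry.
  delete from PendingQueue where id = @id;
go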


Remus Rusanu


I initially suggested SQL Server Service Broker for this. However, after some research it turns out this is probably not the best way of handling the problem.

What you're left with is the table architecture you've asked for. However, as you've been finding, it is unlikely that you will be able to come up with a solution that meets all the given criteria, due to the great complexity of locking, transactions, and the pressures placed on such a scheme by high concurrency and high transactions per second.

Note: I am currently researching this issue and will get back to you with more later. The following script was my attempt to meet the given requirements. However, it suffers from frequent deadlocks and processes items out of order. Please stay tuned, and in the meantime consider a destructive reads method (DELETE with OUTPUT or OUTPUT INTO).
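
For reference, a destructive read might look like the sketch below (assuming the same dbo.QueueTable columns as the script that follows). The trade-off is that a crash after the DELETE commits loses the item, so the application would have to re-insert on failure:

-- Destructive read: claim and remove the oldest ready row in one atomic
-- statement; OUTPUT hands the deleted row back to the caller.
WITH X AS (
   SELECT TOP 1 *
   FROM dbo.QueueTable WITH (ROWLOCK, READPAST)
   WHERE StatusID = 1 -- ready
   ORDER BY QueuedDate, QueueID
)
DELETE FROM X
OUTPUT Deleted.*;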

SET XACT_ABORT ON; -- roll back the whole tran on any error
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRAN;

-- Claim the oldest ready row; READPAST skips rows locked by other instances.
UPDATE X
SET X.StatusID = 2 -- in process
OUTPUT Inserted.*
FROM (
   SELECT TOP 1 * FROM dbo.QueueTable WITH (READPAST, ROWLOCK)
   WHERE StatusID = 1 -- ready
   ORDER BY QueuedDate, QueueID -- QueueID breaks ties between items with the same date
) X;

-- Do work in application, holding the tran open.

DECLARE @QueueID int; -- populated by the application from the recordset OUTPUT above
DELETE dbo.QueueTable WHERE QueueID = @QueueID;

COMMIT TRAN;

In the case of several/many rows being locked at once by a single client, there is a possibility of the row locks escalating to a table lock (SQL Server escalates directly from rows to the table, or to the partition on SQL Server 2008), so be aware of that. Also, holding long-running transactions that maintain locks is normally a big no-no. It may work in this special usage case, but I fear that high TPS from multiple clients will make the system break down. Note that, normally, the only processes querying your queue table should be those doing queue work. Any processes doing reporting should use READ UNCOMMITTED or WITH (NOLOCK) to avoid interfering with the queue in any way.
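
If escalation does become a problem, SQL Server 2008 lets you disable it per table (this option does not exist in SQL Server 2005):

-- SQL Server 2008+ only: keep row locks on the queue table from
-- escalating to a table lock.
ALTER TABLE dbo.QueueTable SET (LOCK_ESCALATION = DISABLE);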

What is the implication of rows being processed out of order? If an application instance crashes while other instances keep completing rows, the rows the crashed instance had claimed will finish late, so the processing order will not be strictly preserved.

If the transaction/locking method above is not to your satisfaction, another way to handle your application crashing would be to give your instances names, then set up a monitor process that periodically checks whether those named instances are running. When a named instance starts up, it would always reset any unprocessed rows that carry its instance identifier (something as simple as "instance A" and "instance B" would work). Additionally, when the monitor process finds that an instance is not running, it would reset that instance's rows so the other instances can take them over (a sketch follows below). There would be a small lag between crash and recovery, but with proper architecture it could be quite reasonable.
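
A sketch of that recovery step, assuming hypothetical StatusID and InstanceName columns on the queue table (the procedure name is also just illustrative):

CREATE PROCEDURE usp_ReleaseInstanceRows
   @InstanceName sysname
AS
   SET NOCOUNT ON;
   -- Release rows claimed by a dead (or restarting) instance so the
   -- surviving instances can pick them up again.
   UPDATE dbo.QueueTable
   SET StatusID = 1,        -- back to 'ready'
       InstanceName = NULL
   WHERE InstanceName = @InstanceName
     AND StatusID = 2;      -- 'in process'
GO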

Note: The following links should be edifying:

  • info about XLOCK
  • Tables as Queues

ErikE