 

Sql Server Service Broker: How to structure Conversations for a simple queue scenario?

I'm a Sql Server Service Broker novice and I'm trying to grasp the best way to set Service Broker up for a (seemingly) simple use case: I want to create a simple work queue, where one application drops work items into the queue, and a separate application picks up work items from that queue and processes them. There is no need for the first application to get status messages back from the second. I want the queue to live in a single Sql Server instance.

What most confuses me is how conversations/dialogs relate to this situation. I know you can only send/receive messages in the context of a conversation/dialog, but because there is no back-and-forth chatter between the two applications, I'm not sure when the correct time to create a new conversation is. The two extreme alternatives seem to be:

  • Each time I enqueue a work item, I begin a new conversation. So each conversation ends up having exactly one message in it.
  • At deployment time, I manually create a single infinite-lifespan conversation. When it's time to enqueue a work item, I always send it as part of that single conversation.

What would the consequences of going either of these routes be?

Also, in the first case, it seems like I need to do some END CONVERSATIONs so that Sql Server can clean up its internal resources. Is there any guidance on where the correct place to put these would be? (Or might it be better to simply rely on the conversations timing out eventually?)

asked Aug 12 '09 by Chris

2 Answers

You should start with each work item on its own conversation. The producer (initiator) begins a dialog and sends the message describing the work item, then commits. The consumer (target) receives the message (or gets activated), inspects the payload to understand the work item details, executes the work, then ends the dialog and commits. The resulting EndDialog message is sent back to the initiator service's queue, and an activated procedure on the initiator queue responds to it by ending the dialog on the initiator side.
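
For concreteness, here is a minimal T-SQL sketch of that pattern. All of the object names (WorkItemMsg, WorkItemContract, InitiatorService, TargetService, and so on) are illustrative, not something Service Broker prescribes:

-- One-time setup: a message type, contract, queues and services (all names illustrative)
CREATE MESSAGE TYPE WorkItemMsg VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT WorkItemContract (WorkItemMsg SENT BY INITIATOR);
CREATE QUEUE dbo.InitiatorQueue;
CREATE QUEUE dbo.TargetQueue;
CREATE SERVICE InitiatorService ON QUEUE dbo.InitiatorQueue;
CREATE SERVICE TargetService ON QUEUE dbo.TargetQueue (WorkItemContract);

-- Producer: one work item = one dialog
DECLARE @h UNIQUEIDENTIFIER;
BEGIN TRANSACTION;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE InitiatorService
    TO SERVICE 'TargetService'
    ON CONTRACT WorkItemContract
    WITH ENCRYPTION = OFF;   -- avoids dialog-security setup for a single-instance queue
SEND ON CONVERSATION @h
    MESSAGE TYPE WorkItemMsg (N'<workitem id="42"/>');
COMMIT;

-- Consumer: receive, do the work, end the dialog, then commit
DECLARE @ch UNIQUEIDENTIFIER, @msgType SYSNAME, @body VARBINARY(MAX);
BEGIN TRANSACTION;
WAITFOR (
    RECEIVE TOP (1)
        @ch      = conversation_handle,
        @msgType = message_type_name,
        @body    = message_body
    FROM dbo.TargetQueue
), TIMEOUT 5000;

IF @msgType = N'WorkItemMsg'
BEGIN
    -- ... do the actual work described by @body here ...
    END CONVERSATION @ch;   -- sends EndDialog back to InitiatorQueue
END
ELSE IF @msgType = N'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
BEGIN
    END CONVERSATION @ch;   -- clean up errored dialogs as well
END
COMMIT;

-- A small activated procedure on dbo.InitiatorQueue should then RECEIVE the
-- EndDialog (or Error) messages that come back and call END CONVERSATION there too.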

This is the simplest deployment, and getting it up and running will ensure you have a sound foundation to build upon. Don't cut corners by ending the dialog on the initiator side as soon as the producer enqueues the work item; that is fire-and-forget and has several drawbacks.

If you have high performance requirements (over 200 requests per second) then you'll have to start managing the conversations more explicitly. I have a blog entry on reusing conversations for performance reasons. On the receive side I recommend reading Writing Service Broker Procedures.
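
Very roughly, the idea in the conversation-reuse post is to keep a longer-lived dialog (for example, one per sending session) and SEND on it instead of beginning a new dialog for every work item. A simplified sketch, reusing the illustrative names from the sketch above and leaving out the handle expiry and error handling a real implementation needs:

-- Sketch only: one reusable dialog per sending session (table name is made up)
CREATE TABLE dbo.SessionConversations (
    Spid   INT NOT NULL PRIMARY KEY,
    Handle UNIQUEIDENTIFIER NOT NULL);

DECLARE @h UNIQUEIDENTIFIER;
SELECT @h = Handle FROM dbo.SessionConversations WHERE Spid = @@SPID;

IF @h IS NULL
BEGIN
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE InitiatorService
        TO SERVICE 'TargetService'
        ON CONTRACT WorkItemContract
        WITH ENCRYPTION = OFF;
    INSERT INTO dbo.SessionConversations (Spid, Handle) VALUES (@@SPID, @h);
END

SEND ON CONVERSATION @h
    MESSAGE TYPE WorkItemMsg (N'<workitem id="43"/>');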

I also have a blog entry that pretty much does what you need, although it does not schedule work items but instead launches a custom procedure: Asynchronous procedure execution.

If you decide to consume the work items from an activated context, thus leveraging the nice self-balancing capabilities of activation, then you need to understand the EXECUTE AS context under which activation occurs.
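
For reference, the activation settings, including the EXECUTE AS context, are declared on the queue itself. Something along these lines, with dbo.ProcessWorkItems standing in for whatever procedure actually does the RECEIVEing (again, the names are illustrative):

-- Attach an activated procedure to the target queue. EXECUTE AS controls the
-- security context the procedure runs under, since no user session is behind it.
ALTER QUEUE dbo.TargetQueue
    WITH ACTIVATION (
        STATUS = ON,
        PROCEDURE_NAME = dbo.ProcessWorkItems,
        MAX_QUEUE_READERS = 4,   -- upper bound on concurrently activated readers
        EXECUTE AS OWNER);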

answered Nov 06 '22 by Remus Rusanu


I really like Remus' answer, though it doesn't especially touch on why you might prefer starting a separate conversation per work item, rather than putting all work items in a single conversation. Two notes related to this:

First, putting all work items into a single conversation will probably cause concurrency problems if you have multiple threads/processes processing work items. Service broker worker processes tend to look like this (in pseudocode):

begin transaction
receive top n work items from queue
process work items
commit transaction

(By not committing until work items are successfully processed, you ensure, for example, that if your process dies then the work items it has received but not yet processed won't get dropped from the queue.)
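
In T-SQL, that loop body might look roughly like this (the queue name and batch size are illustrative):

-- Everything from the RECEIVE to the end of processing is one transaction
BEGIN TRANSACTION;

DECLARE @msgs TABLE (
    conversation_handle UNIQUEIDENTIFIER,
    message_type_name   SYSNAME,
    message_body        VARBINARY(MAX));

WAITFOR (
    RECEIVE TOP (10) conversation_handle, message_type_name, message_body
    FROM dbo.TargetQueue
    INTO @msgs
), TIMEOUT 5000;

-- ... process each received work item, issuing END CONVERSATION where appropriate ...

COMMIT TRANSACTION;   -- only now do the received messages actually leave the queue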

The concurrency problem would arise because service broker is programmed such that each RECEIVE command acquires an exclusive read lock on all messages in the queue that share the same conversation (or conversation group) as the ones that were RECEIVEd. That lock is held until the transaction is committed. (See Conversation Group Locks.) So if all work items in the queue are in a single conversation, then while one worker process is in the "process work items" step, no other worker processes can be doing any work.

A second issue with putting lots of items into a single conversation is that it increases the number of work items you might lose or have to reprocess under certain error conditions. To describe this properly I defer to Remus; see his Recycling Conversations post, especially the part that says "reusing a single dialog to send all your messages [...] is like putting all your eggs in one basket." You might be able to recover from some of these error situations, but doing so will probably add more complexity to your code.

There are probably a few more arguments to be made against using a single conversation for all work items, but I'm not as familiar with them.

This is not to say the correct solution is always to start a separate conversation for each and every work item. Having read through Remus' posts, though, I think his advice is sound: start with one work item per conversation, and then add complexity if required. (But probably in no case should you go to the extreme of putting all messages in a single conversation.)

answered Nov 06 '22 by Chris