Azure Queues Creation Best Practice and Scale

I'm looking into an architecture for a high-scale, high-performance web application (theoretical at this point) on Windows Azure, and I wanted to pick your brains about Windows Azure Queues (not Service Bus) and how best to create and scale them.

I'm basically looking at an MVC front end (Web Role), Windows Azure Queues (async message decoupling), a Worker Role, and a back-end SQL DB.

My understanding is that the Web Role receives a request and passes it as a message to a queue; the Worker Role polls the queue, does the work (e.g. a SQL DB CRUD operation), and sends back a completion message.
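Roughly, the flow I have in mind looks like this (just a sketch using the Azure.Storage.Queues .NET SDK; the connection string and the "orders" queue name are placeholders):

```csharp
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

// Web Role: enqueue the incoming work item.
var orders = new QueueClient("<storage-connection-string>", "orders");
orders.CreateIfNotExists();
orders.SendMessage("{ \"orderId\": 42, \"action\": \"create\" }");

// Worker Role: poll the queue, do the SQL CRUD work, then delete the message.
foreach (QueueMessage msg in orders.ReceiveMessages(maxMessages: 16).Value)
{
    // ... perform the DB operation described by msg.MessageText ...
    orders.DeleteMessage(msg.MessageId, msg.PopReceipt);
}
```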

What is the best way to handle Windows Azure queue creation for scale, and what is the best way to pass messages back and forth between the Web Role and the Worker Role? Is it best to have one queue for sending work (e.g. orders) and another queue for notifications (e.g. status messages)?

I have seen a lot of posts saying that you should create your queues outside of your application code, which makes sense, but how do you scale this given the current limitation that the scalability target for a single Windows Azure queue is constrained to 500 transactions/sec?

MSDN has some great resources about scaling via queues (a configuration sketch for the first option follows this excerpt):

• Role instance scaling refers to adding and removing additional web or worker role instances to handle the point-in-time workload. This often includes changing the instance count in the service configuration. Increasing the instance count will cause Windows Azure runtime to start new instances whereas decreasing the instance count will in turn cause it to shut down running instances.

• Process (thread) scaling refers to maintaining sufficient capacity in terms of processing threads in a given role instance by tuning the number of threads up and down depending on the current workload.
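For reference, role instance scaling mostly comes down to changing the Instances count in the ServiceConfiguration.cscfg (the role names below are placeholders):

```xml
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <!-- Bump these counts (by redeploying the config or via the Service Management API)
       to scale each role out or in independently. -->
  <Role name="WebRole1">
    <Instances count="2" />
  </Role>
  <Role name="WorkerRole1">
    <Instances count="6" />
  </Role>
</ServiceConfiguration>
```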

In short I’m looking for answers to the below questions:

  1. Queue creation best practice?
  2. How does the Web Role (MVC app) keep track of messages? Does it poll a 'notification' queue once it has handed the work off? What is the best way to handle message correlation in the Web Role so the result can be sent back to the client (web consumer)?
  3. Are the scaling options above the best approach, or should I be looking at dynamically creating new queues on the fly to circumvent the 500 transactions/sec limitation (and if so, how? I suspect Service Bus Queues might be better suited to that)?

As stated, my questions are more theoretical at this point in time; I want to architect the solution well and future-proof it going forward.

Thanks

asked Aug 20 '12 by user1610809

2 Answers

For the question of how you track responses, I usually do the following (a sketch follows this list):

  1. Web role writes a message to the queue that includes a job ID.
  2. Worker role does the work and writes the result to a table, using the job ID as the partition key.
  3. Web role polls for that row of the table; once the row exists, the answer has come back.
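Here's a rough sketch of that flow using the current Azure.Storage.Queues and Azure.Data.Tables SDKs (queue/table names, the message format, and the naive polling loop are purely illustrative):

```csharp
using System;
using System.Linq;
using System.Threading;
using Azure.Data.Tables;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

string conn = "<storage-connection-string>";
var jobs    = new QueueClient(conn, "jobs");
var results = new TableClient(conn, "jobresults");
jobs.CreateIfNotExists();
results.CreateIfNotExists();

// 1. Web role: enqueue the work tagged with a job ID.
string jobId = Guid.NewGuid().ToString("N");
jobs.SendMessage($"{jobId}|create-order");

// 2. Worker role: pull the message, do the work, then write the result
//    to a table row whose partition key is the job ID.
QueueMessage msg = jobs.ReceiveMessage().Value;   // assume a message is waiting
string id = msg.MessageText.Split('|')[0];
// ... do the actual work here ...
results.AddEntity(new TableEntity(id, "result") { ["Status"] = "Done" });
jobs.DeleteMessage(msg.MessageId, msg.PopReceipt);

// 3. Web role: poll for the row keyed by its job ID; when it appears, the work is done.
while (!results.Query<TableEntity>($"PartitionKey eq '{jobId}'").Any())
{
    Thread.Sleep(500);   // back off between polls
}
```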

Then again, if this is a user-interactive thing, where the user is actually waiting on an open HTTP connection to get the result, you may be better off just using synchronous communication and skipping the queue.

If I were you, I'd at least take a look at using something like 0MQ (ZeroMQ, or its new fork Crossroads I/O), which gives you nice abstractions on top of raw sockets that can be useful for this sort of thing. E.g., web servers send messages via PUSH sockets, worker roles receive via PULL sockets, worker roles publish responses via PUB sockets, and web roles receive the responses via SUB sockets. The PUSH/PULL does load balancing, and the PUB/SUB handles getting the message back to the waiting web role. FYI, this is exactly the messaging architecture and technology used by Mongrel2.
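To give a feel for it, here is a sketch using NetMQ, a native C# port of ZeroMQ (the library choice, ports, and job ID are illustrative, and everything runs in one process only to keep the sketch self-contained; in reality the web and worker sockets live in separate roles):

```csharp
using System.Threading;
using NetMQ;
using NetMQ.Sockets;

// Worker side: PULL work in, PUBlish results out.
using var pub  = new PublisherSocket();
pub.Bind("tcp://*:5558");
using var pull = new PullSocket();
pull.Connect("tcp://localhost:5557");

// Web side: PUSH work out, SUBscribe to results for its own job ID.
using var push = new PushSocket();
push.Bind("tcp://*:5557");
using var sub  = new SubscriberSocket();
sub.Connect("tcp://localhost:5558");
sub.Subscribe("job-42");
Thread.Sleep(500);   // single-process demo only: let the subscription propagate

// Web role sends a job; PUSH/PULL load-balances across however many workers connect.
push.SendFrame("job-42|create-order");

// Worker picks it up, does the work, and publishes the result under the job-ID topic.
string work = pull.ReceiveFrameString();
pub.SendMoreFrame("job-42").SendFrame("done");

// The web role's subscription routes the reply back to the instance that asked.
string topic  = sub.ReceiveFrameString();
string result = sub.ReceiveFrameString();
```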

In terms of getting beyond 500 operations per second, you can just create multiple queues and spray messages across them at random. (Each worker would typically poll all of them.) But keep in mind that a single storage account has a limit of 5,000 operations per second, so once you've created 10 queues, you'll need a new storage account to get more guaranteed scalability.
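For example (a sketch only; the queue-naming scheme, the random spread, and the simple round of polling are placeholders for whatever partitioning you choose):

```csharp
using System;
using System.Linq;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

string conn = "<storage-connection-string>";
int queueCount = 10;   // ~10 x 500 ops/sec before the storage-account ceiling matters

// Create the partitioned queues up front (orders-0 .. orders-9).
QueueClient[] queues = Enumerable.Range(0, queueCount)
    .Select(i => new QueueClient(conn, $"orders-{i}"))
    .ToArray();
foreach (var q in queues) q.CreateIfNotExists();

// Producer (web role): spray each message at a randomly chosen queue.
var rng = new Random();
queues[rng.Next(queueCount)].SendMessage("some work item");

// Consumer (worker role): each worker polls every queue in turn.
foreach (var q in queues)
{
    QueueMessage msg = q.ReceiveMessage().Value;
    if (msg == null) continue;            // nothing waiting on this queue right now
    // ... process msg.MessageText ...
    q.DeleteMessage(msg.MessageId, msg.PopReceipt);
}
```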

answered Sep 22 '22 by user94559

I would recommend that you consider using this pattern IF you want a fast, synchronous (and scalable) solution.

[architecture diagram: web and worker roles decoupled by queues; see the pattern link below]

The reason for using that pattern is that the queues let you scale the web and worker roles independently. It does mean quite a few queues, but it means you could, for example, run 2 web roles and scale from 6 up to 8 worker roles when you needed to.

I imagine this could be faster than polling SQL and it would be more resilient than not using a queue.

More details are here: https://msdn.microsoft.com/en-us/library/dn589781.aspx

answered Sep 21 '22 by Dessus