 

Application level queueing / website traffic management


We have a web application that we expect to receive incredibly high traffic at several points throughout the year. We currently have some third-party load-balancing software which redirects users to a 'holding' page during busy periods, to prevent our web application servers from being overwhelmed by the volume of incoming requests.

Going forward we would like more control over this process and to implement a virtual queue of some kind. The current load balancer has no queueing functionality; it simply admits traffic based on a rate limit. Whether you get through is pot luck, depending on when you refresh the page (or get auto-refreshed).

I've done some reading online but found little detail on how to implement a basic virtual HTTP request queue. There are of course companies that offer this as a fully-fledged service, such as Queue-it and Netprecept, but these seem overkill for our current needs (and are very expensive).

The web application in question is written in ASP.NET MVC. Bearing in mind that we do not need advanced features like 'queue priority' etc. at the moment, I have created a very basic proof-of-concept using a static queue manager class built around ConcurrentQueue<T>, but I am wondering: is this a valid, scalable approach? Can it be part of the main application layer, or should it be kept separate? Does anyone have any technical know-how on implementing this kind of feature in an ASP.NET MVC app?
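
To make the question concrete, here is a rough sketch of the proof-of-concept. All names and the capacity threshold are illustrative only, and the capacity check is approximate under heavy concurrency:

using System;
using System.Collections.Concurrent;
using System.Threading;

// Visitors join a FIFO queue and are admitted while the number of
// active visitors is below a fixed threshold.
public static class QueueManager
{
    private const int MaxActive = 500; // illustrative capacity
    private static int _active;
    private static readonly ConcurrentQueue<Guid> Waiting = new ConcurrentQueue<Guid>();
    private static readonly ConcurrentDictionary<Guid, bool> Admitted =
        new ConcurrentDictionary<Guid, bool>();

    // Called once when a visitor first lands on the holding page.
    public static void Join(Guid visitorId)
    {
        Waiting.Enqueue(visitorId);
    }

    // Polled by the holding page; admits visitors in FIFO order while
    // capacity remains.
    public static bool TryAdmit(Guid visitorId)
    {
        if (Admitted.ContainsKey(visitorId))
            return true; // already through

        Guid head;
        if (Waiting.TryPeek(out head) && head == visitorId
            && Thread.VolatileRead(ref _active) < MaxActive
            && Waiting.TryDequeue(out head))
        {
            Interlocked.Increment(ref _active);
            Admitted[visitorId] = true;
            return true;
        }
        return false;
    }

    // Called when an admitted visitor completes their order or their session ends.
    public static void Leave(Guid visitorId)
    {
        bool removed;
        if (Admitted.TryRemove(visitorId, out removed))
            Interlocked.Decrement(ref _active);
    }
}

A global action filter (or HTTP module) would call TryAdmit on every request and redirect to the holding page whenever it returns false.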


EDIT: thanks for the answers so far. Most of the answers go into a lot of detail about caching. This is already (very) heavily employed on our website, using ASP.NET web caching, full-page caching at the load balancer level, and object caching using AppFabric.

The reason for wanting to manage a queue is that the process is very database-write heavy. We're effectively creating orders for a certain product via the website, which means these DB transactions have to take things like last-minute stock checks into account. This is where the performance issues arise, and it is the reason for wanting to implement a queueing system of some kind.

Throwing more resources at the database server is not a realistic option. I'm really looking for details of technical implementations of a queuing system of this nature (C# or otherwise). Sorry if this wasn't made clear originally.

asked Jul 15 '13 by harman_kardon




2 Answers

Are you considering the following points while measuring the performance of your application?

  1. Caching
  2. Sessionless controllers
  3. Async controllers

Output caching:

Perhaps the most useful feature of MVC3 (performance-wise) is output caching. The biggest performance hits occur when your application really has to fetch data, do calculations on it and return the data. Output caching can cache these results so they can be returned directly without even touching the database. Especially when executing complex queries this can drop the load on your server significantly (in fact, you could drop the load on your server by a whopping 90% by carefully implementing caching in your web application).

using System;
using System.Web.Mvc;

namespace MvcApplication1.Controllers
{
    public class DataController : Controller
    {
        // Cache the rendered output for 10 seconds; within that window,
        // repeated requests are served from the cache without re-running the action.
        [OutputCache(Duration = 10)]
        public string Index()
        {
            return DateTime.Now.ToString("T");
        }
    }
}
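
For actions that take parameters, the cached output can also be varied per parameter value. A minimal sketch (the data-access call is a hypothetical stand-in):

using System.Web.Mvc;

public class ProductController : Controller
{
    // One cache entry per product id, kept for 60 seconds: repeated
    // requests for the same id never reach the database in that window.
    [OutputCache(Duration = 60, VaryByParam = "id")]
    public ActionResult Details(int id)
    {
        var product = LoadProduct(id); // hypothetical repository call
        return View(product);
    }

    private object LoadProduct(int id)
    {
        // Replace with your real data access (ORM, repository, etc.).
        return new { Id = id, Name = "Sample product" };
    }
}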

Sessionless controllers:

Controllers with session state disabled provide an optimization for controllers that do not require session state. Stateless controllers are meant for situations where you do not require the concept of a session.

By default the ASP.NET pipeline will not process requests belonging to the same session concurrently. It serialises them, i.e. it queues them in the order that they were received so that they are processed serially rather than in parallel. This means that if a request is in progress and another request from the same session arrives, it will be queued to only begin executing when the first request has finished.

Let's look at an example: a page making 3 asynchronous AJAX requests to the server, with session state enabled (note that session state must actually be used, as ASP.NET is smart enough not to serialise requests if you never use it, even when it's enabled).

jQuery

$(document).ready(function () {
    //Make 3 concurrent requests to /ajaxtest/test
    for (var i = 0; i < 3; i++) {       
        $.post("/ajaxtest/test/" + i,
            function (data) {                       
                //Do something with data...
            },
            "json");
    }
});

Controller - Action Method

using System.Threading;
using System.Web.Mvc;

public class AjaxTestController : Controller
{
    [HttpPost]
    public JsonResult Test(int? id)
    {
        Thread.Sleep(500);       // simulate 500ms of server-side work
        return Json(new { id }); // placeholder payload
    }
}

[Image: network profile showing the three AJAX requests completing one after another, roughly 500ms apart]

You can see the effect of serialised requests in the network profile; each request takes roughly 500ms longer than the previous one. So it means we're not getting any benefit from making these AJAX calls asynchronously. Let's look at the profile again with session state disabled for our AjaxTestController (using the [SessionState] attribute).

using System.Web.Mvc;
using System.Web.SessionState;

// With session state disabled, requests from the same session can run in parallel.
[SessionState(SessionStateBehavior.Disabled)]
public class AjaxTestController : Controller
{
    //...As above
}

[Image: network profile showing the three AJAX requests completing in parallel, about 500ms in total]

Much better! You can see how the 3 requests are processed in parallel and take a total of 500ms to complete, rather than the 1500ms we saw in our first example.

Async controllers:

First, the controller begins one or more external I/O calls (e.g., SQL database calls or web service calls). Without waiting for them to complete, it releases the thread back into the ASP.NET worker thread pool so that it can deal with other requests.

Later, when all of the external I/O calls have completed, the underlying ASP.NET platform grabs another free worker thread from the pool, reattaches it to your original HTTP context, and lets it complete handling the original request.
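
A minimal sketch of the pattern, using the Task-based style available from MVC 4 onward (Task.Delay stands in for the external I/O call; in MVC 3 the same idea is expressed with AsyncController and XxxAsync/XxxCompleted method pairs):

using System.Threading.Tasks;
using System.Web.Mvc;

public class ReportController : Controller
{
    // While the awaited I/O is in flight, the worker thread is returned
    // to the pool; a thread is reattached only when the work completes.
    public async Task<ActionResult> Index()
    {
        await Task.Delay(2000); // stand-in for a SQL or web service call
        return Content("Report ready");
    }
}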

[Image: diagram of the asynchronous request lifecycle: the worker thread is released during I/O and another thread completes the request]

How to measure the response time under heavy traffic?

I copied the content below from this link, because links sometimes break and I wanted to keep the important parts here. Please check the link for more details.

To understand how asynchronous controllers respond to differing levels of traffic, and how this compares to a straightforward synchronous controller, you can create a sample MVC application with two controllers. To simulate a long-running external call, they both perform a SQL query that takes 2 seconds to complete (using the SQL command WAITFOR DELAY '00:00:02') and then return the same fixed text to the browser. One of them does it synchronously; the other asynchronously.

In another example you can check a simple C# console application that simulates heavy traffic hitting a given URL. It simply requests the same URL over and over, calculating the rolling average of the last few response times. First it does so on just one thread, but then gradually increases the number of concurrent threads to 150 over a 30-minute period. If you want to try running this tool against your own site, you can download the C# source code.
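
A simplified sketch of such a tool (the URL is a placeholder, and where the original computed a rolling average, this version just prints each response time):

using System;
using System.Diagnostics;
using System.Net;
using System.Threading;

class LoadTester
{
    static void Main()
    {
        const string url = "http://localhost:8080/sample"; // placeholder target
        ServicePointManager.DefaultConnectionLimit = 200;  // avoid client-side throttling

        // Ramp up from 1 to 150 concurrent threads over roughly 30 minutes.
        for (int i = 0; i < 150; i++)
        {
            var worker = new Thread(() =>
            {
                var client = new WebClient(); // one client per thread
                while (true)
                {
                    var timer = Stopwatch.StartNew();
                    client.DownloadString(url);
                    Console.WriteLine("{0} ms", timer.ElapsedMilliseconds);
                }
            }) { IsBackground = true };
            worker.Start();
            Thread.Sleep(12000); // 150 threads x 12 s is about 30 minutes
        }
        Console.ReadLine();
    }
}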

The results illustrate a number of points about how asynchronous requests perform. Check out this graph of average response times versus number of concurrent requests (lower response times are better):

[Image: graph of average response time versus number of concurrent requests, for the synchronous and asynchronous controllers]

To understand this, first I need to tell you that I had set my ASP.NET MVC application’s worker thread pool to an artificially low maximum limit of 50 worker threads. My server actually has a default max threadpool size of 200 – a more sensible limit – but the results are made clearer if I reduce it. As you can see, the synchronous and asynchronous requests performed exactly the same as long as there were enough worker threads to go around. And why shouldn’t they? But once the threadpool was exhausted (> 50 clients), the synchronous requests had to form a queue to be serviced. Basic queuing theory tells us that the average time spent waiting in a queue is given by the formula:

average queue wait time = (average queue length) × (average service time)

and this is exactly what we see in the graph. The queuing time grows linearly with the length of the queue. (Apologies for my indulgence in using a formula – sometimes I just can’t suppress my inner mathematician. I’ll get therapy if it becomes a problem.) The asynchronous requests didn’t need to start queuing so soon, though. They don’t need to block a worker thread while waiting, so the threadpool limit wasn’t an issue. So why did they start queuing when there were more than 100 clients? It’s because the ADO.NET connection pool is limited to 100 concurrent connections by default.
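
If the connection pool rather than the thread pool is the bottleneck, its size can be raised through the connection string. A small sketch (server and database names are placeholders); the Max Pool Size keyword defaults to 100:

using System.Data.SqlClient;

class ConnectionFactory
{
    // Raising Max Pool Size allows more concurrent requests to hold
    // a pooled connection at once.
    private const string ConnectionString =
        "Server=myServer;Database=myDb;Integrated Security=true;Max Pool Size=200";

    public static SqlConnection Create()
    {
        return new SqlConnection(ConnectionString);
    }
}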

Hope this helps you.

answered by Imad Alazani


IIS has its own queues, which you can configure for each application pool. There are useful tips on performance in the link here. The per-pool queue length can also be adjusted programmatically, as sketched below.
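
A minimal sketch using the Microsoft.Web.Administration API (the pool name and value are illustrative; queueLength is the number of requests HTTP.sys will hold for the pool before rejecting new ones, and defaults to 1000):

using Microsoft.Web.Administration;

class PoolConfig
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            var pool = serverManager.ApplicationPools["MyAppPool"]; // illustrative name
            pool.QueueLength = 5000; // default is 1000
            serverManager.CommitChanges();
        }
    }
}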

I wouldn't recommend mixing application code with code that deals with low-level performance optimisations. Keep them separate, as it will keep your code maintainable.

answered by Alexandr Mihalciuc