 

Handling Async Request in ASP.NET MVC

I have an ASP.NET MVC 3 application that handles time-consuming processes (copying a large file from the network). What we want to do is:

  1. User clicks a button to post the form to trigger the process
  2. Application starts a new thread to start copying the file
  3. Application shows a message saying the file-copying process has begun
  4. User can close the browser while the copying proceeds and finishes in the background.

The idea is that the user doesn't need any confirmation of the progress of the process, nor to be notified when the process has completed.

We currently let the controller trigger an event in a Windows Service, and use the Windows Service to do the actual work. I'm wondering if there's a better/cleaner way to do this?

Jim asked Mar 28 '12 08:03

3 Answers

You could use System.Threading.Tasks.Task, calling the Task.Factory.StartNew method with an Action delegate.

Using those tools your controller would look something like this:

[HttpPost]
public ActionResult DoSomethingLongRunning()
{
   if (ModelState.IsValid)
   {
       Task.Factory.StartNew(() => 
                   fileCopier.CopyFile(CopyFileParameter1, CopyFileParameter2));

       return RedirectToAction("View Indicating Long Running Progress");
   }
   else
   {
        // there is something wrong with the Post, handle it
        return View("Post fallback view");
   }
}
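One caveat with fire-and-forget StartNew: if the copy throws, nothing in the request observes the exception, so the failure can go completely unnoticed. A minimal sketch of attaching a fault-handling continuation so the error is at least recorded — the CopyJob class, the CopyFile stub, and its error message are hypothetical stand-ins for the fileCopier used above:

```csharp
using System;
using System.Threading.Tasks;

public static class CopyJob
{
    public static volatile string LastError;

    // Hypothetical stand-in for fileCopier.CopyFile; always fails here
    // so the fault path is visible.
    public static void CopyFile(string source, string destination)
    {
        throw new InvalidOperationException("network share unavailable");
    }

    // Fire-and-forget with a fault handler, so errors don't vanish.
    public static Task Start(string source, string destination)
    {
        return Task.Factory.StartNew(() => CopyFile(source, destination))
            .ContinueWith(
                t => LastError = t.Exception.InnerException.Message,
                TaskContinuationOptions.OnlyOnFaulted);
    }
}
```

In a real controller you would log to a file or event log rather than a static field; the point is only that the continuation gives the background failure somewhere to go.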

Another option is to use System.Reactive.Concurrency and the IScheduler interface, with TaskPoolScheduler as the concrete implementation, to perform the action (possibly injected into the controller constructor).

[HttpPost]
public ActionResult DoSomethingLongRunning()
{
   if (ModelState.IsValid)
   {
       // scheduler is the injected IScheduler implementation
       scheduler.Schedule(() =>
           fileCopier.CopyFile(CopyFileParameter1, CopyFileParameter2));

       return RedirectToAction("View Indicating Long Running Progress");
   }
   else
   {
        // there is something wrong with the Post, handle it
        return View("Post fallback view");
   }
}

As a benefit, if you do it this way you can use TestScheduler as the implementation of the interface when you are unit testing.
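To illustrate that last point, here is a sketch of how a test might drive the scheduler deterministically. It assumes the Microsoft.Reactive.Testing NuGet package (which provides TestScheduler) is referenced; the demo class and the flag are made up for the example:

```csharp
using System;
using System.Reactive.Concurrency;   // Scheduler.Schedule(Action) extension
using Microsoft.Reactive.Testing;    // TestScheduler (Rx testing package)

public static class TestSchedulerDemo
{
    // Returns (ranBeforeAdvance, ranAfterAdvance) to show that work
    // scheduled on a TestScheduler only runs when virtual time moves.
    public static Tuple<bool, bool> Run()
    {
        var copied = false;
        var scheduler = new TestScheduler();

        // In production the injected IScheduler would run
        // fileCopier.CopyFile(...); here it just flips a flag.
        scheduler.Schedule(() => copied = true);

        var before = copied;       // still false: no virtual time has passed
        scheduler.AdvanceBy(1);    // advance the virtual clock; the action runs
        return Tuple.Create(before, copied);
    }
}
```

Because the "background" work runs synchronously when the test advances the clock, the unit test never has to sleep or poll for the copy to finish.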

AlexC answered Oct 06 '22 00:10


I think having the Windows Service host your long-running process is the best bet. If it's hosted by IIS, you always risk the app pool it's running in being killed for inactivity.

One related possibility is to host a WCF service inside your Windows Service, and provide an external HTTP or other endpoint for the service. That way your web interface can call a "start" contract method of your WCF service, and potentially other methods if need be.
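As a sketch of what that "start" contract might look like — the interface and parameter names here are illustrative, not taken from the original service:

```csharp
using System.ServiceModel;

// The MVC controller calls StartCopy over an HTTP (or other) endpoint;
// the Windows Service hosts the implementation and does the
// long-running work.
[ServiceContract]
public interface IFileCopyService
{
    // One-way: the web app fires the request and returns immediately,
    // without waiting for the copy to finish.
    [OperationContract(IsOneWay = true)]
    void StartCopy(string sourcePath, string destinationPath);
}
```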

McGarnagle answered Oct 05 '22 23:10


I'd post a request to a WCF MSMQ service, hosted in IIS 7 WAS. There is an excellent article describing how to set it up.

Long-running tasks that use external resources have a high risk of failure. The biggest mistake developers make is to assume that hardware and networks have unlimited capacity and are highly reliable. They often aren't.

It may be problematic, even catastrophic, if the long-running process is interrupted by a momentary loss of network connectivity or by the remote server being rebooted. If the process includes additional work, such as unzipping or analyzing the file, you run a further risk of failure if there isn't enough memory to do it. Users may submit more requests than there are resources available to handle concurrently. And if you let your ASP.NET MVC application do the processing through async controllers, you might be surprised when the long-running process is interrupted because IIS recycles the worker process.

MSMQ 4 does a rather good job of mitigating these risks. If the process fails, you can retry it a couple of times before giving up. You can learn how to set that up here. You can use application-specific dead-letter queues to handle the case where the process has failed after an acceptable number of attempts; this is important for operational staff diagnosing issues. You can also use this scheme to notify a user by email that the process failed (or succeeded), even if the machine the request originated from has been turned off.
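As an illustration, the retry and dead-letter behaviour described above maps onto a handful of NetMsmqBinding settings. This is a sketch with example values, not a recommended configuration, and the dead-letter queue name is made up:

```csharp
using System;
using System.ServiceModel;

public static class MsmqBindingSketch
{
    // Retry a failed message a few times, then move it to a custom
    // dead-letter queue instead of losing it.
    public static NetMsmqBinding Create()
    {
        return new NetMsmqBinding(NetMsmqSecurityMode.None)
        {
            ReceiveRetryCount = 3,                             // immediate retries per cycle
            MaxRetryCycles = 2,                                // additional retry cycles
            RetryCycleDelay = TimeSpan.FromMinutes(10),        // pause between cycles
            ReceiveErrorHandling = ReceiveErrorHandling.Move,  // move poison messages
            DeadLetterQueue = DeadLetterQueue.Custom,
            CustomDeadLetterQueue =
                new Uri("net.msmq://localhost/private/filecopy-deadletter")
        };
    }
}
```

The same settings can equally be expressed in the `<netMsmqBinding>` section of web.config; the code form is shown only to keep the example compact.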

Hosting it in IIS rather than a Windows Service offers additional capabilities. For example, the IIS worker process can be recycled if it becomes deadlocked or exceeds a memory threshold; the latter may be an issue when you use native code to perform the processing. You can also recycle the process on a schedule, say every four (pick your timeframe) hours. That matters when working with large blobs of managed memory, because over time the large object heap becomes so fragmented that it is nearly impossible to allocate enough memory for another large request. A WCF service hosted in a Windows Service may suffer from this problem.

In reality it depends on how reliable you want that background process to be. If reliability isn't a concern, WCF, MSMQ and IIS may simply be overkill.

bloudraak answered Oct 06 '22 01:10