 

ASP.Net MVC 3 application failing randomly upon application pool recycle

I have a Windows 2008 R2 server with an ASP.NET 2.0 web app running on the default web site in IIS, using the Classic .NET AppPool. Underneath that I have a virtual app running MVC 3 using the ASP.NET v4.0 AppPool with the integrated pipeline.

Every so often the MVC virtual app fails after the application pools automatically recycle. The fix is to manually recycle the ASP.NET v4.0 AppPool. I only need to recycle it once, and that always fixes the problem.

The application errors I receive look like the errors you get when assemblies fail to load properly: NullReferenceException ("Object reference not set to an instance of an object") thrown from controllers and view models.

The problem is that I cannot reproduce this on demand in order to debug it properly. I thought the order in which the application pools recycle might be the issue, so I set the Classic pool to recycle every night at 1:00am and the Integrated pool at 1:15am. Unfortunately this has not helped.
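For reference, here is roughly how that staggered schedule can be set programmatically with Microsoft.Web.Administration (the pool names below are the IIS defaults described above and may differ on your server; the same change can be made in IIS Manager or with appcmd):

```csharp
using System;
using Microsoft.Web.Administration; // %windir%\System32\inetsrv\Microsoft.Web.Administration.dll

class ConfigureRecycleSchedule
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            // Pool names are assumptions; substitute the actual names from IIS Manager.
            ScheduleRecycle(serverManager, "Classic .NET AppPool", new TimeSpan(1, 0, 0));  // 01:00
            ScheduleRecycle(serverManager, "ASP.NET v4.0",         new TimeSpan(1, 15, 0)); // 01:15
            serverManager.CommitChanges();
        }
    }

    static void ScheduleRecycle(ServerManager serverManager, string poolName, TimeSpan time)
    {
        ApplicationPool pool = serverManager.ApplicationPools[poolName];
        pool.Recycling.PeriodicRestart.Time = TimeSpan.Zero; // turn off the default 29-hour interval recycle
        pool.Recycling.PeriodicRestart.Schedule.Clear();
        pool.Recycling.PeriodicRestart.Schedule.Add(time);   // recycle at a fixed time each day
    }
}
```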

This answer regarding assemblies being loaded on demand is interesting, but I'm not sure why the error occurs only rarely, and seemingly at random.

Does anyone know how I could consistently recreate the problem, or have a potential solution? Thank you.

Update to include example stack trace:

Exception information: 
    Exception type: NullReferenceException 
    Exception message: Object reference not set to an instance of an object.
   at Bookstore.Controllers.BooksController.<>c__DisplayClass78.<Details>b__76(Grade g)
   at System.Linq.Enumerable.FirstOrDefault[TSource](IEnumerable`1 source, Func`2 predicate)
   at Bookstore.Controllers.BooksController.Details(String booktitle)
   at lambda_method(Closure , ControllerBase , Object[] )
   at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters)
   at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters)
   at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass37.<>c__DisplayClass39.<BeginInvokeActionMethodWithFilters>b__33()
   at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49()
   at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass37.<BeginInvokeActionMethodWithFilters>b__36(IAsyncResult asyncResult)
   at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass25.<>c__DisplayClass2a.<BeginInvokeAction>b__20()
   at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass25.<BeginInvokeAction>b__22(IAsyncResult asyncResult)
   at System.Web.Mvc.Controller.<>c__DisplayClass1d.<BeginExecuteCore>b__18(IAsyncResult asyncResult)
   at System.Web.Mvc.Async.AsyncResultWrapper.<>c__DisplayClass4.<MakeVoidDelegate>b__3(IAsyncResult ar)
   at System.Web.Mvc.Controller.EndExecuteCore(IAsyncResult asyncResult)
   at System.Web.Mvc.Async.AsyncResultWrapper.<>c__DisplayClass4.<MakeVoidDelegate>b__3(IAsyncResult ar)
   at System.Web.Mvc.MvcHandler.<>c__DisplayClass6.<>c__DisplayClassb.<BeginProcessRequest>b__4(IAsyncResult asyncResult)
   at System.Web.Mvc.Async.AsyncResultWrapper.<>c__DisplayClass4.<MakeVoidDelegate>b__3(IAsyncResult ar)
   at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
   at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
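
The failing frame is a lambda inside BooksController.Details that takes a Grade. The controller isn't shown here, but the shape implied by the trace is roughly the following sketch (all model and member names below are hypothetical):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

// Hypothetical model types reconstructed from the stack trace.
public class Grade { public bool IsCurrent { get; set; } }
public class Book
{
    public string Title { get; set; }
    public ICollection<Grade> Grades { get; set; }
}

public class BooksController : Controller
{
    // Hypothetical data source; the real app presumably queries an EF context here.
    private static readonly List<Book> Books = new List<Book>();

    public ActionResult Details(string booktitle)
    {
        Book book = Books.FirstOrDefault(b => b.Title == booktitle);

        // The trace shows the NullReferenceException being thrown inside a lambda
        // that takes a Grade, i.e. a predicate like the one below. That points at
        // something dereferenced by (or feeding) the lambda being null right after
        // the recycle -- for example `book` itself, or a lazily loaded Grades
        // collection that never materialised.
        Grade current = book.Grades.FirstOrDefault(g => g.IsCurrent);
        ViewBag.CurrentGrade = current;

        return View(book);
    }
}
```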


1 Answer

After reviewing multiple stack traces, the culprit always seemed to reside with Entity Framework.

This question sounded very similar to what we were seeing. We have a similar theory: there might be a race condition in the order in which ASP.NET loads assemblies, which sometimes leaves Entity Framework "broken". We implemented the fix from that answer and it has been working so far. The frustrating part is that we were never able to reproduce the problem consistently, so only time will tell whether the fix worked.
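The linked answer isn't reproduced here, but mitigations of this kind generally amount to warming up Entity Framework during Application_Start, so model compilation and provider/assembly loading finish on the startup thread rather than racing the first request after a recycle. A minimal sketch, assuming a code-first DbContext named BookstoreContext (a hypothetical name):

```csharp
using System.Data.Entity;

// Hypothetical entity and EF code-first context for the Bookstore app.
public class Book
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class BookstoreContext : DbContext
{
    public DbSet<Book> Books { get; set; }
}

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // ... existing MVC registration (areas, filters, routes) ...

        // Warm up Entity Framework before the first request is served, so that
        // database/model initialisation does not race incoming requests after
        // an app pool recycle.
        using (var context = new BookstoreContext())
        {
            context.Database.Initialize(force: false);
        }
    }
}
```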
