ASP.NET Hang - Generic Dictionary concurrency issue causes GC deadlock

Over the last month our ASP.NET web app has repeatedly stopped responding to requests, and we have had to recycle the app pool to get it back up.

We are having trouble identifying the exact cause of the issue, as there are no errors in the server's event log. The site simply stops responding.

So we have been using WinDbg to analyse a memory dump taken while the site was unresponsive, with the help of Tess Ferrandez's blog entries: GC Hangs & High CPU Hang.

By running !threadpool I can see the following:

CPU utilization: 81%
Worker Thread: Total: 10 Running: 8 Idle: 2 MaxLimit: 400 MinLimit: 160
Work Request in Queue: 1930
--------------------------------------
Number of Timers: 72
--------------------------------------
Completion Port Thread:Total: 1 Free: 1 MaxFree: 8 CurrentLimit: 0 MaxLimit: 400 MinLimit: 120

According to Tess's hang blog post, this suggests that we are stuck in the middle of a garbage collection (something that ordinarily completes in a tiny fraction of a second). The CPU utilization reads exactly 81% because that is the value the .NET Framework sets while a garbage collection is in progress; it sets it above 80% because no new threads will be spawned while the reported utilization is greater than 80%. You can also see that there are 1930 requests in the queue even though only 10 of a possible 400 worker threads exist.

By running the !threads command I can see two threads that have their PreEmptive GC set to disabled:

                                   PreEmptive   GC Alloc                Lock
       ID  OSID ThreadOBJ    State GC           Context       Domain   Count APT Exception
  10    1  18bc 00000000001b9710   1008220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn (Threadpool Worker)
  16    2   e68 00000000001c0900      b220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn (Finalizer)
  18    4  1118 00000000001df300   1009220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn (Threadpool Worker)
  19    5  1560 00000000001e9068   100a220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn (Threadpool Worker)
  20    6  11d0 00000000001f14d8      1220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn
  22    9  19d0 000000001bc666d0   1019220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn (Threadpool Worker)
   9    c  13f4 000000001d173600   8008220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn (Threadpool Completion Port)
  24   23  17c4 00000000001a0c40   1009220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn (Threadpool Worker)
  25   24  1088 000000001d205900   1009220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn (Threadpool Worker)
   6   21  1248 000000001d1f6e80       220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn
   4   27   eb0 000000000019df08       220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn
  27   29   39c 000000001bc720f0   1009220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn (Threadpool Worker)
  28   26  17ec 00000000001a7be0   1019220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn (Threadpool Worker)
  29   2c  1040 000000001d1c8160   1009220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn (Threadpool Worker)
   3   2a  1788 000000001d22ed48       220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn
  30   1f   830 000000001bc5b180       220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn
  32   1e   b38 000000001d1995a0   1009220 Enabled  0000000000000000:0000000000000000 001f0d80     2 Ukn (Threadpool Worker)
  33   1d  19fc 000000001d199a98   1019220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn (Threadpool Worker)
  34   1c   c84 000000000019f8a8   1009220 Enabled  0000000008d8d880:0000000008d8d88c 001f0d80     1 Ukn (Threadpool Worker)
  35   1a  12e8 000000001d1a1468   10092a2 Disabled 000000000c88e294:000000000c88e6d0 001f0d80     1 Ukn (Threadpool Worker)
   5   22  1a70 000000001d1a6008       220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn
  36   28  1050 000000001d209f18   1019220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn (Threadpool Worker)
  37   25  19f0 000000001d22f738   1019220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn (Threadpool Worker)
  38   2b  15cc 000000001d29b2d0      b220 Enabled  0000000000000000:0000000000000000 001f0d80     1 Ukn
  39   20  1228 000000001d290a88   1019220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn (Threadpool Worker)
  40   1b   c98 000000001d290420   10092a2 Disabled 0000000010adc094:0000000010addcf8 001f0d80     1 Ukn (Threadpool Worker)
  41   19  15dc 000000001d2a3168   1019220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn (Threadpool Worker)
  43   18  1a64 000000001d2c8cc0       220 Enabled  0000000000000000:0000000000000000 001b3b98     0 Ukn

From what I understand, this is bad: the garbage collection cannot complete until every thread has preemptive GC enabled again (i.e. has reached a safe point), so the process is left hanging.

There is one discrepancy compared to the post that confuses me, though: there is no thread marked with "(GC)", i.e. the thread that triggered the garbage collection.

But regardless, because of the two threads set to disabled, I'm thinking this is the problem, and that Tess's hang blog post will go on to tell me the exact cause and solution. Unfortunately, it attributes the problem to the 'mixed DLL loading problem', and after looking at the stacks for the disabled threads, neither is trying to load a DLL. For example, here is the kb output for thread 40:

1c93dc74 79142280 0ac30260 0bc85820 00000003 mscorlib_ni+0x29f318
1c93dc88 1e6a8c6c 1c93dc94 00000000 00000001 clr!OnHijackScalarTripThread
WARNING: Frame IP not in any known module. Following frames may be wrong.
1c93dca0 1e6791f7 00000003 00000000 00000000 0x1e6a8c6c
1c93e270 791421bb 0c886820 1bac554d 00000000 0x1e6791f7
1c93e284 7916a553 1c93e354 00000001 1c93e320 clr!CallDescrWorker+0x33
1c93e300 7916a6f4 1c93e354 00000001 1c93e320 clr!CallDescrWorkerWithHandler+0x8e
1c93e450 7916a729 1bd17090 1c93e574 1c93e720 clr!MethodDesc::CallDescr+0x194
1c93e46c 7916a749 1bd17090 1c93e574 1c93e720 clr!MethodDesc::CallTargetWorker+0x21
1c93e484 79190c04 1c93e720 ed47daf9 1c93ea18 clr!MethodDescCallSite::Call_RetArgSlot+0x1c
1c93e648 79190f53 1c85ce30 1bd17090 1b8b6834 clr!CallWithValueTypes_RetArgSlotWrapper+0x5c
1c93e930 79191115 1c85ce30 00000000 0c886b80 clr!InvokeImpl+0x621
1c93e9fc 79b3d839 1d1a1468 00000096 1c93ea18 clr!RuntimeMethodHandle::InvokeMethodFast+0x180
1c93ea50 79b3d52c 0ecc3e60 00000096 1038d084 mscorlib_ni+0x2bd839
1c93ea88 79b3c19d 00000000 00000000 0c885530 mscorlib_ni+0x2bd52c
1c93eab0 79ad6d16 00000000 0c885530 0ec452c4 mscorlib_ni+0x2bc19d
1c93ec04 7a045331 00000000 00000000 00000000 mscorlib_ni+0x256d16
1c93ecbc 672d72b5 0c862b08 00000000 00000000 mscorlib_ni+0x7c5331
1c93ece8 672eb90c 00000000 1c93ed18 0c862b08 System_Web_ni+0x1772b5
1c93ed28 67301e9e 1c93ed54 07f81050 0c863014 System_Web_ni+0x18b90c
1c93ed80 672b5998 1b8b2010 1b8e12c4 1c93f208 System_Web_ni+0x1a1e9e
1c93ed9c 67300602 0c862b08 0ec4af84 0c862b08 System_Web_ni+0x155998
1c93edd8 6730034d 0ac37a80 0c861c1c 1c93ee5c System_Web_ni+0x1a0602
1c93ede8 672ff8fd a65adfcd 79145c50 1c93effc System_Web_ni+0x1a034d
1c93ee5c 67935cdc 00000002 6726b530 00000000 System_Web_ni+0x19f8fd
1c93ee98 791425a1 1c93f228 00000002 028f2b9c System_Web_ni+0x7d5cdc
1c93eeac 79208c9b 67935cc0 6726b530 0ac4ea6c clr!COMToCLRDispatchHelper+0x28
1c93ef40 79208d74 6726b530 0ac4ea6c 00000001 clr!BaseWrapper<Stub *,FunctionBase<Stub *,&DoNothing<Stub *>,&StubRelease<Stub>,2>,0,&CompareDefault<Stub *>,2>::~BaseWrapper<Stub *,FunctionBase<Stub *,&DoNothing<Stub *>,&StubRelease<Stub>,2>,0,&CompareDefault<Stub *>,2>+0xfa
1c93ef64 79208dda 1d1a1468 1c93f1f8 1b7affe0 clr!COMToCLRWorkerBody+0xb4
1c93efbc 792098c9 1d1a1468 1c93f1f8 1b7affe0 clr!COMToCLRWorkerDebuggerWrapper+0x34
1c93f1cc 02b3a1aa 1d1a1468 1c93f1f8 a65adfcd clr!COMToCLRWorker+0x614
1c93f1e0 69f5aa4f 00000001 028038c8 00000000 0x2b3a1aa
1c93f544 69f5ab4a 028038c8 69f5aae2 1c93f56c webengine4!HttpCompletion::ProcessRequestInManagedCode+0x1cd
1c93f554 69f5aafe 028038c8 00000000 00000000 webengine4!HttpCompletion::ProcessCompletion+0x4a
1c93f56c 792078af 028038c8 ed47c90d 0000ffff webengine4!CorThreadPoolWorkitemCallback+0x1c
1c93f5bc 79206c5f 1c93f61f 1c93f61e ed47ca9d clr!UnManagedPerAppDomainTPCount::DispatchWorkItem+0x195
1c93f62c 792084c5 00000000 ed47ca25 00000000 clr!ThreadpoolMgr::NewWorkerThreadStart+0x20b
1c93f694 792f76e0 00000000 fff91c00 00000002 clr!ThreadpoolMgr::WorkerThreadStart+0x3d1
1c93ffb8 7d4dfb97 00247430 00000000 00000000 clr!Thread::intermediateThreadProc+0x4b
1c93ffec 00000000 792f7698 00247430 00000000 kernel32!BaseThreadStart+0x34

But then I noticed something interesting. Using the !eestack -ee -short command, I saw that threads 35 & 40 (the ones with PreEmptive GC disabled) were both on the same current frame:

Current frame: (MethodDesc 798ee360 +0xab System.Collections.Generic.Dictionary`2[[System.Int32, mscorlib],[System.__Canon, mscorlib]].FindEntry(Int32))

This is what led me to this post and made me think I'd definitely found the problem. But unfortunately, after using the stack to trace back to our code, the Dictionary in question wasn't static - so I'm guessing this can't be the problem.
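
For reference, this is the failure mode that post describes: two threads mutating the same Dictionary without synchronization can corrupt its internal bucket/entry chains, leaving a concurrent FindEntry call walking a cycle forever. A minimal, deliberately broken sketch (illustrative names; whether it actually hangs on a given run is timing-dependent):

using System.Collections.Generic;
using System.Threading;

class DictionaryHangRepro
{
    // One shared, unsynchronized Dictionary - this is the bug, not a pattern to copy.
    static readonly Dictionary<int, string> Shared = new Dictionary<int, string>();

    static void Main()
    {
        // Two writers racing on the same instance; a resize/rehash running
        // concurrently with another mutation can corrupt the bucket chains.
        new Thread(() => { for (int i = 0; i < 10000000; i++) Shared[i % 1000] = "a"; }).Start();
        new Thread(() => { for (int i = 0; i < 10000000; i++) { Shared.Remove(i % 1000); Shared[i % 1000] = "b"; } }).Start();

        // A reader (ContainsKey calls FindEntry internally) can land on a
        // cyclic entry chain and never return - the exact frame in our dump.
        for (int i = 0; i < 10000000; i++)
            Shared.ContainsKey(i % 1000);
    }
}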

I'm now pretty much out of ideas - I'm not an expert in analyzing dump files (in fact I only started yesterday :D), so if anyone has any pointers on what to do next to identify the cause of the problem, that would be great.

UPDATE

After running the ~*ekb command suggested by Brian, I found the thread that started the garbage collection: it has GarbageCollectGeneration in its stack.

1e61e54c 7d4d8c9e 000003c4 00000000 00000000 ntdll_7d600000!NtWaitForSingleObject+0x15
1e61e5bc 791897da 000003c4 ffffffff 00000000 kernel32!WaitForSingleObjectEx+0xac
1e61e5f0 7918981b 000003c4 ffffffff 00000000 clr!CLREvent::CreateManualEvent+0xf6
1e61e640 791896f1 00000000 efb5da31 0019f8a8 clr!CLREvent::CreateManualEvent+0x137
1e61e680 79189712 ffffffff 00000000 00000000 clr!CLREvent::WaitEx+0x126
1e61e694 792a05cc ffffffff 00000000 00000000 clr!CLREvent::Wait+0x19
1e61e6b8 792a04e5 ffffffff 00000000 0019f8f0 clr!SVR::gc_heap::wait_for_gc_done+0x6f
1e61e6e0 792a424b 00000000 00000001 001bc600 clr!SVR::GCHeap::GarbageCollectGeneration+0x14f
1e61e714 79229fd2 0019f8f0 00000040 00000000 clr!SVR::gc_heap::try_allocate_more_space+0x184
1e61e730 7922a080 0019f8f0 00000040 00000000 clr!SVR::gc_heap::allocate_more_space+0x30
1e61e750 79188f33 0019f8f0 00000040 00000000 clr!SVR::GCHeap::Alloc+0x55
1e61e76c 7915dd44 00000040 00000000 00000000 clr!Alloc+0x8d
1e61e7ac 7915dd84 0000001a 79169a39 79166166 clr!SlowAllocateString+0x42
1e61e7b4 79169a39 79166166 00000019 efb5d4c9 clr!UnframedAllocateString+0x12
1e61e7b8 79166166 00000019 efb5d4c9 00001f3e clr!StringObject::NewString+0x13
1e61e878 79ac139e 00000000 0019f8a8 0515fcd0 clr!COMNlsInfo::InternalChangeCaseString+0x128
1e61e88c 79ac51b4 0515fcd0 1e61e900 20e59f26 mscorlib_ni+0x24139e
1e61e898 20e59f26 1e61e900 08ce6208 050fd8e8 mscorlib_ni+0x2451b4

I should also mention that our application runs in 32-bit mode on a 64-bit server. Unfortunately, when it last became unresponsive we forgot to take the dump with a 32-bit debugger, meaning that when we load the dump file in WinDbg we need to run the !wow64exts.sw command to analyse it. I'm not sure how much difference this makes (perhaps this is why I can't see the '(GC)' thread), but we can't seem to get the !clrstack command to work. We are currently waiting for the site to go unresponsive again so we can take the dump in 32-bit.
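
In case it helps anyone else in the meantime, this is the kind of sequence we have been trying against the 64-bit dump (hedged: behaviour varies by debugger and CLR version, and a dump taken with a 32-bit debugger is still the reliable route):

.load wow64exts      $$ load the WOW64 debugger extension
!wow64exts.sw        $$ switch the debugger to the 32-bit (x86) context
.loadby sos clr      $$ load the SOS extension matching the 32-bit clr module
!clrstack            $$ this is the command that still fails for us here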

asked Feb 12 '13 by David Masters

1 Answer

So we eventually got to the bottom of the problem. It turned out to be a combination of the problems described in the two articles mentioned in my original question:

  • Two threads were trying to access the same generic Dictionary at the same time.
  • The GC could not complete because those two threads had their PreEmptive GC set to disabled.

In a bit more detail:

Whilst the two threads are stuck inside the dictionary (concurrent, unsynchronized writes can corrupt its internal bucket chains, leaving FindEntry spinning in an infinite loop), a third thread joins the party wanting to allocate some memory on the heap. This triggers a garbage collection, but the GC cannot complete because the two spinning threads have their PreEmptive GC set to disabled and never reach a safe point. Because the GC cannot complete, the IIS process remains in a state where no new worker threads can be spawned for requests, meaning that any subsequent requests are queued. The queue continues to grow, and no responses are returned until the app pool is recycled.

As I mentioned in my original question, we traced the dictionary back to the code and it WASN'T a static dictionary, so we assumed this must be a red herring; how else could two threads gain access to the same object? Well, it turned out this was made possible because the dictionary was stored in an InProc session.

ASP.NET by default prevents concurrent requests for the same session ID: it takes an exclusive lock on the session for the duration of each request, precisely to prevent this scenario. However, this is a 'legacy' app that uses the old AjaxPro framework and had been configured to bypass that locking, which allowed concurrent AjaxPro requests for the same session to access the same session objects (a sketch of this kind of configuration follows).
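
To illustrate (this is not our actual code, and the exact attribute surface depends on the AjaxPro version): an AjaxPro method declared with a read-only session requirement does not take ASP.NET's exclusive per-session lock, so two such requests for the same session can run, and mutate the same session-stored dictionary, at the same time.

using System.Collections.Generic;
using System.Web;
using AjaxPro;

public class LegacyLookupService
{
    // HttpSessionStateRequirement.Read skips the exclusive per-session lock
    // that the safe default (ReadWrite) would take, so concurrent requests
    // for the same session can both reach the code below at once.
    [AjaxMethod(HttpSessionStateRequirement.Read)]
    public string UpdateLookup(int key, string value)
    {
        // With InProc session state this returns the same live Dictionary
        // instance to every request in the session.
        var lookup = (Dictionary<int, string>)HttpContext.Current.Session["lookup"];
        lookup[key] = value; // unsynchronized write on a shared instance
        return "ok";
    }
}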

It turned out that we had recently introduced a bug in our JavaScript that fired off the same AjaxPro request multiple times, which caused the issue.
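
We fixed the JavaScript bug, but for anyone who lands here with the same failure mode, a minimal sketch (illustrative names) of the two standard remedies: serialize access to the shared dictionary with a single lock, or replace it with ConcurrentDictionary (.NET 4 and later), which tolerates concurrent readers and writers.

using System.Collections.Concurrent;
using System.Collections.Generic;

public static class LookupStore
{
    // Option 1: keep the plain Dictionary but take one lock around every
    // read and write - the least invasive change for legacy code.
    private static readonly object Sync = new object();
    private static readonly Dictionary<int, string> Lookup = new Dictionary<int, string>();

    public static void Set(int key, string value)
    {
        lock (Sync) { Lookup[key] = value; }
    }

    public static bool TryGet(int key, out string value)
    {
        lock (Sync) { return Lookup.TryGetValue(key, out value); }
    }

    // Option 2: ConcurrentDictionary synchronizes internally and cannot be
    // corrupted by concurrent readers and writers.
    public static readonly ConcurrentDictionary<int, string> SafeLookup =
        new ConcurrentDictionary<int, string>();
}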

answered Oct 26 '22 by David Masters