How are people solving app pool recycle issues on deployment with large apps?

Currently, after a build/deployment of our app (58 projects, large ASP.NET MVC 3 front end), it takes ~15-20 seconds to load as it goes through the whole 'recycling the app pool' process (release configuration).

We do have a web farm if that alters people's answers, but the question really is:

What are people doing in large scale applications where a maintenance window isn't viable (we're a 24/7 very active website) to minimize that initial 'first hit' on the app pool recycle after a deploy?

We've used a number of tools to analyze that startup time, and there doesn't really seem to be any way to bring it down. So what I'm looking for is: what techniques do people employ to minimize the impact of a large application deploy on users?

asked Nov 16 '11 by Terry_Brown

People also ask

What happens when IIS app pool is recycled?

Recycling means that the worker process that handles requests for that application pool is terminated and a new one is started. This is generally done to avoid unstable states that can lead to application crashes, hangs, or memory leaks.

What triggers app pool recycle?

There are many reasons an application can restart: the worker process hosting your application can recycle due to a configuration change, a time limit, an idle timeout, excessive memory usage, and more.

How long does it take to recycle app pool?

By default, an IIS application pool (or "AppPool") recycles on a regular time interval of 1740 minutes (29 hours). The odd interval is deliberate: because it is not a multiple of 24 hours, the pool doesn't recycle at the same moment every day (at 07:00 every day, for example).
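If the built-in time-based recycle is part of the problem, the interval and schedule can be inspected or changed programmatically. Below is a minimal sketch using the Microsoft.Web.Administration API; the pool name "MyAppPool" and the 03:00 schedule are illustrative assumptions, not values from this page, and the code needs to run elevated on the web server.

```csharp
// Sketch: read and adjust an app pool's periodic-restart settings.
// Assumes a reference to Microsoft.Web.Administration and administrator rights.
using System;
using Microsoft.Web.Administration;

class RecycleSettings
{
    static void Main()
    {
        using (var manager = new ServerManager())
        {
            // "MyAppPool" is a hypothetical pool name.
            var pool = manager.ApplicationPools["MyAppPool"];

            // Default periodic restart interval is 1740 minutes (29 hours).
            Console.WriteLine("Current interval: {0}", pool.Recycling.PeriodicRestart.Time);

            // Example: recycle at a fixed quiet hour instead of every 29 hours.
            pool.Recycling.PeriodicRestart.Time = TimeSpan.Zero;                // disable the interval
            pool.Recycling.PeriodicRestart.Schedule.Clear();                    // drop any existing schedule
            pool.Recycling.PeriodicRestart.Schedule.Add(new TimeSpan(3, 0, 0)); // 03:00

            manager.CommitChanges();
        }
    }
}
```

The same settings are also exposed in IIS Manager; changing them only affects the routine recycle, not the one triggered by a deployment.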


2 Answers

By default, if you change 15 files in an ASP.NET application at once (even via FTP), the app pool is automatically recycled. You can change that threshold (see the web.config sketch at the end of this answer), but as soon as web.config or anything in the bin folder changes, a recycle is unavoidable. So in my opinion the ideal solution for an environment like yours would be as follows:

4 web servers (this is an arbitrary number), each with a status.aspx page that the load balancer polls (a sketch of such a health check follows these steps). Use TeamCity to take 2 of those servers "offline" (out of the load balancer) and wait ~20 seconds for traffic to drain away. A distributed cache will help keep user-experience problems to a minimum.

Use TeamCity to deploy to those 2 servers, run your automated tests, etc. Once you are happy, put them back into the farm, then take the other 2 offline and deploy to those.

This can all be scripted/automated. The only issue is that schema changes which are not backwards compatible may prevent the new version of the site running in parallel with the old version for the ~20 seconds it takes the load balancer to bring the servers back in.
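A minimal sketch of the kind of health-check endpoint described above - written here as a plain IHttpHandler rather than a full status.aspx page, with the marker-file name and App_Data location being illustrative assumptions. The deployment script takes a node out of rotation simply by dropping the marker file, and puts it back by deleting it:

```csharp
// Health-check endpoint the load balancer polls (e.g. mapped to /status.ashx).
// Returns 503 while a marker file exists so the node is taken out of rotation.
using System.IO;
using System.Web;

public class StatusHandler : IHttpHandler
{
    // Hypothetical marker file; App_Data is used because changes there
    // don't trigger an app-domain restart.
    private const string OfflineMarker = "~/App_Data/offline.marker";

    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";

        if (File.Exists(context.Server.MapPath(OfflineMarker)))
        {
            context.Response.StatusCode = 503; // load balancer marks the node as down
            context.Response.Write("OFFLINE");
        }
        else
        {
            context.Response.StatusCode = 200;
            context.Response.Write("OK");
        }
    }
}
```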

This is good old-fashioned canary releasing - there are some patterns at http://continuousdelivery.com/patterns/ worth taking into consideration. I'd also suggest a copy of the Continuous Delivery book - it's like a continuous delivery bible and has got me out of a few situations :)
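As for the "15 files" threshold mentioned at the start of this answer: it presumably corresponds to ASP.NET's recompilation limit, which is configurable in web.config. The snippet below is a sketch with an arbitrary value of 50; note that raising it does not stop the restart that touching web.config or the bin folder always causes.

```xml
<configuration>
  <system.web>
    <!-- Number of dynamic recompilations before the app domain restarts.
         The default is 15; 50 is just an example value. -->
    <compilation numRecompilesBeforeAppRestart="50" />
  </system.web>
</configuration>
```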

answered Sep 23 '22 by stack72

At the very least you could run a tinyget script against the application after the deployment completes to "warm up" the application; however, if a customer hits your site before the script runs they will still face a delay. What do you currently have in place - what post-deployment steps do you run?
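In the same spirit as the tinyget suggestion, a small warm-up step can simply request a handful of key URLs after the new build is in place, so the first real visitor doesn't pay the startup cost. A minimal sketch - the node address and URL list are illustrative assumptions:

```csharp
// Warm-up sketch: hit a few key URLs on a freshly deployed node so the
// expensive first request happens before real traffic arrives.
using System;
using System.Net;

class WarmUp
{
    static void Main()
    {
        var urls = new[]
        {
            "http://web01.internal/",               // hypothetical node address
            "http://web01.internal/Account/LogOn",
            "http://web01.internal/Search"
        };

        foreach (var url in urls)
        {
            try
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Timeout = 120000; // the first hit can be slow while the app starts up

                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine("{0} -> {1}", url, (int)response.StatusCode);
                }
            }
            catch (WebException ex)
            {
                Console.WriteLine("{0} -> failed: {1}", url, ex.Status);
            }
        }
    }
}
```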

In a farm environment you could also stage deployments: take one server out of the load balancer, update it and bring it back online, then take the other out, complete the deployment and reintroduce it into the farm (a rough sketch of that sequence follows). How is your SQL Server set up - clustered?
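A rough sketch of that staged sequence, assuming the marker-file health check sketched under the first answer; the server name, share path, drain time and the deploy step itself are all placeholders for whatever your tooling actually does:

```csharp
// Staged-deployment sketch: fail the health check, let traffic drain,
// deploy, warm up, then let the node rejoin the farm.
using System;
using System.IO;
using System.Threading;

class StagedDeploy
{
    static void Main()
    {
        // Hypothetical UNC path to the node's App_Data folder.
        var marker = @"\\web01\wwwroot\App_Data\offline.marker";

        File.WriteAllText(marker, DateTime.UtcNow.ToString("o")); // health check now returns 503
        Thread.Sleep(TimeSpan.FromSeconds(20));                    // give in-flight requests time to finish

        // Placeholder steps: copy the new build to the node, run smoke tests,
        // then warm it up (e.g. with the warm-up sketch above).

        File.Delete(marker);                                       // health check returns 200, node rejoins the farm
    }
}
```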

answered Sep 22 '22 by Andrew Westgarth