 

Rapid AWS autoscaling

How do you configure AWS autoscaling to scale up quickly? I've set up an AWS autoscaling group with an ELB. All is working well, except that it takes several minutes before new instances are added and come online. I came across the following in a post about Puppet and autoscaling:

The time to scale can be lowered from several minutes to a few seconds if the AMI you use for a group of nodes is already up to date.

http://puppetlabs.com/blog/rapid-scaling-with-auto-generated-amis-using-puppet/

Is this true? Can time to scale be reduced to a few seconds? Would using puppet add any performance boosts?

I also read that smaller instances start quicker than larger ones:

Small Instance 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of instance storage, 32-bit platform with a base install of CentOS 5.3 AMI

Amount of time from launch of instance to availability: Between 5 and 6 minutes us-east-1c

Large Instance 7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each), 850 GB of instance storage, 64-bit platform with a base install of CentOS 5.3 AMI

Amount of time from launch of instance to availability:
Between 11 and 18 minutes us-east-1c

Both were started via the command line using Amazon's tools.

http://www.philchen.com/2009/04/21/how-long-does-it-take-to-launch-an-amazon-ec2-instance

I note that the article is old and my c1.xlarge instances are certainly not taking 18 minutes to launch. Nonetheless, would configuring an autoscale group with 50 micro instances (with a scale-up policy of a 100% capacity increase) be more efficient than one with 20 large instances? Or potentially creating two autoscale groups, one of micros for quick launch time and one of large instances to add CPU grunt a few minutes later? All else being equal, how much quicker does a t1.micro come online than a c1.xlarge?

asked Jun 17 '12 by waigani



2 Answers

You can increase or decrease an autoscaler's reaction time by adjusting the "--cooldown" value (in seconds). As for which instance types to use, that depends mostly on the application; a decision on this should be made after close performance monitoring and production tuning.
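As a rough sketch with the current AWS CLI (the group name and value below are placeholders, not from the original answer), lowering the default cooldown looks like this:

```shell
# Shorten the default cooldown so the group can trigger scaling
# activity again sooner. "my-asg" is a placeholder group name and
# 60 seconds is an example value to tune for your workload.
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --default-cooldown 60
```

A shorter cooldown makes the group more responsive but also more prone to over-scaling on brief load spikes, so tune it against real traffic.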

answered Sep 30 '22 by Paul Ma

The time to scale can be lowered from several minutes to a few seconds if the AMI you use for a group of nodes is already up to date. This way, when Puppet runs on boot, it has to do very little, if anything, to configure the instance with the node’s assigned role.

The advice here is talking about keeping your AMI (the snapshot of your operating system) as up to date as possible. That way, when autoscaling brings up a new machine, Puppet doesn't have to install lots of software as it normally would on a blank AMI; it may just need to pull some updated application files.

Depending on how much work your Puppet scripts do (apt-get install, compiling software, etc) this could save you 5-20 minutes.
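One way to follow that advice with the AWS CLI (a sketch; the instance ID and image name are placeholders) is to bake a fresh AMI from an instance that Puppet has already fully configured, then point your launch configuration at it:

```shell
# Create an AMI from an already-provisioned instance so new instances
# boot with the software pre-installed. The instance ID is a placeholder;
# --no-reboot avoids downtime on the source instance, at the cost of a
# less consistent filesystem snapshot.
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "app-baked-$(date +%Y%m%d%H%M)" \
    --no-reboot
```

Re-baking on each application release keeps the boot-time Puppet run close to a no-op.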

The two other factors you have to worry about are:

  • How long it takes your load balancer to determine you need more resources (e.g. a policy that dictates "new machines should be added when CPU is above 90% for more than 5 minutes" would be less responsive and more likely to lead to timeouts than "new machines should be added when CPU is above 60% for more than 1 minute")
  • How long it takes to provision a new EC2 instance (smaller instance types tend to take less time to provision)
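The more responsive policy in the first bullet could be sketched as a CloudWatch alarm with a short period and a lower threshold (all names and the policy ARN below are placeholders):

```shell
# Fire the scale-up policy when average CPU exceeds 60% for a single
# 60-second evaluation period. Alarm name, group name, and the
# scaling-policy ARN are placeholders for illustration.
aws cloudwatch put-metric-alarm \
    --alarm-name scale-up-cpu-60 \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=AutoScalingGroupName,Value=my-asg \
    --statistic Average \
    --period 60 \
    --evaluation-periods 1 \
    --threshold 60 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:autoscaling:region:account:scalingPolicy:placeholder
```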
answered Sep 30 '22 by Drew Khoury