Does anyone know what the difference between Automatic Load-based Scaling vs having explicit auto scaling groups on OpsWorks is?
this: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-autoscaling-loadbased.html
vs https://aws.amazon.com/blogs/devops/auto-scaling-aws-opsworks-instances/
With load-based instances, how does one add one to a target group?
Can you have multiple auto scaling groups in one layer of OpsWorks?
I’m looking at using an ALB to route our traffic, and an ALB cannot be attached as its own layer in OpsWorks.
So I would need to pipe requests of one type to one auto scaling group and the rest to the other auto scaling group.
I’m just not sure what load-based instances are, and I’m perplexed that they don’t provide a default number of machines to start with.
Which one should I use for ALB routing traffic between the two groups?
OpsWorks is a configuration management tool that utilises Chef to configure your infrastructure. OpsWorks takes a different approach to scaling out than an auto-scaling group does.
Unlike an auto-scaling group, these instances are pre-defined on your OpsWorks stack (layer), and they are started when a certain metric crosses a threshold (CloudWatch data: CPU, memory, load, etc.).
OpsWorks will not spawn (create) any new instances; it can only start instances you have previously created and marked as load-based. This mechanism is also specific to OpsWorks and cannot be used for any other service outside of OpsWorks.
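As an illustration, here is a minimal boto3 sketch of creating those pre-defined load-based instances. The stack ID, layer ID, and instance type are placeholders; `AutoScalingType='load'` is what marks an instance as load-based rather than 24/7:

```python
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

# Placeholder IDs -- replace with your own stack and layer.
STACK_ID = "11111111-2222-3333-4444-555555555555"
LAYER_ID = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"

# Create two instances that OpsWorks may start and stop in response to load.
# AutoScalingType='load' marks them as load-based instances; OpsWorks never
# creates additional instances beyond the ones defined this way.
for _ in range(2):
    resp = opsworks.create_instance(
        StackId=STACK_ID,
        LayerIds=[LAYER_ID],
        InstanceType="t3.medium",
        AutoScalingType="load",
    )
    print("Created load-based instance:", resp["InstanceId"])
```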
AWS EC2 auto-scaling, by contrast, can launch a very large number of instances (which do not need to be created beforehand) into your AWS environment, and, like OpsWorks load-based scaling, it can be triggered by CloudWatch alarms (CPU, memory, load, etc.).
Auto-scaling is not available on OpsWorks by default, and there is no built-in way to associate an auto-scaling group with your OpsWorks stack, but it's possible with a bit of work. Read about it here.
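For comparison, a minimal boto3 sketch of an ordinary EC2 Auto Scaling group, which launches brand-new instances up to `MaxSize`. It assumes a launch template named `opsworks-web` already exists, and the subnet IDs are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Assumes a launch template named "opsworks-web" already exists; the subnet
# IDs below are placeholders for your own VPC.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="opsworks-web-asg",
    LaunchTemplate={"LaunchTemplateName": "opsworks-web", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
)
```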
Let me break the answer down for you.
Does anyone know what the difference between Automatic Load-based Scaling vs having explicit auto scaling groups on OpsWorks is?
Automatic Load-based Scaling:
The Amazon OpsWorks service provides automatic load-based scaling, where you add instances to a layer in your stack and set the scaling configuration and policies directly. Load-based scaling starts or stops those instances based on the load you have configured it to handle: you set the thresholds using metric parameters and define the scaling policies.
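A rough sketch of what that configuration looks like through the OpsWorks API with boto3; the layer ID and the threshold values are placeholders you would tune for your own workload:

```python
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

LAYER_ID = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"  # placeholder layer ID

# Enable load-based scaling on the layer: start 2 instances when average CPU
# stays above 80% for 5 minutes, stop 1 when it stays below 30% for 10 minutes.
opsworks.set_load_based_auto_scaling(
    LayerId=LAYER_ID,
    Enable=True,
    UpScaling={
        "InstanceCount": 2,
        "ThresholdsWaitTime": 5,   # minutes the threshold must be exceeded
        "IgnoreMetricsTime": 5,    # minutes to ignore metrics after scaling
        "CpuThreshold": 80.0,
    },
    DownScaling={
        "InstanceCount": 1,
        "ThresholdsWaitTime": 10,
        "IgnoreMetricsTime": 10,
        "CpuThreshold": 30.0,
    },
)
```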
Explicit Auto Scaling groups on OpsWorks:
The Amazon OpsWorks service also allows you to add existing instances to a layer in your stack. That means you can create an Auto Scaling launch configuration, set scale-up and scale-down events based on the load, create an Auto Scaling group, and launch instances in it. You can then go to OpsWorks and add these existing instances to your layer. When the load rises above or falls below the thresholds you set, the Auto Scaling group handles the scaling.
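A rough boto3 sketch of the assignment step, assuming the instance has already registered itself with the stack (for example via the `aws opsworks register` CLI command run from user data, as the blog post linked in the question describes). All IDs are placeholders:

```python
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

STACK_ID = "11111111-2222-3333-4444-555555555555"   # placeholder stack ID
LAYER_ID = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"   # placeholder layer ID
EC2_INSTANCE_ID = "i-0123456789abcdef0"             # instance the ASG launched

# Find the OpsWorks record for the EC2 instance (it must already be
# registered with the stack), then assign it to the layer so it receives
# the layer's Chef recipes.
instances = opsworks.describe_instances(StackId=STACK_ID)["Instances"]
for inst in instances:
    if inst.get("Ec2InstanceId") == EC2_INSTANCE_ID:
        opsworks.assign_instance(InstanceId=inst["InstanceId"], LayerIds=[LAYER_ID])
        print("Assigned", EC2_INSTANCE_ID, "to layer", LAYER_ID)
```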
With load-based instances, how does one add one to a target group?
Once you have the load-based instances ready, whether you launched them directly through automatic load-based scaling in OpsWorks or explicitly using an Auto Scaling group, you can go to the Application Load Balancer section of the EC2 console, set up the necessary configuration, and then register the load-based instances you have just created with the ALB on the Register Targets tab.
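For example, a minimal boto3 sketch that pulls the layer's online instances and registers them with a target group; the layer ID and target group ARN are placeholders:

```python
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")
elbv2 = boto3.client("elbv2", region_name="us-east-1")

LAYER_ID = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"   # placeholder layer ID
TARGET_GROUP_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "targetgroup/web/0123456789abcdef"              # placeholder ARN
)

# Look up the layer's instances, keep the ones that are online, and register
# their underlying EC2 instance IDs with the ALB target group.
instances = opsworks.describe_instances(LayerId=LAYER_ID)["Instances"]
targets = [
    {"Id": inst["Ec2InstanceId"]}
    for inst in instances
    if inst.get("Ec2InstanceId") and inst.get("Status") == "online"
]
if targets:
    elbv2.register_targets(TargetGroupArn=TARGET_GROUP_ARN, Targets=targets)
```

Since an ALB cannot be attached to an OpsWorks layer the way a Classic ELB can, you would need to run something like this (or a recipe with equivalent logic) whenever load-based scaling starts or stops instances, so the target group stays in sync.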
Can you have multiple auto scaling groups in one layer of OpsWorks?
Yes, you can have multiple auto scaling groups in one layer of OpsWorks.
Which one should I use for ALB routing traffic between the two groups?
You can use either group.
With an ALB you define listener rules, so you can pipe requests of one type to one auto scaling group's target group and the rest to the other auto scaling group's target group.
Please refer to the Auto Scaling documentation for more detail.
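A sketch of how that routing could look with boto3 (the listener and target group ARNs are placeholders, and the path pattern is just an example): send `/api/*` to one group's target group and everything else to the other group's via the listener default action:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Placeholder ARNs -- one listener on the ALB and one target group per
# auto scaling group (each ASG is attached to its own target group).
LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/my-alb/..."
API_TG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/api-group/..."
WEB_TG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/web-group/..."

# Rule: requests matching /api/* go to the first group's target group.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": API_TG_ARN}],
)

# Default action: everything else goes to the second group's target group.
elbv2.modify_listener(
    ListenerArn=LISTENER_ARN,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": WEB_TG_ARN}],
)
```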
I just am not sure what load-based instances are
Load-based instances are instances configured with a load-based scaling configuration. You need to set the thresholds, the configuration, and the events that define when to scale up and scale down. For example: suppose you have 5 instances running initially, and you want your application to keep running as the load increases so as to minimize downtime. You could set the scaling configuration so that if the average CPU utilization of the instances rises above 70%, 2 more instances are launched. You can scale up and down on many other factors as well.
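A sketch of that example expressed as an EC2 Auto Scaling step-scaling policy plus a CloudWatch alarm; the group name, alarm name, and numbers are illustrative placeholders, not values from the original answer:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

ASG_NAME = "opsworks-web-asg"  # placeholder group name

# Step scaling policy: add 2 instances whenever the attached alarm fires.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="scale-out-on-high-cpu",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    StepAdjustments=[{"MetricIntervalLowerBound": 0.0, "ScalingAdjustment": 2}],
    EstimatedInstanceWarmup=300,
)

# CloudWatch alarm: average CPU across the group above 70% for 10 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="opsworks-web-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Statistic="Average",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": ASG_NAME}],
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```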
Hope it helps :)