Pushing to multiple EC2 instances on a load balancer

I am attempting to figure out a good way to push out a new commit to a group of EC2 server instances behind an Elastic Load Balancer (ELB). Each instance is running Nginx and PHP-FPM.

I would like to perform the following workflow, but I am unsure of a good way to push out a new version to all instances behind the load balancer.

  • Dev is done on a local machine
  • Once changes are ready, I run `git push origin master` to push the changes to Bitbucket (where I host all my git repos)
  • After the push to Bitbucket, I would like the new version pushed out to all EC2 instances simultaneously
  • I would like to do this without having to SSH in to each instance (obviously)

Is there a way to configure the remote servers to accept a remote push? Is there a better way to do this?

asked Dec 13 '12 by Jonathan Coe



1 Answer

Yes, I do this all of the time (with the same application stack, actually).

  1. Use a base AMI from a trusted source, such as the default "Amazon Linux" ones, or roll your own.

  2. As part of the launch configuration, use the "user data" field to bootstrap a provisioning process on boot. This can be as simple as a shell script that runs `yum install nginx php-fpm -y`, then copies files down from an S3 bucket or does a pull from your repo. The Amazon-authored AMIs also include support for cloud-init scripts if you need a bit more flexibility. If you need even greater power, you can use a change-management and orchestration tool like Puppet, Chef, or Salt (my personal favorite).
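A minimal user-data sketch of that bootstrap step (the package names, repo URL, and paths are assumptions — swap in your own values; this is a config fragment, not a drop-in script):

```shell
#!/bin/bash
# EC2 user-data bootstrap sketch -- runs once at first boot.
# Assumes an Amazon Linux AMI with a deploy key already baked into the
# image, and a hypothetical Bitbucket repo; adjust for your stack.
yum install -y nginx php-fpm git

# Pull the application code (alternatively, copy a build artifact down
# from an S3 bucket instead of cloning the repo).
git clone git@bitbucket.org:myuser/myapp.git /var/www/app

# Start the services.
service nginx start
service php-fpm start
```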

  3. As far as updating code on existing instances goes, there are two schools of thought:

    • Make full use of the cloud and just spin up an entirely new fleet of instances that grab the new code at boot. Then you flip the load balancer to point at the new fleet. It's instantaneous and gives you a really quick way to revert to the old fleet if something goes wrong. Hours (or days) later, you then spin down the old instances.
    • You can use a tool like Fabric or Capistrano to do a parallel "push" deployment to all the instances at once. This is generally just re-executing the same script that the servers ran at boot. Salt and Puppet's MCollective also provide similar functionality that meshes with their basic "pull" provisioning.
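The second option can be sketched in plain shell (standing in for Fabric/Capistrano): fan the same update command out to every host in parallel and wait for all of them. The host names, deploy user, and app directory below are assumptions; `SSH_CMD` can be overridden (e.g. `SSH_CMD=echo`) for a dry run.

```shell
APP_DIR="/var/www/app"          # assumed application checkout path
SSH_CMD="${SSH_CMD:-ssh}"       # override with SSH_CMD=echo for a dry run

deploy_one() {
  # Re-run the same update each server performed at boot: fast-forward
  # the checkout, then reload PHP-FPM to pick up the new code.
  $SSH_CMD "deploy@$1" "cd $APP_DIR && git pull --ff-only origin master && sudo service php-fpm reload"
}

deploy_all() {
  pids=""
  for host in "$@"; do
    deploy_one "$host" &        # kick off every host at once
    pids="$pids $!"
  done
  rc=0
  for pid in $pids; do
    wait "$pid" || rc=1         # report failure if any host failed
  done
  return $rc
}

# Example: deploy_all web1.example.com web2.example.com web3.example.com
```

The per-host failure check matters: a plain `wait` with no arguments would mask a deploy that failed on one instance while succeeding on the rest.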
answered Oct 12 '22 by jamieb