I have a Dockerfile/elastic-beanstalk app in a git repo that pulls a tarball of the current release of the application from S3 and launches it. This works great the first time I deploy: the Docker container gets built, and the app launches and runs correctly. The problem comes after I make a change to the app, re-upload the tarball to S3, and run `eb deploy`.
$ eb deploy
INFO: Environment update is starting.
INFO: Deploying new version to instance(s).
INFO: Successfully built aws_beanstalk/staging-app
INFO: Successfully pulled yadayada/blahblah:latest
INFO: Docker container 06608fa37b2c is running aws_beanstalk/current-app.
INFO: New application version was deployed to running EC2 instances.
INFO: Environment update completed successfully.
But the app has not updated on *.elasticbeanstalk.com. I'm guessing that since the Dockerfile hasn't changed, Docker doesn't rebuild the container (and so never pulls the latest application tarball). I would like to be able to force a rebuild, but the `eb` tool doesn't seem to have that option. I can force a rebuild from the web console, but obviously that is no good for automation. I am committing each change to git, and I was hoping that `eb` would use that to know a rebuild is necessary, but it doesn't seem to make any difference. Am I using Docker/elastic-beanstalk in the wrong way? Ideally I want to commit to git and have Beanstalk automagically re-install the app.
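For reference, the setup described can be sketched like this (the bucket name, paths, and start script are invented for illustration). If the caching guess above is right, the `RUN` step that fetches the tarball is a cached layer, so an unchanged Dockerfile means the old tarball is reused on rebuild:

```dockerfile
FROM ubuntu:14.04

# Hypothetical sketch: this layer is cached. If the Dockerfile doesn't
# change, Docker can reuse the layer and never re-download app.tar.gz,
# even though a newer tarball exists in S3.
RUN curl -o /tmp/app.tar.gz https://s3.amazonaws.com/my-bucket/app.tar.gz \
 && mkdir -p /opt/app \
 && tar -xzf /tmp/app.tar.gz -C /opt/app

CMD ["/opt/app/start.sh"]
```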
TLDR: You may be using ContainerDirectory without a HostDirectory or you may need to update the 03build.sh to build with the --no-cache=true flag.
After a bazillion hours, I finally fixed this for my use case. I am using CodePipeline to run CodeCommit, CodeBuild, and Elastic Beanstalk to create a continuous integration / continuous delivery solution in AWS with Docker. The issue I ran into was that CodeBuild was successfully building and publishing new Docker images to AWS ECR (EC2 Container Registry), and Elastic Beanstalk was correctly pulling down the new image, yet the Docker image was never getting updated on the server.
After inspecting the entire process of how Elastic Beanstalk builds the Docker image (there's a really great article, in two parts, that gives an overview), I discovered the issue.
To add to the article: there is a 3-stage process on the EC2 instances that Elastic Beanstalk spins up for deploying Docker images. Each stage is a sequence of bash scripts located in /opt/elasticbeanstalk/hooks/appdeploy/.
The pre stage contains the build scripts, including 03build.sh, which builds the Docker image. The enact stage, which contains 00run.sh, is where my caching issue was actually occurring.
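The hook stages can be inspected on the instance itself (a sketch, assuming SSH access to the EC2 instance; this is the standard hooks layout on the older, pre-Amazon-Linux-2 platforms):

```shell
# Each stage is a directory of numbered scripts that run in order
ls /opt/elasticbeanstalk/hooks/appdeploy/
ls /opt/elasticbeanstalk/hooks/appdeploy/pre/
ls /opt/elasticbeanstalk/hooks/appdeploy/enact/
```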
When I ran `docker run` against the image generated in the pre stage by 03build.sh, I would see my updated changes. However, when I executed the 00run.sh shell script, the old changes would appear. After investigating, I found the docker run command it executed was
`Docker command: docker run -d -v null:/usr/share/nginx/html/ -v /var/log/eb-docker/containers/eb-current-app:/var/log/nginx ca491178d076`
The `-v null:/usr/share/nginx/html/` is what was breaking it and causing it not to update. This was because my Dockerrun.aws.json file had
"Volumes": [
{
"ContainerDirectory": "/usr/share/nginx/html/"
}
],
without a referenced host location. As a result, any future changes I made didn't get updated.
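If you actually need the volume, the fix is to give the mount a host path so it no longer resolves to `null` (a sketch for a version-1 single-container Dockerrun.aws.json; the host path here is illustrative):

```json
"Volumes": [
  {
    "HostDirectory": "/var/app/current/html",
    "ContainerDirectory": "/usr/share/nginx/html/"
  }
],
```

With both directories specified, the generated `docker run` gets a real `-v host:container` mapping instead of `-v null:/usr/share/nginx/html/`.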
For my solution, I just removed the "Volumes" array, as all of my files are contained in the Docker image I upload to ECR. Note: you may need to add the --no-cache flag to 03build.sh as well.
The problem with using Docker for CI is that it doesn't act like a script: it won't rebuild unless the Dockerfile changes. So you have to put the stuff that needs to be rebuilt every time into a startup wrapper script rather than into the Dockerfile. I moved the part that downloads the application tarball into a script that the Dockerfile installs into the container. When the container starts, the tarball is downloaded and unpacked, and only then does the real application start. This works, and re-deploys now work as expected. It's a bit aggravating to debug the process, which leads me to the opinion that using Docker with EB for CI is a bit of a hack.
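A minimal sketch of that wrapper approach (the bucket name, paths, and start script are invented; assumes the AWS CLI is installed in the image):

```bash
#!/bin/bash
# entrypoint.sh -- installed by the Dockerfile and run at container start,
# so the tarball is fetched on every deploy instead of at image-build time.
set -e
aws s3 cp s3://my-bucket/app.tar.gz /tmp/app.tar.gz   # hypothetical bucket
mkdir -p /opt/app
tar -xzf /tmp/app.tar.gz -C /opt/app
exec /opt/app/start.sh   # hand the process over to the real application
```

The Dockerfile then only needs to `COPY` this script in and set it as the `CMD`; since the script itself rarely changes, Docker's layer cache no longer matters for picking up new releases.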