
Best practice for getting RDS password to docker container on ECS

Tags:

amazon-ecs

I am using Postgres on Amazon RDS and Amazon ECS to run my Docker containers. What is the best practice for getting the database username and password into a Docker container running on ECS?

I see a few options:

  • Build the credentials into the Docker image. I don't like this, since anyone with access to the image can extract the password.
  • Put the credentials in the user data of the launch configuration used by the Auto Scaling group for ECS. With this approach, every Docker container running on my ECS cluster has access to the credentials. I don't really like that either: if an attacker finds a security hole in any of my services (even services that don't use the database), they can get the database credentials.
  • Put the credentials in an S3 bucket and limit access to that bucket with an IAM role that the ECS instances have. Same drawbacks as putting them in the user data.
  • Put the credentials in the ECS Task Definition. I don't see any drawbacks here.
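Whichever of these mechanisms ends up delivering the credentials, the container itself typically just reads environment variables at startup. A minimal sketch in Python; the variable names (`DB_USER`, `DB_PASSWORD`, and so on) are illustrative, not anything ECS mandates:

```python
import os

def build_database_url():
    """Assemble a Postgres connection URL from environment variables.

    Required variables raise KeyError when missing, which fails fast
    instead of silently connecting with wrong defaults.
    """
    user = os.environ["DB_USER"]          # required
    password = os.environ["DB_PASSWORD"]  # required
    host = os.environ.get("DB_HOST", "localhost")
    port = os.environ.get("DB_PORT", "5432")
    name = os.environ.get("DB_NAME", "postgres")
    return f"postgresql://{user}:{password}@{host}:{port}/{name}"
```

Reading configuration this way keeps the image itself credential-free, which is what makes options like the task definition workable.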

What are your thoughts on the best way to do this? Did I miss any options?

regards, Tobias

asked May 10 '16 by Tobias Tobiasen



2 Answers

Building the credentials into the container image is never recommended: it makes the image hard to distribute and the credentials hard to change.

Putting them on the ECS instances does not help your containers use them. Containers are isolated, and you would end up with the credentials on every instance instead of only where the containers that need them run.

Putting them in S3 means you have to write the fetch logic into your container, and it becomes yet another place to keep configuration.

Putting them in your task definition is the recommended way. You can use the environment portion of the container definition for this. It's flexible, and it's the same pattern PaaS offerings like Heroku and Elastic Beanstalk use to inject DB connection strings into Ruby on Rails and other applications. A last benefit is that it makes it easy to run the same container against different databases (dev, test, prod) without rebuilding the image or adding odd lookup logic.
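For concreteness, the environment portion of a container definition looks roughly like this; the names and values are placeholders, not real credentials or hosts:

```json
{
  "name": "web",
  "image": "myorg/myapp:latest",
  "environment": [
    { "name": "DB_HOST", "value": "mydb.example.eu-west-1.rds.amazonaws.com" },
    { "name": "DB_USER", "value": "app" },
    { "name": "DB_PASSWORD", "value": "change-me" }
  ]
}
```

Each entry becomes a plain environment variable inside the running container, so the application needs no ECS-specific code to read it.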

answered Sep 18 '22 by Marc Young


The accepted answer recommends configuring environment variables in the task definition. This configuration is buried deep in the ECS web console. You have to:

  1. Navigate to Task Definitions
  2. Select the correct task and revision
  3. Choose to create a new revision (existing revisions cannot be edited)
  4. Scroll down to the container section and select the correct container
  5. Scroll down to the Env Variables section
  6. Add your configuration
  7. Save the configuration and task revision
  8. Choose to update your service with the new task revision
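If you would rather avoid the console, the same steps can be scripted with the AWS CLI. A sketch, assuming `taskdef.json` holds the new revision and that the cluster, service, and task family names below are yours:

```shell
# Register a new task definition revision from a JSON file
aws ecs register-task-definition --cli-input-json file://taskdef.json

# Point the service at the new revision; when only the family name
# is given, ECS uses the latest ACTIVE revision of that family
aws ecs update-service \
    --cluster my-cluster \
    --service my-service \
    --task-definition my-task-family
```

Scripting this also makes the environment-variable changes reviewable and repeatable, instead of click-through-only.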

This tutorial has screenshots that illustrate where to go.

Full disclosure: This tutorial features containers from Bitnami and I work for Bitnami. However the thoughts expressed here are my own and not the opinion of Bitnami.

answered Sep 22 '22 by Neal