Docker - Advice on setup for web app with Redis, Postgres, ElasticSearch, NGINX, Workers and multiple ruby applications

Tags:

docker

I am just really getting into Docker. I want to put my existing application infrastructure into containers to provide a consistent, isolated environment and easier deployment.

My Setup

There are a number of services/daemons that I am running (Redis, ES, PG, NGINX), as well as a few workers (which need to talk to PG and Redis). I have three Ruby web application services and a Faye service, all of which need to talk to Redis, PG and ES. NGINX will need to reverse-proxy to the applications.

Container strategy

The first thing I want to know is which strategy you would use with Docker for these services.

  • Would you create a (e.g. Ubuntu-based) container for each service and then start them up with the appropriate links (-link) to the other containers?
  • Would you bundle the services in one container and your applications in another?
  • Or would you create one massive container?

Dockerfile

Would/could you make a single Dockerfile for all containers, or split them up (i.e. Redis-Dockerfile, Web01-Dockerfile, etc.)?

Development vs Production

In development I want file changes to be instantly reflected in the containers (i.e. a path mounted into the containers from the host filesystem). The mount point could differ from developer to developer. How would you set this up?

In production, I can either clone the application repos on the host machine and mount them into the containers, or clone the application code inside the containers themselves.

I know of the -v flag for mounting volumes, so I imagine you could use some environment variables to make the host mount points configurable.
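For example, I'm picturing something like this (the variable name, paths and image name here are made up):

    # each developer exports the location of their own checkout
    export MYAPP_SRC=$HOME/code/myapp

    # bind-mount it at a fixed path inside the container
    docker run -v $MYAPP_SRC:/myapp myapp-image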

Stan Bondi asked Jan 13 '14 08:01


1 Answer

Container strategy: this is a frequently asked question. It really depends on what you want to do with your application.

  • If your application will have a high number of deployments (e.g. if your app is a SaaS and you will deploy one new instance per customer), but those deployments are expected to be rather small, then you may want to put everything in a single container, because deployment will be much easier.
  • If your application might be scaled significantly (i.e. if you expect to need multiple front-ends, workers, etc.), you probably want to put each service in a different container, so you can scale each service separately.
  • If your application will have a high number of deployments and has to scale, then you will need multiple containers, and you should make sure that you use links correctly as well :-) (a minimal example follows this list)
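For instance, with the one-service-per-container approach, startup might look roughly like this (using the single-dash flag spelling from the question; newer Docker versions spell these --name and --link, and the image names and aliases here are made up):

    # one container per backing service
    docker run -d -name redis myorg/redis
    docker run -d -name pg myorg/postgres

    # the web container is started with links to both; each link injects
    # environment variables (e.g. REDIS_PORT_6379_TCP_ADDR) that the app
    # can use to locate the service
    docker run -d -name web01 -link redis:redis -link pg:db myorg/web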

Dockerfile: you need one Dockerfile per image. So if you make an all-in-one container, that's one Dockerfile; if you split the app into multiple containers with different roles (Redis, DB, Web...), that's as many different Dockerfiles.
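For example, a minimal Redis Dockerfile could be as small as this (the base image and package name are illustrative):

    # redis/Dockerfile -- one directory per image, since `docker build`
    # looks for a file named Dockerfile in the build directory
    FROM ubuntu
    RUN apt-get update && apt-get install -y redis-server
    EXPOSE 6379
    CMD ["redis-server"]

You would then build each image separately, e.g. docker build -t myorg/redis redis/.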

Dev vs Prod: it really depends on the language/framework/etc. that you use.

  • Sometimes, you can work on your local machine, and build containers (and test them) every now and then (a bit like pushing to staging to test, except much faster). This is a good approach if building new containers takes a while (e.g. if you use ADD followed by expensive build/dependency steps).
  • If container builds are fast, you can rebuild and redeploy new containers continuously, each time you change something.
  • You can also use two slightly different Dockerfiles. Suppose that your source lives in /myapp. In the development Dockerfile, you declare /myapp to be a VOLUME, and the developer is expected to bind-mount their local copy of the source to /myapp. In the production Dockerfile, you use ADD to copy the source into /myapp. There will also be minor differences in the build process (both variants are sketched below).
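Here is a rough sketch of both variants (the base image, bundle step and start command are placeholders for whatever your apps actually need):

    # dev/Dockerfile -- source is bind-mounted at run time
    FROM myorg/ruby-base
    VOLUME /myapp
    CMD ["/myapp/run.sh"]

    # prod/Dockerfile -- source is baked into the image at build time
    FROM myorg/ruby-base
    ADD . /myapp
    RUN cd /myapp && bundle install
    CMD ["/myapp/run.sh"]

In development you would run something like docker run -v $PWD:/myapp myorg/web-dev, while the production image is entirely self-contained.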

The last method is not ideal (since it's better when dev and prod environments are as close as possible!) but in some cases (when building a new container takes a long time) it helps a lot.

jpetazzo answered Oct 06 '22 00:10