
Development and production with docker with multiple sites

Currently I have 3 linode servers:

1: Cache server (Ubuntu, varnish)

2: App server (Ubuntu, nginx, rabbitmq-server, python, php5-fpm, memcached)

3: DB server (Ubuntu, postgresql + pg_bouncer)

On my app server I have multiple sites (top-level domains). Each site lives inside a virtual environment created with virtualenvwrapper. Some sites are big with a lot of traffic, and some are small with little traffic.

A typical site consists of Python (Django), Celery (beat, flower) and Gunicorn.

My current development pattern is to work inside a staging environment on the app server and commit changes to git. Then I switch to the production environment, do a git pull, run ./manage.py migrate, and restart the process with sudo supervisorctl restart sitename: but this takes time! There must be a simpler method!
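For reference, those manual steps could be wrapped in a small script. This is just a sketch of the workflow described above; the site name and project path are placeholders for your own setup:

```python
import subprocess

def deploy_commands(site, project_dir):
    """Build the command sequence for the manual deploy described above.

    `site` is the supervisor group name and `project_dir` the production
    checkout path -- both are placeholders here.
    """
    return [
        ["git", "-C", project_dir, "pull"],
        ["python", f"{project_dir}/manage.py", "migrate"],
        ["sudo", "supervisorctl", "restart", f"{site}:"],
    ]

def deploy(site, project_dir, dry_run=True):
    # Print the commands in dry-run mode; otherwise execute them in order,
    # stopping on the first failure.
    for cmd in deploy_commands(site, project_dir):
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)
```

Even just scripting it like this saves the environment-switching dance, though as the answer below discusses, Docker aims to remove most of these steps entirely.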

Therefore it seems like Docker could help simplify everything, but I can't decide on the best approach for managing all my sites and the containers inside each site.

I have looked at http://panamax.io and https://github.com/progrium/dokku, but not sure if one of them fit my needs.

Ideally I would run a development version of each site on my local machine (emulating the cache server, app server and db server), make code changes there and test them. Once I saw the changes worked, I would execute a single command to do all the heavy lifting: send the changes to the Linode servers (mostly the app server, I would think), run the migrations and restart the project on the server.

Could anyone point me in the right direction as how to achieve this?

Asked Sep 29 '22 by Tomas Jacobsen

1 Answer

I have faced the same problem. I don't claim this is the best possible answer and am interested to see what others have come up with.

There doesn't seem to be any really turnkey solution on Docker yet.

It's also been frustrating that most of the 'Django+Docker' tutorials focus on a single Django site, so they bundle the web server and everything else into the same Docker container. I think if you have multiple sites on a server you want them to share a single web server, but this quickly gets more complicated than what the tutorials present, at which point they're no longer much help.

Roughly what I came up with is this:

  • using Fig to manage containers, and the complicated Docker config that would be tedious to type as command-line options every time
  • sites are Django, on uWSGI + Nginx (no reason you couldn't have others though)
  • I have a git repo per site, plus a git repo for the 'server'
  • separate containers for the db, Nginx and each site
  • each site container has its own uWSGI instance... I do some config switching so I can either bring up a 'dev' container where uWSGI acts as a standalone web server, or a 'live' container where the uWSGI socket is exposed to the main Nginx container, which then takes over as the front-side web server
  • I'm not sure yet how useful the 'dev' uWSGI servers are; I might switch to just running the Django dev server and sharing my local code dir as a volume in the container, so I can edit and get live reloading
  • in the 'server' repo I have all the shared Dockerfiles, for the Nginx server, base uWSGI app etc.
  • in the 'server' repo I have made Fabric tasks to do my deployment (check out the server and site repos on the server, build the Docker images, run fig up etc.)
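To make the layout above concrete, a minimal fig.yml might look something like this. Service names, ports and paths here are illustrative, not taken from my actual setup:

```yaml
# fig.yml -- one container per concern; names, paths and ports are examples
db:
  image: postgres
nginx:
  build: ./nginx        # the shared front-side web server
  ports:
    - "80:80"
  links:
    - site1
site1:
  build: ./site1        # Dockerfile for this site runs its own uWSGI
  links:
    - db
  environment:
    - UWSGI_MODE=live   # switch to 'dev' to run uWSGI as a standalone HTTP server
```

Each additional site gets its own service block linked into the nginx container, and the dev/live switch is just whatever mechanism your uWSGI entrypoint reads to pick its config.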

Speaking of deployment, frankly I'm not that keen on the Docker Registry idea. It seems to mean you have to upload hundreds of megabytes of image data to the registry server each time you want to deploy a new container version. That sucks if you're on a limited-bandwidth connection at the time, and it seems very inefficient.

That's why, so far, I've decided to deploy new code via Git and build the new images on the server. I don't use a Docker Registry at all (apart from the public one for a base Ubuntu image). This seems to go against the grain of Docker practice a bit, so I'm curious for feedback.
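Written out as a plain Python helper rather than my actual Fabric tasks (repo paths and the site name are hypothetical), the git-based deploy boils down to these steps:

```python
def build_deploy_steps(server_repo, site_repo, site):
    """Return the shell steps a deploy task runs on the server:
    pull both repos, rebuild the site's image there (instead of
    pushing it over the wire), then bring containers up via fig.
    All paths and names are illustrative placeholders.
    """
    return [
        f"git -C {server_repo} pull",          # shared Dockerfiles, fig.yml
        f"git -C {site_repo} pull",            # this site's code
        f"docker build -t {site} {site_repo}", # image built server-side
        "fig up -d",                           # recreate changed containers
    ]
```

In Fabric this is just a task that `run()`s each step over SSH; the point is that only git diffs travel over the network, while the heavyweight image build happens on the server itself.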

I'd strongly recommend getting stuck in and building your own solution first. If you have to spend time learning a solution like Dokku, Panamax etc that may or may not work for you (I don't think any of them are really ready yet) you may as well spend that time learning Docker directly... it will then be easier to evaluate solutions further down the line.

I tried to get on with Dokku early on in my search but had to abandon because it's not compatible with boot2docker... which means on OS X you're faced with the 'fun' of setting up your own VirtualBox vm to run the Docker daemon. It didn't seem worth the hassle of this when I wasn't certain I wanted to be stuck with how Dokku works at the end of the day.

Answered Oct 18 '22 by Anentropic