We have no continuous integration setup yet, but we want to deploy very frequently, about once a day.
We have a pretty standard Django application with a separate Postgres server. We use ordinary rented VMs (no Amazon or Rackspace).
How can we minimize the downtime of our application? Ideally we would have zero downtime. We thought about a setup with two identical application servers and two database servers, and deploying to one app/db server pair after the other.
The problem is keeping the data consistent. While one app/db pair is being updated, the pair running the old code can keep serving users. But if users write to that database during the update, those writes are lost when we switch over to the updated pair, especially when we push schema migrations.
How can we handle this? It must be a very common problem, but I can't find good answers. How do you deal with it?
If you have no schema migrations, here is a practical scenario:
Keep two versions of your Django processes (A and B), controlled by, say, supervisor. Put an nginx process in front of them that forwards all requests to A. To deploy, upload version B to the server, start Django process B with supervisor, change your nginx config to point to B, and then reload nginx.
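The switch step above can be sketched in a small script. This is a minimal illustration, not the answer's actual setup: the config path, the ports, and the assumption that the upstream is a single `proxy_pass` line are all mine.

```python
# Hypothetical sketch of the nginx switch: rewrite the upstream port in
# the config file, then ask nginx to reload gracefully. The path, ports,
# and config layout are illustrative assumptions.
import re
import subprocess

NGINX_CONF = "/etc/nginx/sites-enabled/myapp.conf"  # assumed path

def switch_upstream(conf_text: str, new_port: int) -> str:
    """Point the proxy_pass upstream at the Django process on new_port."""
    return re.sub(
        r"proxy_pass http://127\.0\.0\.1:\d+;",
        f"proxy_pass http://127.0.0.1:{new_port};",
        conf_text,
    )

def deploy(new_port: int) -> None:
    with open(NGINX_CONF) as f:
        conf = f.read()
    with open(NGINX_CONF, "w") as f:
        f.write(switch_upstream(conf, new_port))
    # "nginx -s reload" swaps workers gracefully: old workers finish
    # their in-flight requests while new workers use the new upstream,
    # which is what makes this step zero-downtime.
    subprocess.run(["nginx", "-s", "reload"], check=True)
```

The graceful reload is the key: no listening socket is ever closed, so clients never see a connection refused during the A-to-B switch.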
If you have schema migrations, things get more complicated. Your options include:
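The answer breaks off here, so the author's list of options is missing. For completeness, one commonly cited option (my addition, not the author's) is to make each migration backward compatible: run an additive "expand" step that old code can ignore, deploy the new code, and only later run a destructive "contract" step. A minimal illustration using sqlite3 (table and column names are made up):

```python
# Sketch of the expand/contract idea: the expand step only adds a
# nullable column, so code written before the migration keeps working
# against the migrated schema. Names here are illustrative.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users (name) VALUES ('alice')")  # old code writing

# Expand: additive, backward-compatible migration (no NOT NULL, no rename).
db.execute("ALTER TABLE users ADD COLUMN email TEXT")

# Old code still writes successfully against the expanded schema...
db.execute("INSERT INTO users (name) VALUES ('bob')")
# ...while new code can already use the new column.
db.execute("INSERT INTO users (name, email) VALUES ('carol', 'c@example.com')")

rows = db.execute("SELECT name, email FROM users ORDER BY id").fetchall()
print(rows)
# -> [('alice', None), ('bob', None), ('carol', 'c@example.com')]
```

Because both the old and the new code run correctly against the expanded schema, you can migrate the database before switching server pairs and no writes are lost during the window where both versions are live.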