
How to create a local development environment for Kubernetes?

Update (2016-07-15)

With the release of Kubernetes 1.3, Minikube is now the recommended way to run Kubernetes on your local machine for development.
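
If you want to try it, a minimal session looks roughly like this (assuming minikube and kubectl are already installed; drivers and output vary by version):

# start a single-node local cluster (the VM/driver choice depends on your platform)
$ minikube start
# minikube configures the kubectl context for you; check that the node is ready
$ kubectl get nodes
# tear the cluster down when you're done
$ minikube delete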


You can run Kubernetes locally via Docker. Once you have a node running, you can launch a pod that runs a simple web server and mounts a volume from your host machine. When you hit the web server, it reads from the volume, so if you've changed the file on your local disk it can serve the latest version.
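
A minimal sketch of such a pod, using nginx as the web server and a hypothetical /path/on/host directory on your machine (all names and paths here are only examples):

$ kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: local-web
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: src
      mountPath: /usr/share/nginx/html   # nginx serves files from here
  volumes:
  - name: src
    hostPath:
      path: /path/on/host                # directory on the node (your machine)
EOF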


We've been working on a tool to do this. The basic idea is that you have a remote Kubernetes cluster, effectively a staging environment, and then you run code locally and it gets proxied to the remote cluster. You get transparent network access, environment variables copied over, access to volumes... as close as feasible to the remote environment, but with your code running locally and under your full control.

So you can do live development, for example. Docs are at http://telepresence.io
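
Usage looks roughly like this (the exact flags depend on the Telepresence version, and myservice / app.py are just placeholders):

# swap a deployment in the remote cluster for a proxy, and run your code locally
# with the cluster's environment variables, DNS, and volumes available to it
$ telepresence --swap-deployment myservice --run python3 app.py

# or just open a shell that behaves as if it were running inside the cluster
$ telepresence --run-shell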


The sort of "hot reload" is something we have plans to add, but is not as easy as it could be today. However, if you're feeling adventurous you can use rsync with docker exec, kubectl exec, or osc exec (all do the same thing roughly) to sync a local directory into a container whenever it changes. You can use rsync with kubectl or osc exec like so:

# rsync using osc as netcat
$ rsync -av -e 'osc exec -ip test -- /bin/bash' mylocalfolder/ /tmp/remote/folder
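
For kubectl, one way to wire this up is to give rsync a tiny wrapper script as its remote shell; the script name and paths below are made up, and rsync has to be installed inside the container:

#!/bin/bash
# kubectl-rsh.sh - lets rsync use `kubectl exec` as its transport;
# rsync invokes it as: kubectl-rsh.sh <pod> <command to run in the pod>
pod="$1"; shift
exec kubectl exec -i "$pod" -- "$@"

# sync a local folder into the pod named "test"
$ rsync -av --blocking-io -e ./kubectl-rsh.sh mylocalfolder/ test:/tmp/remote/folder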

Another great starting point is this Vagrant setup, especially if your host OS is Windows. The obvious advantages are:

  • quick and painless setup
  • easy to destroy / recreate the machine
  • implicit limit on resources
  • ability to test horizontal scaling by creating multiple nodes

The disadvantages: you need a lot of RAM, and VirtualBox is VirtualBox... for better or worse.

A mixed advantage / disadvantage is mapping files through NFS. In our setup, we created two sets of RC definitions - one that just downloads a Docker image of our application servers; the other with 7 extra lines that set up file mapping from HostOS -> Vagrant -> VirtualBox -> CoreOS -> Kubernetes pod, overriding the source code from the Docker image (see the sketch below).
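
Those extra lines are essentially a volume mounted over the code directory baked into the image. A sketch of what the dev RC might look like, assuming the Vagrant NFS share ends up at /home/core/src inside the CoreOS VM and is handed to the pod as a hostPath volume (the image name and paths are hypothetical):

$ kubectl create -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: app-dev
spec:
  replicas: 1
  selector:
    app: app-dev
  template:
    metadata:
      labels:
        app: app-dev
    spec:
      containers:
      - name: app
        image: registry.example.com/app:latest   # same image as devstable
        volumeMounts:
        - name: src
          mountPath: /srv/app        # overrides the code baked into the image
      volumes:
      - name: src
        hostPath:
          path: /home/core/src       # the folder Vagrant shares into CoreOS over NFS
EOF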

The downside of this is the NFS file cache - with it, it's problematic; without it, it's problematically slow. Even setting mount_options: 'nolock,vers=3,udp,noac' doesn't get rid of caching problems completely, but it works most of the time. Some Gulp tasks run in a container can take 5 minutes when they take 8 seconds on the host OS. A good compromise seems to be mount_options: 'nolock,vers=3,udp,ac,hard,noatime,nodiratime,acregmin=2,acdirmin=5,acregmax=15,acdirmax=15'.

As for automatic code reload, that's language specific, but we're happy with Django's devserver for Python and nodemon for Node.js. For frontend projects, you can of course do a lot with something like gulp + browserSync + watch, but for many developers it's not difficult to serve from Apache and just do a traditional hard refresh.
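
One wrinkle with NFS-mapped code: inotify events usually don't propagate across the mount, so watchers have to poll. With nodemon, for example (server.js is a placeholder):

# poll for changes instead of relying on inotify, which doesn't work across NFS mounts
$ nodemon --legacy-watch server.js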

We keep 4 sets of yaml files for Kubernetes: dev, "devstable", stage, and prod. The differences between them are:

  • env variables explicitly setting the environment (dev/stage/prod)
  • number of replicas
  • devstable, stage, and prod use Docker images
  • dev uses Docker images, and maps an NFS folder with the source code over them.

It's very useful to create a lot of bash aliases and autocompletion - I can just type rec users and it will do kubectl delete -f ... ; kubectl create -f .... If I want the whole setup started, I type recfo, and it recreates a dozen services, pulling the latest Docker images, importing the latest DB dump from the staging environment, and cleaning up old Docker files to save space.
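
As a sketch, the helper behind a shortcut like rec can be as simple as this (the directory layout is made up):

# recreate a single service from its dev yaml, e.g. `rec users`
rec() {
    kubectl delete -f "k8s/dev/$1.yaml"
    kubectl create -f "k8s/dev/$1.yaml"
}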


I've just started with Skaffold

It's really useful to apply changes in the code automatically to a local cluster.
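
A typical loop looks like this (assuming the project already has a Dockerfile and Kubernetes manifests for Skaffold to pick up):

# generate a skaffold.yaml from the Dockerfile and Kubernetes manifests it finds
$ skaffold init
# build, deploy, and then rebuild/redeploy automatically on every code change
$ skaffold dev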

To deploy a local cluster, the best way is Minikube or just Docker for Mac and Windows; both include a Kubernetes interface.