
Running Kubernetes on vCenter

So Kubernetes has a pretty novel network model that, I believe, is based on what it perceives to be a shortcoming of default Docker networking. While I'm still struggling to understand (1) what it perceives the actual shortcoming(s) to be, and (2) what Kubernetes' general solution is, I've now reached a point where I'd like to just implement the solution, which will perhaps clue me in a little better.

Whereas the rest of the Kubernetes documentation is very mature and well-written, the instructions for configuring the network are sparse, largely incoherent, and span many disparate articles instead of being collected in one place.

I'm hoping someone who has set up a Kubernetes cluster before (from scratch) can help walk me through the basic procedures. I'm not interested in running on GCE or AWS, and for now I'm not interested in using any kind of overlay network like flannel.

My basic understanding is:

  1. Carve out a /16 subnet for all your pods. This will limit you to some 65K pods, which should be sufficient for most normal applications. All IPs in this subnet must be "public" and not inside of some traditionally-private (classful) range.
  2. Create a cbr0 bridge somewhere and make sure it's persistent (but on which machine?)
  3. Remove/disable the MASQUERADE rule installed by Docker.
  4. Somehow configure iptables routes (again, where?) so that each pod spun up by Kubernetes receives one of those public IPs.
  5. Some other setup is required to make use of load balanced Services and dynamic DNS.
  6. Provision 5 VMs: 1 master, 4 minions
  7. Install/configure Docker on all 5 VMs
  8. Install/configure kubectl, controller-manager, apiserver, and etcd on the master, and run them as services/daemons
  9. Install/configure kubelet and kube-proxy on each minion and run them as services/daemons
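If it helps, steps 2–4 above might look roughly like the following on a single minion. This is only a sketch: the 10.244.0.0/16 cluster range, the per-node /24 subnets, the node IPs, and the Docker options are all assumptions on my part, the exact Docker MASQUERADE rule varies by version, and most guides do this through distro-specific config files rather than ad-hoc commands.

```shell
# On one minion, assuming this node was assigned the pod subnet
# 10.244.1.0/24 out of a cluster-wide 10.244.0.0/16 (hypothetical values).

# Step 2: create a persistent cbr0 bridge holding the node's pod subnet.
ip link add cbr0 type bridge
ip addr add 10.244.1.1/24 dev cbr0
ip link set cbr0 up

# Step 3: remove the MASQUERADE rule Docker installs, so pod traffic is
# not source-NATed when it leaves the node. The exact rule differs by
# Docker version; list it first with `iptables -t nat -L POSTROUTING -n`.
iptables -t nat -D POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE

# Step 4 (part 1): point Docker at cbr0 instead of docker0 so containers
# draw IPs from the node's pod subnet. Set this in /etc/default/docker
# (or the systemd unit) and restart Docker:
#   DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false"

# Step 4 (part 2): make each node's pod subnet routable from the other
# nodes, e.g. with static routes per node (or one route on the upstream
# router instead):
ip route add 10.244.2.0/24 via 192.168.0.12   # minion-2's node IP (hypothetical)
```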

This is the best I could collect from 2 full days of research, and these steps are likely wrong (or misdirected), out of order, and utterly incomplete.

I have unbridled access to create VMs in an on-premise vCenter cluster. If changes need to be made to VLAN/Switches/etc. I can get infrastructure involved.

How many VMs should I set up for Kubernetes (for a small-to-medium sized cluster), and why? What exact corrections do I need to make to my vague instructions above, so as to get networking totally configured?

I'm good with installing/configuring all the binaries. Just totally choking on the network side of the setup.

asked Sep 30 '15 by smeeb


1 Answer

For a general introduction to Kubernetes networking, I found http://www.slideshare.net/enakai/architecture-overview-kubernetes-with-red-hat-enterprise-linux-71 pretty helpful.

On your items (1) and (2): IMHO they are nicely described in https://github.com/kubernetes/kubernetes/blob/master/docs/admin/networking.md#docker-model . From my experience, what is the problem with the Docker NAT type of approach? Sometimes you need to configure into the software all the endpoints of all nodes (172.168.10.1:8080, 172.168.10.2:8080, etc.). In Kubernetes you can simply configure the pods' IPs into each other's pods; Docker complicates this with NAT indirection. See also Setting up the network for Kubernetes for a nice answer.
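To make that contrast concrete, here is a sketch (all IPs and ports are hypothetical, and the image choice is just an example):

```shell
# Docker's default model: the container shares the node's IP, so a host
# port must be published, and consumers must track <node-ip>:<host-port>
# pairs for every node.
docker run -d -p 8080:80 nginx
curl http://172.168.10.1:8080/        # node IP + mapped host port

# Kubernetes model: the pod has its own routable IP, so other pods reach
# it directly on the port the process actually listens on, with no NAT:
curl http://10.244.2.7:80/            # pod IP, real container port
```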

Comments on your other points. On point 1:

All IPs in this subnet must be "public" and not inside of some traditionally-private (classful) range.

The "internal network" of Kubernetes normally uses private IPs; see the slides above, which use 10.x.x.x as an example. I guess the confusion comes from some Kubernetes texts that refer to "public" as meaning "visible outside of the node"; they do not mean the Internet-public IP address range.
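As a concrete example of using a private range (the CIDR values here are hypothetical, and exact flag names can vary between Kubernetes versions), the master components are typically told about these ranges at startup:

```shell
# Pod network: a private /16, e.g. 10.244.0.0/16, from which each node
# gets a smaller subnet (e.g. a /24) for its pods.
kube-controller-manager --cluster-cidr=10.244.0.0/16

# Service ("portal") IPs come from a separate, non-overlapping private
# range that only needs to be meaningful inside the cluster:
kube-apiserver --service-cluster-ip-range=10.0.0.0/16
```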

answered Oct 21 '22 by Stefan Vaillant