Why should a production Kubernetes cluster have a minimum of three nodes? [closed]

The first section of the official Kubernetes tutorial states that,

A Kubernetes cluster that handles production traffic should have a minimum of three nodes.

but gives no rationale for why three is preferred. Is three desirable over two in order to avoid a split-brain scenario, merely to allow for greater availability, or to cater to something specific to the internals of Kubernetes? I would have thought a split-brain scenario would only happen with multiple Kubernetes clusters (each having its own master), whereas a single cluster should be able to handle at least two nodes, each, perhaps, in its own availability zone.

asked Feb 17 '18 21:02 by rjs

People also ask

What is the minimum number of nodes recommended for a production cluster?

The total number of nodes required for a cluster varies, depending on the organization's needs. However, as a basic and general guideline, have at least a dozen worker nodes and two master nodes for any cluster where availability is a priority.

What is the minimum amount of nodes that a working Kubernetes cluster can have?

A Kubernetes cluster that handles production traffic should have a minimum of three nodes. Masters manage the cluster, while the nodes host the running applications. When we deploy applications on Kubernetes, we tell the master to start our containers, and it schedules them to run on the cluster's nodes.

Why does Kubernetes have 3 master nodes?

A multi-master setup protects against a wide range of failure modes, from the loss of a single worker node to the failure of a master node's etcd service. By providing redundancy, a multi-master cluster offers your end users a highly available system.

How many nodes should a Kubernetes cluster have?

More specifically, Kubernetes is designed to accommodate configurations that meet all of the following criteria: no more than 110 pods per node, no more than 5,000 nodes, and no more than 150,000 total pods.
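For what it's worth, here is a minimal sketch of checking a planned cluster size against those published limits, using only the numbers quoted above (the helper function is hypothetical, for illustration, and not part of any Kubernetes tooling):

```python
# Back-of-the-envelope check of a planned cluster against the documented
# Kubernetes scalability limits quoted above. Hypothetical helper for
# illustration; not a real Kubernetes API.
MAX_PODS_PER_NODE = 110
MAX_NODES = 5_000
MAX_TOTAL_PODS = 150_000

def fits_limits(nodes: int, pods_per_node: int) -> bool:
    """True if the planned cluster stays within all three limits."""
    return (
        pods_per_node <= MAX_PODS_PER_NODE
        and nodes <= MAX_NODES
        and nodes * pods_per_node <= MAX_TOTAL_PODS
    )

print(fits_limits(nodes=3, pods_per_node=100))      # True: a small production cluster
print(fits_limits(nodes=5_000, pods_per_node=110))  # False: 550,000 pods exceed the 150,000 total-pod limit
```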


1 Answer

Kubernetes keeps all of its critical cluster state in etcd, which uses the Raft consensus protocol and therefore needs a majority (a quorum) of its members to agree in order to commit writes and to recover when a member fails. An instance of etcd runs on each master node. With only two members, the failure of either one leaves no majority and the cluster cannot make progress; with three members, one can fail and the remaining two still form a quorum. Three is therefore the minimum number of etcd instances you want backing a production-level cluster, and hence the minimum number of master nodes per cluster.
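To make the majority arithmetic concrete, here is a small Python sketch (purely illustrative plain arithmetic, not anything etcd or Kubernetes exposes) that computes the quorum size and the number of failures a given member count can tolerate:

```python
# Quorum arithmetic for an etcd cluster of n members (illustration only;
# this is plain arithmetic, not an etcd or Kubernetes API).

def quorum(n: int) -> int:
    """Smallest majority of n members: floor(n/2) + 1."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """How many members can fail while a majority still remains."""
    return n - quorum(n)

for n in (1, 2, 3, 5):
    print(f"{n} member(s): quorum {quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")

# Output:
# 1 member(s): quorum 1, tolerates 0 failure(s)
# 2 member(s): quorum 2, tolerates 0 failure(s)   <- no better than one member
# 3 member(s): quorum 2, tolerates 1 failure(s)   <- the smallest fault-tolerant size
# 5 member(s): quorum 3, tolerates 2 failure(s)
```

Note that two members are no better than one: the failure of either leaves the survivor without a majority, which is why the jump from two to three is the one that actually buys fault tolerance.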

answered Sep 29 '22 18:09 by Jonah Benton