I was experimenting with Kubernetes on AWS, using t2.medium EC2 instances with 20 GB of disk space each, and one of the nodes ran out of disk space after a few days. The culprit seems to be a combination of Docker images and logs.
From what I've read, Kubernetes has its own garbage collection to manage Docker's disk usage, as well as log rotation. I'm guessing 20 GB is not enough for Kubernetes to manage disk usage on its own. What's a safe disk size for a production environment?
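For reference, the kubelet's image garbage collection, disk-pressure eviction, and container log rotation are all tunable. A minimal sketch using the legacy command-line flags (the threshold values below are illustrative, not recommendations; newer clusters set the equivalent fields in the kubelet configuration file instead):

```shell
# Illustrative kubelet disk-management flags (example values only).
# --image-gc-high-threshold: start deleting unused images at this disk usage %.
# --image-gc-low-threshold:  stop deleting once usage falls back below this %.
# --eviction-hard:           evict pods when node filesystem space runs low.
# --container-log-max-size / --container-log-max-files: per-container log rotation.
kubelet \
  --image-gc-high-threshold=85 \
  --image-gc-low-threshold=80 \
  --eviction-hard='nodefs.available<10%' \
  --container-log-max-size=10Mi \
  --container-log-max-files=5
```

If these thresholds fire constantly, that is usually a sign the disk is simply too small for the image churn of the workload, which matches the symptom described above.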
The total number of nodes required for a cluster varies depending on the organization's needs. However, as a general guideline for any cluster where availability is a priority, run an odd number of control-plane (master) nodes — typically three, so that etcd can maintain quorum if one fails — plus as many worker nodes as your workload demands.
A Kubernetes cluster that handles production traffic should have a minimum of three nodes. Masters manage the cluster, while worker nodes host the running applications. When we deploy applications on Kubernetes, we tell the master to start our containers, and it schedules them onto the worker nodes' kubelet agents.
Pods are the smallest, most basic deployable objects in Kubernetes. A Pod represents a single instance of a running process in your cluster. Pods contain one or more containers, such as Docker containers. When a Pod runs multiple containers, the containers are managed as a single entity and share the Pod's resources.
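As a concrete sketch of a multi-container Pod, here is a hypothetical manifest (the names and images are illustrative, and applying it requires a running cluster): both containers share the Pod's network namespace and any declared volumes.

```shell
# Hypothetical two-container Pod: an nginx server plus a sidecar.
# Both containers can reach each other over localhost and are
# scheduled, started, and deleted together as one unit.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web
    image: nginx:1.25
  - name: log-sidecar
    image: busybox:1.36
    command: ["sh", "-c", "tail -f /dev/null"]
EOF
```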
Each node in your cluster must have at least 300 MiB of memory. A few of the steps below require you to run the metrics-server service in your cluster; if you already have metrics-server running, you can skip those steps. If the resource metrics API is available, the output includes a reference to metrics.k8s.io.
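To check whether a node clears that bar, you can read available memory directly on the node (with metrics-server installed, `kubectl top nodes` reports comparable per-node figures). A sketch assuming a Linux node, where the 300 MiB threshold comes from the guideline above:

```shell
# Fail if this node has fewer than 300 MiB of memory available.
# Reads MemAvailable (reported in KiB) from /proc/meminfo.
avail_kib=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
avail_mib=$((avail_kib / 1024))
if [ "$avail_mib" -lt 300 ]; then
  echo "only ${avail_mib} MiB available" >&2
  exit 1
fi
echo "${avail_mib} MiB available"
```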
When following the standard installation on GKE as described in the quickstart guide, you'll end up with 3 x n1-standard-1 nodes (see machine types) with 100 GB of storage per node.
Looking at the nodes right after cluster creation gives you these numbers for disk space:

```
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       1.2G  455M  767M  38% /
devtmpfs        1.9G     0  1.9G   0% /dev
tmp             1.9G   24K  1.9G   1% /tmp
run             1.9G  684K  1.9G   1% /run
shmfs           1.9G     0  1.9G   0% /dev/shm
/dev/sda1        95G  2.4G   92G   3% /var
/dev/sda8        12M   28K   12M   1% /usr/share/oem
media           1.9G     0  1.9G   0% /media
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
tmpfs           256K     0  256K   0% /mnt/disks
tmpfs           1.0M  120K  904K  12% /var/lib/cloud
overlayfs       1.0M  124K  900K  13% /etc
```
These numbers might give you a starting point, but as others have pointed out already, the rest depends on your specific requirements.
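If you want to keep an eye on the partition that actually fills up — on the GKE image above that is /var, where container images and logs live — a short snippet like this reports its usage percentage (the path is an assumption and varies by distribution):

```shell
# Print the usage percentage of the filesystem backing a given path.
# /var is where container images and logs usually live; adjust as needed.
path=/var
df -P "$path" | awk 'NR==2 {print $5}'
```

Running this periodically (or alerting on it) would have flagged the original 20 GB nodes well before they ran out of space.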