I'm migrating a number of applications from AWS ECS to Azure AKS, and since this is my first production Kubernetes deployment, I'd like to make sure it's set up correctly from the off.
The applications being moved use resources to varying degrees: some are more memory-intensive, others more CPU-intensive, and all run at different scales.
After some research, I'm not sure which would be the better approach: running a single large cluster with each application in its own Namespace, or running one cluster per application, tied together with Federation.
I should note that I'll need to monitor resource usage per application for cost management (amongst other things), and that most of the applications need to communicate with each other.
I'm able to set up both layouts and I'm sure either would work, but I'm unsure of the pros and cons of each approach, whether I should avoid one altogether, or whether I should be considering other options.
A Kubernetes cluster is a group of nodes used to run containerized applications, so if you use Kubernetes for your application, you have at least one cluster. A cluster usually contains at least one control-plane (master) node and one or more worker nodes.
Multi-cluster Kubernetes is exactly what it sounds like: it's an environment in which you are using more than one Kubernetes cluster. These clusters may be on the same physical host, on different hosts in the same data center, or even in different clouds in different countries, for a multi-cloud environment.
Multiple clusters scale further, which removes scaling impediments for developers. On the other hand, managing multiple Kubernetes clusters introduces additional overhead in exactly the areas that are easy with a single cluster: authentication, upgrades, visibility, and deployments.
More specifically, a single Kubernetes cluster is designed to accommodate configurations that meet all of the following criteria:

- No more than 110 pods per node
- No more than 5,000 nodes
- No more than 150,000 total pods
You can configure access to multiple clusters using kubeconfig files. Once your clusters, users, and contexts are defined in one or more configuration files, you can quickly switch between clusters with the `kubectl config use-context` command.
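As a minimal sketch of what that workflow looks like, assuming two placeholder context names (`aks-app-a`, `aks-app-b`) and a placeholder resource group (`my-rg`):

```sh
# List the contexts defined in your kubeconfig
kubectl config get-contexts

# Point kubectl at one cluster, check it, then switch to another
kubectl config use-context aks-app-a
kubectl get nodes

kubectl config use-context aks-app-b
kubectl get nodes

# On AKS, `az aks get-credentials` merges a cluster's credentials and
# context into ~/.kube/config for you
az aks get-credentials --resource-group my-rg --name aks-app-a
```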
Working with a single cluster generally means easier management of user authentication, Kubernetes version upgrades, cluster visibility, node management, and application deployments with CI/CD. Security, however, is a serious concern, because a cluster provides no isolation between workloads by default; you have to claw that isolation back yourself with Namespaces, ResourceQuotas, and NetworkPolicies, as sketched below.
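Here's a rough sketch of carving up a single cluster per application, assuming a hypothetical app named `billing` (the quota values are made up): a Namespace per application plus a ResourceQuota caps what each app can consume, and the namespace also gives you a per-application handle for the cost reporting you mentioned.

```sh
kubectl create namespace billing

kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: billing-quota
  namespace: billing
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
EOF

# Per-namespace usage, handy for cost attribution (needs metrics-server,
# which AKS deploys by default)
kubectl top pods -n billing
```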
Zalando runs close to 100 Kubernetes clusters, compared to Monzo's single cluster. In this 2018 KubeCon video from the same event as Monzo's keynote, Mikkel Larsen, a software engineer at Zalando, describes their multi-cluster approach, citing team autonomy and reliability as two advantages.
Because you are at the beginning of your Kubernetes journey, I would go with separate clusters for each stage you have (or at least separate dev and prod). It is very easy to take a cluster down (I've done it several times through resource starvation). And if your network policies aren't set up correctly, you may find that services from different stages/namespaces (like test and sandbox) can talk to each other, or that a pipeline meant to deploy to dev changes something in another namespace (a default-deny sketch follows below). Why risk production being affected by dev work?
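To make that concrete, here's a minimal default-deny ingress policy, assuming a hypothetical `dev` namespace; without something like this, pods in any namespace can reach services in any other namespace by default. Note that on AKS, NetworkPolicy objects are only enforced if the cluster was created with a network policy option (Azure or Calico):

```sh
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: dev
spec:
  podSelector: {}   # empty selector = every pod in the namespace
  policyTypes:
    - Ingress       # no ingress rules listed, so all inbound traffic is denied
EOF
```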
Even though you don't have to upgrade the control plane yourself, AKS still has its own versions and feature flags, and it is better to test them on a separate cluster before moving to production.
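A rough sketch of trialling a version bump on a non-production cluster first; `my-rg`, `aks-dev`, and the version number are placeholders:

```sh
# See which Kubernetes versions the dev cluster can move to
az aks get-upgrades --resource-group my-rg --name aks-dev --output table

# Upgrade dev, verify your workloads behave, then repeat on production
az aks upgrade --resource-group my-rg --name aks-dev \
  --kubernetes-version 1.29.2
```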
So my initial recommendation would be to set some hard boundaries: separate clusters. Later, once you've gained more experience with AKS and Kubernetes, you can revisit that decision.