
I don't fully understand how containerisation doesn't lead to over-provisioning instances from the start

I understand the basics of how containers work and how they differ from running virtual machines. I also understand auto scaling when resources are low, and how services such as AWS can horizontally scale and provision more resources for you.

What I don't understand, however, is how container management technologies such as Swarm or Kubernetes stop over-provisioning.

In my mind - you still have to have the resources available in order to add more containers as the container management solution is only managing the containers themselves. Correct?

So if I had an EC2 instance (in the AWS world) which I was using for my application, and Kubernetes was running on it to manage containers for my application, I would still need auto scaling and would have to spin up another EC2 instance if the VM itself was being pushed to capacity by my application.

Perhaps because I've not worked with container orchestration yet I can't grasp the mechanics of this, but in principle I don't see how this works harmoniously.

asked Jun 01 '20 by tom808

2 Answers

When you consider containers, you cannot think in terms of a single application or service per host.

Traditionally, people would run one or more instances each dedicated to a single application. With containers, you have one application per container, so an individual host may run multiple applications. This makes better use of CPU and memory resources and means you need fewer hosts across the board.
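As a concrete sketch of that idea (the image names and memory limits here are hypothetical, not from the answer), a Docker Compose file could pack several unrelated applications onto one host, each in its own container:

```yaml
# docker-compose.yml -- sketch only: three apps sharing a single host,
# where each would traditionally have needed its own instance
services:
  web:
    image: example/web-app:1.0       # hypothetical image
    mem_limit: 256m                  # cap memory so apps can coexist
  api:
    image: example/api-service:1.0   # hypothetical image
    mem_limit: 512m
  worker:
    image: example/queue-worker:1.0  # hypothetical image
    mem_limit: 128m
```

The per-container limits are what let an orchestrator pack workloads densely without one application starving the others.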

When you look at optimizing containers, this is when people start to break down larger applications into services and microservices to help distribute key functionality into smaller and more scalable pieces of code.

Depending on the containerisation layer, you can also use dynamic port mappings; this lets you run multiple copies of the same container on the same host, each with a unique port.
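A minimal sketch of dynamic port mapping with Docker Compose: publishing a container port without pinning a host port lets Docker assign a free ephemeral port to each replica, so identical containers can coexist on one host.

```yaml
# Sketch: omitting the host-side port lets Docker pick a random
# free host port for each replica of the same image.
services:
  web:
    image: nginx:alpine
    ports:
      - "80"   # container port 80 -> ephemeral host port, unique per replica
```

Running something like `docker compose up --scale web=3` would then start three replicas, each reachable on a different host port.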

Finally, looking at AWS: if you don't want to be scaling physical hosts yourself, a service was released in 2018 and expanded in 2019 to include Kubernetes. This service is Fargate, and it allows you to run your cluster in a serverless way.

answered Oct 20 '22 by Chris Williams


Kubernetes, for example, has the Cluster Autoscaler, which increases or decreases the size of a cluster by adding or removing nodes, based on the presence of pending pods and node utilisation metrics. The implementation varies with the platform: Google Kubernetes Engine, Amazon EKS, unmanaged AWS EC2, and others.

But if the infrastructure allows it, Kubernetes can communicate with the underlying cloud provider and provision new VMs to host new nodes as necessary, and remove them as well. It does not need to wait for a human operator to provision these instances.
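To make the mechanism concrete (the Deployment name, image, and resource figures below are hypothetical): each pod declares resource requests, and when the scheduler cannot fit a pod on any existing node, the pod stays Pending and the Cluster Autoscaler provisions a new node for it.

```yaml
# Sketch: if no node has 1 CPU / 2Gi free per pod, some of these
# replicas stay Pending, which triggers the Cluster Autoscaler
# to ask the cloud provider for a new VM/node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # hypothetical name
spec:
  replicas: 5
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: example/my-app:1.0   # hypothetical image
          resources:
            requests:
              cpu: "1"
              memory: 2Gi
```

The resource requests are the key input: without them the scheduler has no way to know a node is "full", and the autoscaler has nothing to react to.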

answered Oct 20 '22 by Adi Dembak