 

Kubernetes: multiple pods or multiple deployments?


I am using kubernetes to deploy a simple application. The pieces are:

  • a rabbitMQ instance
  • a stateless HTTP server
  • a worker that takes jobs from the message queue and processes them

I want to be able to scale the HTTP server and the worker up and down independently of each other. Would it be more appropriate for me to create a single deployment containing one pod for the HTTP server and one for the worker, or separate deployments for the HTTP server / worker?

asked Jun 05 '18 by Alex Flint

People also ask

Why does Kubernetes use multiple pods?

Why does Kubernetes allow more than one container in a Pod? Containers in a Pod run on a "logical host": they share the same network namespace (same IP address and port space) and IPC namespace, and can optionally share volumes. This lets the containers communicate efficiently and ensures data locality.

Which is better pod or deployment?

A pod is the smallest unit of Kubernetes, used to house one or more containers and run applications in a cluster, while a deployment is a higher-level object that manages a set of identical pods, handling replication and rollouts.

Can a Kubernetes deployment have multiple pods?

A Deployment is meant to represent a single group of Pods fulfilling a single purpose together. You can have many Deployments working together in the virtual network of the cluster. To access a Deployment that may consist of many Pods running on different nodes, you create a Service.
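As a minimal sketch of that last point, a Service selects the Pods of a Deployment by label and gives them one stable address (the names and ports here are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: http-server
spec:
  selector:
    app: http-server   # must match the labels in the Deployment's Pod template
  ports:
    - port: 80         # port clients inside the cluster connect to
      targetPort: 8080 # port the container actually listens on
```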

How many pods should I use Kubernetes?

For these reasons, Kubernetes recommends a maximum number of 110 pods per node. Up to this number, Kubernetes has been tested to work reliably on common node types.


1 Answer

You should definitely create separate Deployments for the HTTP server and the worker, for the following reasons:

  • Their scaling characteristics are different, so it does not make sense to put them in the same Deployment.

  • The metrics you scale on will differ too. For the HTTP server it might be requests per second (RPS), while for the worker it will be the number of pending items in the queue. With separate Deployments you can create an HPA for each and scale on whichever parameter suits it best.

  • The metrics and logs you want to collect and measure for each are different as well, which is another reason to keep them separate.
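The HPA point above can be sketched as two independent autoscalers, one per Deployment. This is an illustrative fragment, not a drop-in config: the Deployment names are assumed, and the worker example presumes a metrics adapter (e.g. prometheus-adapter) already exposes the RabbitMQ queue depth as an external metric under the name shown.

```yaml
# HPA for the HTTP server: scale on CPU as a proxy for request load.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: http-server
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: http-server        # assumed Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
---
# HPA for the worker: scale on queue depth via an external metric.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker             # assumed Deployment name
  minReplicas: 1
  maxReplicas: 20
  metrics:
    - type: External
      external:
        metric:
          name: rabbitmq_queue_messages_ready   # assumed metric name from the adapter
        target:
          type: AverageValue
          averageValue: "30"                    # target ~30 ready messages per worker
```

Because each HPA targets its own Deployment, the two tiers scale completely independently.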

I think the Single Responsibility Principle fits well here too: keeping them in the same pod/deployment would unnecessarily mix things up.
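Concretely, the separation amounts to two independent Deployments whose `replicas` counts (and HPAs) can move on their own. A minimal sketch, with placeholder names and images:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-server
spec:
  replicas: 3              # scaled independently of the worker
  selector:
    matchLabels:
      app: http-server
  template:
    metadata:
      labels:
        app: http-server
    spec:
      containers:
        - name: http-server
          image: example/http-server:1.0   # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  replicas: 5              # scaled on queue depth, independently of the HTTP tier
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: example/worker:1.0        # placeholder image
```

Each tier can then be resized without touching the other, e.g. `kubectl scale deployment worker --replicas=10`.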

answered Oct 11 '22 by Vishal Biyani