I have the same problem as the following: Dual nginx in one Kubernetes pod
In my Kubernetes Deployment template, I have 2 containers that both use port 80. I understand that containers within a Pod are actually under the same network namespace, which enables accessing another container in the Pod via localhost or 127.0.0.1. That means the containers can't use the same port.

It's very easy to achieve this with the help of docker run or docker-compose, by using 8001:80 for the first container and 8002:80 for the second container.

Is there any similar or better solution to do this in a Kubernetes Pod, without separating these 2 containers into different Pods?
The default protocol for Services is TCP; you can also use any other supported protocol. As many Services need to expose more than one port, Kubernetes supports multiple port definitions on a Service object. Each port definition can have the same protocol, or a different one.
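As a sketch of what such a multi-port Service could look like (all names here are illustrative, not from the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # illustrative name
spec:
  selector:
    app: my-app           # illustrative label
  ports:
    # with more than one port, each entry must have a name
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
    - name: https
      protocol: TCP
      port: 443
      targetPort: 8443
```

Note that when a Service defines more than one port, Kubernetes requires every port entry to be named.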
Pods that run multiple containers that need to work together. A Pod can encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources.
Multiple containers in the same Pod share the same IP address. They can communicate with each other by addressing localhost. For example, if a container in a Pod wants to reach another container in the same Pod on port 8080, it can use the address localhost:8080.
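A minimal sketch of this, assuming an nginx container and a busybox helper (both names and the probe loop are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers    # illustrative name
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
    - name: helper
      image: busybox
      # the helper can reach nginx on localhost:80 because both
      # containers share the Pod's network namespace
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80; sleep 10; done"]
```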
Remember that every container in a pod runs on the same node, and you can't independently stop or restart containers; the usual best practice is to run one container in a pod, with additional containers only for things like an Istio network-proxy sidecar.
Basically I totally agree with @David's and @Patric's comments, but I decided to add a few more things and expand them into an answer.
I have the same problem as the following: Dual nginx in one Kubernetes pod
And there is already a pretty good answer for that problem in the mentioned thread. From a technical point of view it provides a ready solution to your particular use case; however, it doesn't question the idea itself.
It's very easy to achieve this with the help of docker run or docker-compose, by using 8001:80 for the first container and 8002:80 for the second container.
It's also very easy to achieve in Kubernetes. Simply put the two containers in different Pods and you will not have to manipulate the nginx config to make it listen on a port other than 80. Note that the two docker containers you mentioned don't share a single network namespace, and that's why they can both listen on port 80, which is mapped to different ports on the host system (8001 and 8002). This is not the case with Kubernetes Pods. Read more about microservices architecture, and especially how it is implemented on k8s, and you'll notice that placing a few containers in a single Pod is a really rare use case and definitely should not be applied in a case like yours. There should be a good reason to put 2 or more containers in a single Pod. Usually the second container has some complementary function to the main one.
There are 3 design patterns for multi-container Pods, commonly used in Kubernetes: sidecar, ambassador and adapter. Very often all of them are simply referred to as sidecar containers.
Note that 2 or more containers coupled together in a single Pod in all the above-mentioned use cases have totally different functions. Even if you put more than just one container in a single Pod, in practice it is never a container of the same type (like two nginx servers listening on different ports, as in your case). They should be complementary, and there should be a good reason why they are put together, why they should start and shut down at the same time, and why they should share the same network namespace. A sidecar container with a monitoring agent running in it has a complementary function to the main container, which can be e.g. an nginx webserver. You can read more about container design patterns in general in this article.
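A sketch of such a sidecar pairing, assuming an nginx main container and a log-tailing helper that share a volume (the names, images, and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar  # illustrative name
spec:
  volumes:
    - name: logs
      emptyDir: {}        # shared scratch volume for both containers
  containers:
    - name: nginx         # main container
      image: nginx
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-tailer    # sidecar with a complementary function
      image: busybox
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /logs
```

The sidecar has a different job than the main container, which is exactly the pattern described above, unlike running two identical nginx servers side by side.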
I don't have a very firm use case, because I'm still very new to Kubernetes and the concept of a cluster.
So definitely don't go this way if you don't have particular reason for such architecture.
My initial plan for the cluster was to put all the containers of the system into one pod, so that I could replicate this pod as many times as I want.
You don't need a single Pod to replicate it. You can have a lot of ReplicaSets in your cluster (usually managed by Deployments), each of them taking care of running the declared number of replicas of a Pod of a certain kind.
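For instance, a minimal Deployment that keeps 3 replicas of a single-container nginx Pod running might look like this (names and labels are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment  # illustrative name
spec:
  replicas: 3             # the Deployment's ReplicaSet keeps 3 Pods running
  selector:
    matchLabels:
      app: nginx
  template:               # the Pod template that gets replicated
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```

Each of your components would get its own Deployment like this, and each can be scaled independently.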
But according to all the feedback that I have now, it seems like I'm going in the wrong direction.
Yes, this is definitely the wrong direction, but that was actually already said. I'd only like to highlight why exactly this direction is wrong. Such an approach is totally against the idea of microservices architecture, which is what Kubernetes is designed for. Putting all your infrastructure in a single huge Pod and binding all your containers tightly together makes no sense. Remember that a Pod is the smallest deployable unit in Kubernetes, and when one of its containers crashes, the whole Pod crashes. There is no way you can manually restart just one container in a Pod.
I'll review my structure and try with the suggests you all provided. Thank you, everyone! =)
This is a good idea :)
I believe what you need to do is specify a different containerPort for each container in the pod. Kubernetes allows you to specify the port each container exposes using this parameter in the pod definition file. You can then create Services pointing to the same pod but different ports.
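A rough sketch of that approach, assuming the second container's server has actually been reconfigured to listen on 8080 (containerPort alone is informational and does not change what port a process binds; all names here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dual-web          # illustrative name
  labels:
    app: dual-web
spec:
  containers:
    - name: web-a
      image: nginx        # listens on 80 by default
      ports:
        - containerPort: 80
    - name: web-b
      image: nginx        # its config must be changed to listen on 8080
      ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web-a
spec:
  selector:
    app: dual-web
  ports:
    - port: 8001
      targetPort: 80      # routes to the first container
---
apiVersion: v1
kind: Service
metadata:
  name: web-b
spec:
  selector:
    app: dual-web
  ports:
    - port: 8002
      targetPort: 8080    # routes to the second container
```

This mirrors the docker 8001:80 / 8002:80 mapping from the question, but note that both Services select the same Pod and only the target port differs.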