I have 2 processes, P1 and P2, in my system that communicate very frequently with each other over TCP. For this reason they are both hosted on the same VM. I am thinking of eliminating the VM and instead hosting my system in containers on the physical machine. If I Dockerize my system, I have 2 options:

1. Host P1 and P2 in separate containers (one process per container).
2. Host both P1 and P2 in a single container.

Kindly guide me on the merits and demerits of the above 2 approaches.
What is the overhead involved in terms of communication latency in approach 1?
The main issue with several processes in one container is signal management: how do you (cleanly) stop all your processes?
That is the "PID 1 zombie reaping" issue, which is why, whenever you have to manage multiple processes in one container, a base image like phusion/baseimage-docker can help.
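For illustration, here is a minimal Dockerfile sketch of that approach; the image tag and the p1.sh/p2.sh launcher scripts are placeholders, not something from the question. phusion/baseimage-docker ships a small init (my_init) that runs as PID 1, forwards signals to the supervised services, and reaps zombie processes.

```dockerfile
# Sketch: run both P1 and P2 in one container under phusion/baseimage-docker.
# The tag and the p1.sh/p2.sh launcher scripts are placeholders.
FROM phusion/baseimage:jammy-1.0.1

# my_init is PID 1: it forwards signals and reaps orphaned children.
CMD ["/sbin/my_init"]

# Register each process as a runit service so both are started and supervised.
RUN mkdir -p /etc/service/p1 /etc/service/p2
COPY p1.sh /etc/service/p1/run
COPY p2.sh /etc/service/p2/run
RUN chmod +x /etc/service/p1/run /etc/service/p2/run
```

`docker stop` then sends SIGTERM to my_init, which shuts both services down cleanly instead of leaving one of them orphaned.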
The more general issue is one of microservice decoupling: if P1 and P2 are both stateful and depend on one another, keeping them in the same container can make sense.
What is the overhead involved in terms of communication latency
It depends on the type of processes involved, but the overhead is minimal as long as both processes are running on the same Docker host (even if they are in separate containers).
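To sketch what that looks like in practice (the service names, images, and port below are hypothetical), a Compose file puts both containers on the same user-defined bridge network on one host, and P1 reaches P2 by its service name, so the TCP traffic never leaves the machine:

```yaml
# Sketch: two single-process containers on one Docker host.
# Compose creates a user-defined bridge network and a DNS entry per service,
# so p1 can open a TCP connection to "p2" directly.
services:
  p1:
    image: myorg/p1:latest      # placeholder image
    environment:
      P2_ADDR: "p2:9000"        # hypothetical host:port P1 connects to
    depends_on:
      - p2
  p2:
    image: myorg/p2:latest      # placeholder image
    expose:
      - "9000"                  # hypothetical TCP port P2 listens on
```

On a single host this traffic only crosses the kernel's virtual bridge, so the overhead relative to plain loopback is small; if even that matters, `network_mode: "host"` removes the bridge at the cost of network isolation.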
It is an issue of scaling too. If you want to auto-scale P1 when, say, its usage crosses a certain threshold (heap, throughput), then with the single-container approach you would be duplicating P2 as well, even though that may not be required.
Thus, one process per container scales better and provides fine-grained management (orchestration) control.
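For example (again with hypothetical names), with one process per container you can tell the orchestrator to scale only P1:

```yaml
# Sketch: scale P1 to three replicas while P2 stays at one instance.
# Equivalent CLI form: docker compose up -d --scale p1=3
services:
  p1:
    image: myorg/p1:latest   # placeholder image
    deploy:
      replicas: 3
  p2:
    image: myorg/p2:latest   # placeholder image
```

With the single-container approach, the only unit you could replicate would be the P1+P2 pair.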
As far as latency is concerned, it really depends on your deployment architecture for the containers. If both containers are hosted on the same machine, the latency is going to be insignificant, while if they are in, say, two different AWS availability zones, it starts to have an impact.