Hi, I have a question regarding the performance, reliability, and growth potential of a few setups that I've encountered. I'm far from a Docker or cluster expert, so any advice or tips would be really appreciated.
Typical MEAN stack web application running on Node v6.9.4. Nothing fancy, standard setup.
a) Standard Linux server with NGINX (reverse proxy) and NodeJS
b) Standard Linux server with NGINX (reverse proxy) and NodeJS running as a cluster, using Node's built-in Cluster module
c) "Dockerized" NodeJS app cloned 3 times (3 containers) using NGINX's load balancer. Credit for the idea goes to Anand Sankar
```nginx
# Example nginx load balance config
server app1:8000 weight=10 max_fails=3 fail_timeout=30s;
server app2:8000 weight=10 max_fails=3 fail_timeout=30s;
server app3:8000 weight=10 max_fails=3 fail_timeout=30s;
```

```yaml
# Example docker-compose.yml
version: '2'
services:
  nginx:
    build: docker/definitions/nginx
    links:
      - app1:app1
      - app2:app2
      - app3:app3
    ports:
      - "80:80"
  app1:
    build: app/.
  app2:
    build: app/.
  app3:
    build: app/.
```
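For context, those `server` entries would sit inside an `upstream` block that a proxied `server` block then references. A hypothetical complete NGINX config, with the pool name and headers assumed (the `app1`–`app3` hostnames come from the compose file's service names):

```nginx
# The three app containers grouped into an upstream pool.
upstream node_app {
    server app1:8000 weight=10 max_fails=3 fail_timeout=30s;
    server app2:8000 weight=10 max_fails=3 fail_timeout=30s;
    server app3:8000 weight=10 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;

    location / {
        # Requests are distributed round-robin (adjusted by weight)
        # across the upstream servers.
        proxy_pass http://node_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```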
d) All of the above combined. "Dockerized" NodeJS app (multiple containers) with Cluster configured inside each container, and NGINX load balancing on top of the 3 containers.
If I understand this correctly, having 3 NodeJS containers running the app, where each replica also uses Node's clustering, should lead to excellent performance.
3 containers × 4 workers would mean 12 Node processes handling all requests/responses. If that's correct, the only drawback would be needing a more powerful machine, in terms of hardware, to support this.
Anyway, my logic may be totally wrong, so I'm looking for any comments or feedback on that!
My goal is to have production-ready, stable environments that can take some load. We're not talking about thousands of concurrent connections, etc. Keeping the infrastructure scalable and flexible is a big "+".
Hopefully, the question makes sense. Sorry for the long post, but I wanted to keep it clear.
Thank you!
From my experience, options C and D are the most maintainable, and assuming you had the resources available on the server, D would likely be the most performant.
That said, have you looked into Kubernetes at all? There's a slight learning curve, but it's a great tool that allows for dynamic scaling and load balancing, and it offers much smoother deployment options than Docker Compose. The biggest drawback is that hosting a Kubernetes cluster is more expensive than a single server.
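To give a feel for it, here's a rough Kubernetes equivalent of the 3-container compose setup above: a Deployment keeps 3 replicas of the app running, and a Service load-balances traffic across them. The image name and labels are made up for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app
spec:
  replicas: 3          # same role as the app1/app2/app3 services
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
        - name: node-app
          image: my-registry/node-app:latest   # assumed image name
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: node-app
spec:
  selector:
    app: node-app
  ports:
    - port: 80
      targetPort: 8000
```

Scaling up is then a one-liner (`kubectl scale deployment node-app --replicas=6`) instead of copy-pasting service definitions in the compose file.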