
Scalable Node.js application architecture

In the past, I only played with Node.js on my local machine, so I only have experience with single-process Node.js applications. Now I would like to create a web application that I can publish on the web.

This web app would be something like a multiplayer game - using Socket.IO for client-server communication, Express for handling HTTP requests, Grunt for task management, and so on. I would also like to use other npm packages for various tasks.

I would like to design the architecture of this application to

  • enable horizontal scalability (so that later, when I have lots of visitors, I don't have to rewrite the whole app)
  • minimize the dependencies on different execution environments (to maximize portability)

How can I achieve this using Node?

I guess the high-level architecture would consist of:

  • Different server processes (each process would run an instance of Express and would handle the incoming HTTP requests).
  • There should be a load balancer somewhere.
  • Optionally: background processes which could run periodically and process the "shared data"

Since my application would be a multiplayer app where each user can interact with the other online users, I need to store some common state ("shared data") somewhere that can be shared between those processes.

To keep things simple, at first I don't have to persist this shared data, so I think I should use an in-memory data store like Redis.

The big picture would look something like this:

[Diagram: sample Node.js application architecture]

This design raises some questions:

How to spawn the processes?

Should I use Node's child_process or cluster module and start the worker processes manually? By the way, is it possible at all to start these manually, for example if I deploy my app to Heroku or Nodejitsu?

Or is there a better way, such as storing this information in a config file?

I mean, it would be better if I could configure the number of server instances with a config entry rather than by editing the code.
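Something like this is what I have in mind - a minimal sketch with Node's built-in cluster module, where the worker count comes from an environment variable instead of the code (the WEB_CONCURRENCY name and the ./server module are just placeholders for illustration):

```js
// master.js - minimal cluster sketch: fork one worker per configured slot.
var cluster = require('cluster');
var os = require('os');

if (cluster.isMaster) {
  // The worker count comes from configuration, not from the code itself.
  var workers = parseInt(process.env.WEB_CONCURRENCY, 10) || os.cpus().length;

  for (var i = 0; i < workers; i++) {
    cluster.fork();
  }

  // Simple self-healing: replace a worker whenever one dies.
  cluster.on('exit', function (worker) {
    console.log('worker ' + worker.process.pid + ' died, restarting');
    cluster.fork();
  });
} else {
  // Each worker runs its own Express/Socket.IO instance.
  require('./server'); // hypothetical module that starts the app
}
```

As far as I know, on platforms like Heroku you normally scale by adding dynos rather than forking manually, so a sketch like this would mostly apply to self-hosted setups.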

System boundaries?

If I spawn the processes manually, then (I guess) all processes would run on the same (virtual) server.

If this server has, let's say, 4 CPU cores, then you can spawn at most 4 Node instances, because spawning more would cause the CPU to spend time context switching, which would hurt overall performance.

What do I have to do if I need more process instances? Let's say I need 100 server instances. Do I have to deploy my app to 25 servers and spawn 4 processes on every server?

It seems to me that hosting services like Nodejitsu somehow hide this system boundary layer from you, but I don't see how it works in practice.

Especially since there is this "shared data" provider component. I guess this provider (like a Redis server) has to run on a separate server so that it is available to all processes. But in that case, couldn't it easily become a bottleneck?

Load balancer?

If I use some hosting service, do I have to set up the load balancer layer myself?
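To make the question concrete, this is roughly the layer I mean - a minimal nginx sketch for a self-hosted setup (ports are made up); ip_hash gives sticky sessions, which Socket.IO's long-polling fallback needs so that a client keeps hitting the same worker:

```nginx
# Minimal self-hosted reverse-proxy sketch (nginx); ports are illustrative.
upstream node_app {
    ip_hash;                 # sticky sessions for Socket.IO long-polling
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

server {
    listen 80;

    location / {
        proxy_pass http://node_app;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # allow WebSocket upgrades
        proxy_set_header Connection "upgrade";
    }
}
```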


Edit:

To answer a few practical questions: as a first step, I want to handle 400-500 concurrent users (Socket.IO connections) seamlessly. That is a number of visitors I can realistically reach.

But I'm curious whether it is possible (and if so, how) to design an application architecture that scales easily. Let's say my website becomes popular from one day to the next, and instead of dealing with a few hundred concurrent users, the next day I have to serve a few thousand.

As far as I know, cloud hosting services like Heroku and Nodejitsu can easily adapt to these scenarios - you just increase the number of workers / dynos / whatever - but that only works if you have the right application architecture.

Regarding the shared data: I don't want to persist it; I just want to keep it in memory. On the one hand, a shared data provider is needed because of Socket.IO - one user should be able to send a message to a user who is connected to another "node". For this I would use Redis as the shared data provider. The number of transactions Redis needs to handle equals the number of messages sent/received with Socket.IO, ~1000-1500 messages/sec.
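To illustrate what I mean, something like this sketch would let a message emitted in one process reach sockets connected to another (this assumes the socket.io-redis adapter package; host and port are illustrative):

```js
// worker.js - sketch: attach the Redis adapter so io.emit() reaches
// sockets that are connected to other worker processes.
var express = require('express');
var http = require('http');
var socketio = require('socket.io');
var redisAdapter = require('socket.io-redis'); // assumed adapter package

var app = express();
var server = http.createServer(app);
var io = socketio(server);

io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

io.on('connection', function (socket) {
  socket.on('chat', function (msg) {
    io.emit('chat', msg); // fanned out cluster-wide via Redis pub/sub
  });
});

server.listen(process.env.PORT || 3000);
```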

On the other hand, a shared data provider is also needed because I want to connect users based on several criteria. Later, background processes would periodically recalculate / refine the probability (the "weight") of those connections. I already have some ideas on how to implement an efficient data structure that handles fast inserts/removes on this in-memory table. So the "shared data provider" component would consist of some server-side code (maybe Node.js) that stores these connections.
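For illustration, one way I could imagine keeping such a weighted connection table in Redis (instead of inside a single Node process) is a sorted set per user; the key names and helper functions below are hypothetical:

```js
// Hypothetical sketch: per-user connection weights in Redis sorted sets,
// readable and updatable from any worker or background process.
var redis = require('redis');
var client = redis.createClient();

// A background job reinforces the edge between two users.
function reinforceConnection(userA, userB, delta, done) {
  client.zincrby('connections:' + userA, delta, userB, done);
}

// Any worker fetches the strongest candidates for matchmaking.
function topMatches(user, n, done) {
  client.zrevrange('connections:' + user, 0, n - 1, 'WITHSCORES', done);
}
```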

I know it's TL;DR but I hope it will answer all your technical questions about the problem. :)

Asked Sep 20 '14 by Zsolt



1 Answer

Okay, this is a lot to go through. First, your separation of concerns is appropriate. You'll need a way for processes to communicate; this can be via a Redis instance or another pub/sub or req/res system (be it Redis, Kue, ZMQ, etc.). NOTE: you will likely still need to shard your data/message usage if you grow significantly, at least as much as possible. You can alleviate this by using a more complex message queue system (RabbitMQ, or another AMQP broker).
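As a sketch, plain Redis pub/sub between two processes could look like this (the channel name is made up; note that a subscribed connection can't issue other commands, hence the two clients):

```js
// Sketch: inter-process messaging over plain Redis pub/sub.
var redis = require('redis');

var sub = redis.createClient(); // dedicated connection for subscribing
var pub = redis.createClient(); // separate connection for publishing

sub.subscribe('game-events');
sub.on('message', function (channel, message) {
  console.log('received on ' + channel + ': ' + message);
});

pub.publish('game-events', JSON.stringify({ type: 'join', user: 'alice' }));
```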

It seems your main concern is process management. In general, if you're using Heroku, you should be able to scale a single process per node, but you'll still need your coordinator node(s) outside of it. If you are self-hosting (not via Heroku or similar), then you should look at pm2 or forever, which let you bring up multiple instances.
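For example, a pm2-based sketch of bringing up several instances and scaling later (assuming your entry file is named server.js):

```
pm2 start server.js -i 4   # run four instances in cluster mode
pm2 scale server 8         # later, scale out without touching the code
```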

For the most part, your logistics/infrastructure issues will vary based on your needs, not to mention newer strategies involving CI/CD, Docker, and others, or your database use.

Answered Oct 07 '22 by Tracker1