I have looked around online and all I seem to find are answers to the question "how does Node benefit from running on a multi-core CPU?"
But if you have a machine with just one core, only one process can be executing at any given instant (I am taking task scheduling into account here). And Node uses a single-threaded model.
My question: is there any scenario in which it makes sense to run multiple Node processes on one core? And if the process is a web server that listens on a port, how can this ever work given that only one process can listen?
If you have 4 cores, you can have 4 (Node or other) processes executing simultaneously, one on each core. If you have more Node instances than cores, the Node processes will contend for CPU time, with the OS handling the scheduling automatically as it would for any other processes.
Node.js uses JavaScript to build server-side applications and inherits the language's single-threaded execution model: your code runs on one CPU core regardless of how many cores your machine, or a virtual machine in the cloud, has.
Node.js runs JavaScript code in a single thread, which means your code can only do one task at a time. However, Node.js itself is multithreaded: it provides hidden threads through the libuv library, which handles I/O operations such as reading files from disk or making network requests.
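As a minimal sketch of that behavior, the single JavaScript thread below hands a file read off to libuv and keeps running; the callback fires later, back on that same thread:

```js
// Minimal sketch: the JS thread dispatches the read to libuv's thread pool
// and continues running while the read happens in the background.
const fs = require('fs');

fs.readFile(__filename, 'utf8', (err, data) => {
  // This callback runs later, back on the single JS thread.
  if (err) throw err;
  console.log(`read ${data.length} characters`);
});

console.log('readFile dispatched; the JS thread is free to do other work');
```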
The cluster module runs the same Node.js program as multiple processes. Therefore, the first thing you need to do is identify which portion of the code is for the primary (master) process and which portion is for the workers, as in the sketch below.
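A minimal sketch of that split might look like this (the two fork() calls are an arbitrary choice for illustration):

```js
// Split the code between the primary (master) process and the workers.
const cluster = require('cluster');

if (cluster.isPrimary) {          // cluster.isMaster on Node versions before 16
  // Primary-only portion: fork the workers.
  cluster.fork();
  cluster.fork();
} else {
  // Worker-only portion: each forked process runs this branch.
  console.log(`worker ${process.pid} started`);
}
```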
My question: is there any scenario in which it makes sense to run multiple Node processes on one core?
Yes, there are some scenarios. See details below.
And if the process is a web server that listens on a port, how can this ever work given that only one process can listen?
The Node.js cluster module arranges things so that there is one listener on the desired port, while incoming requests are shared among all the clustered processes. More details to follow...
You can use the cluster module to run multiple processes that are all configured to handle incoming requests on the same port. If you want to know how incoming requests are shared among the different clustered processes, you can read the explanation in this blog post. In a nutshell, there ends up being only one listener on the desired port, and the incoming connections are distributed among the various clustered processes.
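As a rough sketch of what that looks like, every worker calls listen() on the same port (3000 here, and one worker per core, are arbitrary choices for illustration), while the primary holds the single real listener and distributes the connections:

```js
// Clustered HTTP servers sharing one port.
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary) {
  // The primary ends up holding the single real listener and distributes
  // incoming connections to the workers (round-robin on most platforms).
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
} else {
  // Every worker calls listen(3000), but they share the primary's listener
  // rather than each binding the port separately.
  http.createServer((req, res) => {
    res.end(`handled by worker ${process.pid}\n`);
  }).listen(3000);
}
```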
As to whether you could benefit from more processes than you have cores, the answer is that it depends on what type of benefit you are looking for. If you have a properly written server with all async I/O, then adding more processes than cores will likely not improve your overall throughput (as measured by requests per second that your server could process).
But if you have any CPU-heavy processing in your requests, having a few more processes may provide slightly fairer scheduling between simultaneous requests, because the OS will "share" the CPU among all of your processes. This will likely decrease overall throughput slightly (because of the added overhead of switching the CPU between processes), but it may make request handling more even when multiple requests arrive together.
If your requests don't have much CPU-heavy processing and are really just waiting for I/O most of the time, then there's probably no benefit to adding more processes than cores.
So, it really depends on what you want to optimize for and what your situation is.