I have a Node.js (Express) server and I am spreading requests across multiple processor cores using the cluster module example from the Node.js site:
var cluster = require('cluster');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
    // fork one worker per CPU core
    for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
    }

    // replace any worker that dies
    cluster.on('exit', function(worker, code, signal) {
        console.log('worker ' + worker.process.pid + ' died');
        cluster.fork();
    });
} else {
    server.listen(app.get('port'), function() {
        console.log('HTTP server on port ' + app.get('port') + ' - running as ' + app.settings.env);
    });

    // set up socket.io communication
    io.sockets.on('connection', require('./app/sockets'));
    io.sockets.on('connection', require('./app/downloadSockets'));
}
The problem is that the siege benchmark shows no increase in the number of hits. This is the output of siege:
$ siege -c100 192.168.111.1:42424 -t10S
** SIEGE 3.0.5
** Preparing 100 concurrent users for battle.
The server is now under siege...
Lifting the server siege... done.
Transactions: 1892 hits
Availability: 100.00 %
Elapsed time: 10.01 secs
Data transferred: 9.36 MB
Response time: 0.01 secs
Transaction rate: 189.01 trans/sec
Throughput: 0.93 MB/sec
Concurrency: 1.58
Successful transactions: 1892
Failed transactions: 0
Longest transaction: 0.05
Shortest transaction: 0.00
After clustering:
$ siege -c100 192.168.111.1:42424 -t10S
** SIEGE 3.0.5
** Preparing 100 concurrent users for battle.
The server is now under siege...
Lifting the server siege... done.
Transactions: 1884 hits
Availability: 100.00 %
Elapsed time: 9.52 secs
Data transferred: 9.32 MB
Response time: 0.01 secs
Transaction rate: 197.90 trans/sec
Throughput: 0.98 MB/sec
Concurrency: 1.72
Successful transactions: 1884
Failed transactions: 0
Longest transaction: 0.07
Shortest transaction: 0.00
Does that mean my server is already at maximum throughput with a single process, perhaps because it is a local machine, or is it unable to use all 4 cores because there are too many other processes running? I am not sure.
How do I use the cluster module to increase throughput, and why is my current code not succeeding? I did check that it creates 4 instances of the server, i.e. cluster.fork() works. Any tips would be very useful.
The effect of clustering only shows up under higher concurrency (try increasing the number of concurrent users to 300-400) or with tasks that put a serious load on the server. Let's run a more interesting test: download a file of about 1 MB, and add a delay of 5 msec and 50 msec to emulate expensive operations. On a four-core processor, local testing looks as follows (for normal and cluster mode respectively):
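A test server along these lines could reproduce this; it is only a sketch, and the busy-wait helper, image path and port are assumptions made for illustration. The point is that the delay burns CPU inside the event loop, which is exactly the kind of work the cluster module spreads across cores:
var express = require('express');
var cluster = require('cluster');
var fs = require('fs');
var numCPUs = require('os').cpus().length;

var DELAY_MS = 5; // change to 50 for the second run

// burn CPU for the given number of milliseconds to emulate a heavy synchronous task
function blockFor(ms) {
    var end = Date.now() + ms;
    while (Date.now() < end) {}
}

var app = express();

app.get('/images/image.jpg', function(req, res) {
    blockFor(DELAY_MS); // blocks this worker's event loop
    fs.readFile(__dirname + '/public/images/image.jpg', function(err, data) {
        if (err) return res.status(500).end();
        res.set('Content-Type', 'image/jpeg');
        res.send(data);
    });
});

if (cluster.isMaster) {
    for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
    }
} else {
    app.listen(80); // "normal mode" is the same app without the cluster wrapper
}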
$ siege -c100 http://localhost/images/image.jpg -t10S
Normal mode (5 msec delay):
Lifting the server siege... done.
Transactions: 1170 hits
Availability: 100.00 %
Elapsed time: 9.10 secs
Data transferred: 800.79 MB
Response time: 0.27 secs
Transaction rate: 128.57 trans/sec
Throughput: 88.00 MB/sec
Concurrency: 34.84
Successful transactions: 1170
Failed transactions: 0
Longest transaction: 0.95
Shortest transaction: 0.01
Cluster mode (5 msec delay):
Lifting the server siege... done.
Transactions: 1596 hits
Availability: 100.00 %
Elapsed time: 9.04 secs
Data transferred: 1092.36 MB
Response time: 0.06 secs
Transaction rate: 176.55 trans/sec
Throughput: 120.84 MB/sec
Concurrency: 9.81
Successful transactions: 1596
Failed transactions: 0
Longest transaction: 0.33
Shortest transaction: 0.00
Normal mode (50 msec delay):
Lifting the server siege... done.
Transactions: 100 hits
Availability: 100.00 %
Elapsed time: 9.63 secs
Data transferred: 68.44 MB
Response time: 5.51 secs
Transaction rate: 10.38 trans/sec
Throughput: 7.11 MB/sec
Concurrency: 57.18
Successful transactions: 100
Failed transactions: 0
Longest transaction: 7.77
Shortest transaction: 5.14
Cluster mode (50 msec delay):
Lifting the server siege... done.
Transactions: 614 hits
Availability: 100.00 %
Elapsed time: 9.24 secs
Data transferred: 420.25 MB
Response time: 0.90 secs
Transaction rate: 66.45 trans/sec
Throughput: 45.48 MB/sec
Concurrency: 59.59
Successful transactions: 614
Failed transactions: 0
Longest transaction: 1.50
Shortest transaction: 0.50
You aren't really doing anything in your example. Connect to MySQL and run a heavy query, or make an HTTP request that takes a few seconds. You'll notice that eventually you will write some code (or use a 3rd-party library) that blocks the event loop. This is when clustering becomes important, since you'll essentially have one event loop per processor core. If one query is slow and an event loop has to wait for it, it won't stop new requests that are hitting your API/application.
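As a concrete (made-up) illustration, a route that does synchronous work like the one below ties up its worker's event loop; without cluster every request queues behind it, while with cluster only that one worker is affected:
var crypto = require('crypto');

app.get('/report', function(req, res) {
    // synchronous key derivation blocks this worker's event loop for a noticeable time
    var key = crypto.pbkdf2Sync('secret', 'salt', 100000, 64, 'sha512');
    res.send(key.toString('hex'));
});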
Also, you may want to read up on connection pooling, specifically generic-pool on npm, if you are planning on connecting to a database or fetching external resources.
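A minimal sketch with generic-pool's callback-style (v2) API and the mysql driver could look like this; the connection settings, pool sizes and query are placeholders, and each worker process would hold its own pool:
var poolModule = require('generic-pool');
var mysql = require('mysql');

var pool = poolModule.Pool({
    name: 'mysql',
    create: function(callback) {
        var connection = mysql.createConnection({ host: 'localhost', user: 'app', database: 'test' });
        connection.connect(function(err) {
            callback(err, connection);
        });
    },
    destroy: function(connection) {
        connection.end();
    },
    max: 10,                 // at most 10 open connections per worker
    idleTimeoutMillis: 30000
});

app.get('/users', function(req, res) {
    pool.acquire(function(err, connection) {
        if (err) return res.status(500).send(err.message);
        connection.query('SELECT * FROM users', function(err, rows) {
            pool.release(connection); // always hand the connection back to the pool
            if (err) return res.status(500).send(err.message);
            res.json(rows);
        });
    });
});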