So I am using node.js and socket.io. I have this little program that takes the contents of a text box and sends it to the node.js server. Then, the server relays it back to other connected clients. Kind of like a chat service but not exactly.
Anyway, what if the user were to type 2-10k worth of text and try to send that? I know I could just try it out and see for myself, but I'm looking for a practical, best-practice limit on how much data I can send through an emit.
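The server side is roughly this shape (a minimal sketch, assuming Socket.IO v3+; the "textUpdate" event name is just a placeholder):

```js
// server.js – minimal relay sketch ("textUpdate" is a placeholder event name)
const http = require("http");
const { Server } = require("socket.io");

const httpServer = http.createServer();
const io = new Server(httpServer);

io.on("connection", (socket) => {
  // Whatever one client sends, re-emit to every other connected client
  socket.on("textUpdate", (text) => {
    socket.broadcast.emit("textUpdate", text);
  });
});

httpServer.listen(3000);
```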
As of v3, socket.io has a default message limit of 1 MB. If a message is larger than that, the connection will be killed.
You can change this default by specifying the maxHttpBufferSize option when creating the server, as in the sketch below.
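A minimal sketch, assuming Socket.IO v3+ on a plain Node http server (the 5 MB value is purely illustrative):

```js
const http = require("http");
const { Server } = require("socket.io");

const httpServer = http.createServer();
const io = new Server(httpServer, {
  // Cap on the size of a single message, in bytes.
  // The default is 1e6 (1 MB) as of v3; 5e6 here is just an example value.
  maxHttpBufferSize: 5e6
});
httpServer.listen(3000);
```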
Before raising it, though, consider the following (which was originally written over a decade ago, but is still relevant):

Node and socket.io don't have any built-in limits. What you do have to worry about is the relationship between the size of the message, the number of messages being sent per second, the number of connected clients, and the bandwidth available to your server – in other words, there's no easy answer.
Let's consider a 10 kB message. When there are 10 clients connected, that amounts to 100 kB of data that your server has to push out, which is entirely reasonable. Add in more clients, and things quickly become more demanding: 10 kB * 5,000 clients = 50 MB.
Of course, you'll also have to consider the amount of protocol overhead: per packet, TCP adds ~20 bytes, IP adds 20 bytes, and Ethernet adds 14 bytes, totaling 54 bytes. Assuming an MTU of 1500 bytes (about 1460 bytes of payload per packet after the TCP and IP headers), you're looking at 8 packets per client (disregarding retransmits). This means you'll send 8*54 = 432 bytes of overhead + 10 kB payload = 10,672 bytes per client over the wire.
10.4 kB * 5000 clients = 50.8 MB.
On a 100 Mbps link, you're looking at a theoretical minimum of 4.3 seconds to deliver a 10 kB message to 5,000 clients if you're able to saturate the link. Of course, in the real world of dropped packets and corrupted data requiring retransmits, it will take longer.
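If you want to play with the numbers yourself, the back-of-the-envelope math above boils down to something like this (a sketch; adjust the constants for your own payload size, client count, and link speed):

```js
// Rough estimate of wire bytes and transfer time for one broadcast
const payloadBytes = 10 * 1024;                 // 10 kB message
const mss = 1500 - 20 - 20;                     // MTU minus TCP and IP headers = 1460 bytes of payload per packet
const packets = Math.ceil(payloadBytes / mss);  // 8 packets
const overhead = packets * (20 + 20 + 14);      // TCP + IP + Ethernet headers = 432 bytes
const perClient = payloadBytes + overhead;      // 10,672 bytes per client on the wire
const clients = 5000;
const totalBytes = perClient * clients;         // ~53.4 million bytes (the ~50 MB figure above)
const seconds = (totalBytes * 8) / 100e6;       // ~4.3 s over a saturated 100 Mbps link
console.log({ packets, perClient, totalBytes, seconds });
```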
Even with a very conservative estimate of 8 seconds to send 10 kB to 5,000 clients, that's probably fine in a chat room where a message comes in every 10-20 seconds.
So really, it comes down to a few questions, in order of importance:

- How much bandwidth is available to your server?
- How many clients will be connected at the same time?
- How many messages per second will they be sending, and how large is each one?
With those questions answered, you can determine the maximum size of a message that your infrastructure will support.
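Once you've settled on a number, it can also be worth enforcing it at the application level rather than relying only on maxHttpBufferSize (which closes the offending connection outright). A sketch, with an illustrative 64 kB cap and placeholder event names:

```js
const http = require("http");
const { Server } = require("socket.io");

const MAX_MESSAGE_BYTES = 64 * 1024; // illustrative application-level cap

const httpServer = http.createServer();
const io = new Server(httpServer); // transport-level limit stays at the 1 MB default

io.on("connection", (socket) => {
  socket.on("textUpdate", (text) => {
    // Reject oversized (or non-string) payloads gracefully instead of letting
    // the transport-level limit close the connection.
    if (typeof text !== "string" || Buffer.byteLength(text, "utf8") > MAX_MESSAGE_BYTES) {
      socket.emit("messageRejected", "message too large");
      return;
    }
    socket.broadcast.emit("textUpdate", text);
  });
});

httpServer.listen(3000);
```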