Socket.io: How to limit the size of emitted data from client to the websocket server

I have a node.js server with socket.io. My clients use socket.io to connect to the node.js server.

Data is transmitted from clients to server in the following way:

On the client

var Data = {'data1':'somedata1', 'data2':'somedata2'};
socket.emit('SendToServer', Data);

On the server

socket.on('SendToServer', function(Data) {
    for (var key in Data) {
           // Do some work with Data[key]
    }
});

Suppose that somebody modifies his client and emits to the server a really big chunk of data. For example:

var Data = {'data1':'somedata1', 'data2':'somedata2', ...and so on until he reaches, say, 'data100000':'somedata100000'};
socket.emit('SendToServer', Data);

Because of this loop on the server...

for (var key in Data) {
       // Do some work with Data[key]
}

... the server would take a very long time to loop through all this data.

So, what is the best solution to prevent such scenarios?

Thanks

EDIT:

I used this function to validate the object:

function ValidateObject(obj) {
    var i = 0;
    for (var key in obj) {
        i++;
        if (i > 10) { // object has too many keys
            return false;
        }
    }
    return true; // object is within the limit
}
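
For reference, here is how that validator might be wired into the handler (a minimal sketch reusing the question's event name):

socket.on('SendToServer', function(Data) {
    if (!ValidateObject(Data)) return; // drop oversized payloads
    for (var key in Data) {
        // Do some work with Data[key]
    }
});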
asked Feb 11 '14 by user3179196


2 Answers

So the easiest thing to do is just check the size of the data before doing anything with it.

socket.on('someevent', function (data) {
    var json = JSON.stringify(data);
    if (json.length > 10000) // roughly 10 KB
        return;

    console.log('valid data: ' + json);
});

To be honest, this is a little inefficient: your client sends the message, socket.io parses it into an object, and then you serialize it straight back into a string just to measure it.

If you want to be even more efficient, then on the client side you should enforce a maximum message length before emitting, as in the sketch below.
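
A minimal sketch of that client-side guard (the safeEmit helper and the 10 KB cap are illustrative, not part of socket.io):

function safeEmit(socket, event, payload) {
    var json = JSON.stringify(payload);
    if (json.length > 10000) { // keep in sync with the server-side cap
        console.warn('payload too large, refusing to emit');
        return;
    }
    socket.emit(event, payload);
}

safeEmit(socket, 'SendToServer', Data);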

For even more efficiency (and to protect against malicious users), you should discard packets as they come into Socket.io once their length gets too long. You'll either need to figure out a way to extend the prototypes to do what you want, or you'll need to pull the source and modify it yourself. Also, I haven't looked into the socket.io protocol, but I'm sure you'll have to do more than just "discard" the packet. Some packets are ack-backs and nack-backs, so you don't want to mess with those either.
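
For what it's worth, newer Socket.io releases expose this idea as a server option: maxHttpBufferSize (an engine.io setting) makes the server close connections whose packets exceed the given size. A minimal sketch, assuming a version that supports the option:

var io = require('socket.io')(3000, {
    maxHttpBufferSize: 1e4 // reject packets larger than ~10 KB
});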


Side note: If you ONLY care about the number of keys then you can use Object.keys(obj) which returns an array of keys:

if (Object.keys(obj).length > 10)
    return;
answered Sep 22 '22 by Randy


You might consider switching to socket.io-stream and handling the input stream directly.

This way you have to join the chunks and parse the JSON input manually, but you get the chance to close the connection as soon as the incoming data exceeds whatever threshold you decide on, as sketched below.
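
A rough sketch of that approach, assuming socket.io-stream's usual event API (the event name and the 10 KB threshold are illustrative):

var ss = require('socket.io-stream');

ss(socket).on('SendToServer', function(stream) {
    var chunks = [];
    var received = 0;
    var tooBig = false;
    stream.on('data', function(chunk) {
        if (tooBig) return;
        received += chunk.length;
        if (received > 10000) {      // threshold exceeded:
            tooBig = true;
            socket.disconnect(true); // close the connection early
            return;
        }
        chunks.push(chunk);
    });
    stream.on('end', function() {
        if (tooBig) return;
        var Data = JSON.parse(Buffer.concat(chunks).toString());
        // Do some work with Data
    });
});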

Otherwise (staying with the plain socket.io approach) your callback won't be called until the whole message has been received. This doesn't block your JS main thread, but it wastes memory, CPU, and bandwidth.

On the other hand, if your only goal is to avoid overloading your processing algorithm, you can keep limiting it by counting the elements in the received object. For instance:

if (Object.keys(data).length > n) return; // Where n is your maximum acceptable number of elements.
// But, anyway, this doesn't control the actual size of each element.
answered Sep 22 '22 by bitifet