Initially we have two AWS EC2 instances running Node.js behind a load balancer with sticky sessions. As the load increases, more instances are added.
But we are facing problems with this approach. Since our application is mainly used for workshops, the load usually spikes within a short period of time (at workshop start). Every workshop participant then has a sticky session with one of the first two instances, while the newly added instances receive almost none, so performance stays bad.
Our first thought was: let's disable sticky sessions. But that breaks our WebSockets, because they need sticky sessions (at least that is what I've read). Another problem arises with decreasing load: instances shut down and their socket connections are lost.
Is there an approach to shift user sessions between instances, or to get WebSockets working without sticky sessions (maybe with Redis)?
The solution was an Application Load Balancer (see comment).
At first we had to disable polling, because it did not work with the rest of the setup. This is done by specifying the transports manually:
let ioSocket = io('', {
  path: '/socket.io-client',
  transports: ['websocket']
});
After that we set up a standard Application Load Balancer with two target groups: one for WebSockets and one for all other requests. The rule for the WebSocket target group matches a specific path via regex.
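The original rule is not shown here, but as an illustration only (the listener ARN, target group ARN, priority, and path pattern are placeholders inferred from the client's `path: '/socket.io-client'` setting, not taken from our actual setup), such a rule could be created with the AWS CLI like this:

```shell
# Illustrative sketch: replace the ... placeholders with your real ARNs.
# The path pattern mirrors the socket.io client path configured above.
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:...:listener/app/my-alb/... \
  --priority 10 \
  --conditions Field=path-pattern,Values='/socket.io-client/*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/websockets/...
```

All other requests fall through to the default rule, which forwards to the second target group.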
The last problem was scaling: if one of the instances shuts down because of lower load on the cluster, its connections may get lost. This was fixed with a simple reconnect after a disconnect in the client (in our case an Angular application):
[...]
this.socket.on('disconnect', () => {
// Reconnect after connection loss
this.connect();
});
[...]
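An immediate `this.connect()` on every disconnect can hammer the server while an instance is still draining. One possible refinement (a sketch only; `reconnectWithBackoff` and its parameters are illustrative names, not part of the original answer) is to retry with exponential backoff:

```javascript
// Sketch of a generic reconnect helper with exponential backoff.
// `connect` is an async function that resolves on success and rejects
// on failure (e.g. a wrapper around establishing the socket connection).
async function reconnectWithBackoff(connect, maxRetries = 5, baseDelayMs = 500) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await connect();
    } catch (err) {
      if (attempt === maxRetries - 1) throw err; // give up after the last attempt
      // Exponential backoff: 500 ms, 1000 ms, 2000 ms, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Note that socket.io-client also ships built-in reconnection logic (enabled by default), so a manual handler like the one above is mainly useful when the default behavior does not cover your disconnect reason.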