I have a Node server running on an EC2 instance, and the client runs on the same instance. The client opens a WebSocket connection to communicate with the Node server. This works in our QA and Dev AWS environments, but in the prod environment the same WebSocket connection is closed after 60 seconds of being idle. Both the client and the Node server run behind an ELB in the AWS environment.
Client Code:
ws = new WebSocket('ws://localhost:8443');

ws.onclose = function () {
    console.log("Websocket connection has been closed.");
    clientObj.emit('LogoffSuccess', 'LogoffSuccessfully');
};

ws.onerror = function (event) {
    // WebSocket error events carry no data payload; log the event itself.
    console.log(event);
};

ws.addEventListener('open', function (event) {
    console.log('Websocket connection has been opened');
    ws.send(JSON.stringify(loginCreds));
});
Node server Code below:
const wss = new WebSocket.Server({ server: app });
const clients = {};
const idMap = {};

wss.on(`connection`, ws => {
    const headers = ws.upgradeReq.headers;
    const host = headers.host;
    const key = ws.upgradeReq.headers[`sec-websocket-key`];

    ctiServer.on(`responseMessage`, message => {
        clients[message.AgentId].send(JSON.stringify(message));
    });

    ws.on(`message`, message => {
        log.info(`Message received. Host: ${host}, Msg: ${message}`);
        if (JSON.parse(message).EventName === `Login`) {
            clients[JSON.parse(message).AgentId] = ws;
            idMap[key] = JSON.parse(message).AgentId;
        }
        ctiServer.processIncomingRequest(message);
    });

    ws.on(`close`, () => {
        log.info(`Connection closed. Host: ${host}`);
        const message = {
            EventName: `Logoff`,
            AgentId: idMap[key],
            EventData: {}
        };
    });
});
ELB with the TCP protocol supports WebSockets; we have successfully implemented this for our customers. If you have an SSL certificate, deploy it at the app server level.
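As a rough sketch of what deploying the certificate at the app server level can look like with Node's https module and the ws library (the certificate paths, port, and Express app here are assumptions for illustration, not taken from the question):

// Sketch: terminate TLS on the instance itself and attach the WebSocket
// server to the HTTPS server; the ELB TCP listener just forwards the bytes.
const fs = require('fs');
const https = require('https');
const express = require('express');
const WebSocket = require('ws');

const app = express();

// Placeholder paths -- point these at your own certificate and key.
const server = https.createServer({
    cert: fs.readFileSync('/etc/ssl/certs/my-app.crt'),
    key: fs.readFileSync('/etc/ssl/private/my-app.key')
}, app);

const wss = new WebSocket.Server({ server });

server.listen(8443);

The client would then connect with wss:// instead of ws://.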
A "WebSocket is closed before the connection is established" error message indicates that some client code, or some other mechanism, has closed the WebSocket connection before it was fully established.
Elastic Load Balancing (ELB) now offers support for configurable idle timeouts. This setting lets you specify how long a connection should remain open while idle; previously, ELB applied a fixed default idle timeout of 60 seconds to all load balancers.
AWS Application Load Balancers support content-based routing and applications that run in containers. They support a pair of industry-standard protocols (WebSocket and HTTP/2) and also provide additional visibility into the health of the target instances and containers.
By default, Elastic Load Balancing sets the idle timeout value to 60 seconds. Therefore, if the target doesn't send some data at least every 60 seconds while the request is in flight, the load balancer can close the front-end connection. To ensure that lengthy operations such as file uploads have time to complete, send at least 1 byte of data before each idle timeout period elapses, and increase the length of the idle timeout period as needed.
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/application-load-balancers.html#connection-idle-timeout
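If keepalive traffic alone is not enough, the idle timeout itself can be raised on the load balancer. A minimal sketch using the AWS SDK for JavaScript v3 ELBv2 client (the region, load balancer ARN, and 300-second value are placeholders; the same attribute can also be changed in the console or with the AWS CLI):

// Sketch: raise the ALB idle timeout so idle WebSocket connections
// survive longer than the default 60 seconds.
const {
    ElasticLoadBalancingV2Client,
    ModifyLoadBalancerAttributesCommand
} = require('@aws-sdk/client-elastic-load-balancing-v2');

const elbClient = new ElasticLoadBalancingV2Client({ region: 'us-east-1' }); // placeholder region

async function raiseIdleTimeout() {
    await elbClient.send(new ModifyLoadBalancerAttributesCommand({
        LoadBalancerArn: 'arn:aws:elasticloadbalancing:...:loadbalancer/app/my-alb/abc123', // placeholder ARN
        Attributes: [
            { Key: 'idle_timeout.timeout_seconds', Value: '300' } // value is a string, in seconds
        ]
    }));
}

raiseIdleTimeout().catch(console.error);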
Note that your interests are best served by periodically sending traffic to keep the connection alive. You can set the idle timeout to up to 4000 seconds in an Application Load Balancer, but you will find that stateful intermediate network infrastructure (firewalls, NAT devices) tends to reset connections before they are actually idle for so long.
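One way to generate that periodic traffic is an application-level heartbeat from the browser client, reusing the ws object from the client code above. A minimal sketch (the 30-second interval and the 'Ping' event name are arbitrary choices, not part of the question's protocol, and the server would need to ignore or answer this event in its message handler):

// Sketch: send a small keepalive message well inside the 60-second
// idle timeout so the load balancer never sees the connection as idle.
const KEEPALIVE_INTERVAL_MS = 30 * 1000;

const keepAliveTimer = setInterval(function () {
    if (ws.readyState === WebSocket.OPEN) {
        ws.send(JSON.stringify({ EventName: 'Ping', EventData: {} }));
    }
}, KEEPALIVE_INTERVAL_MS);

ws.addEventListener('close', function () {
    clearInterval(keepAliveTimer);
});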
PING!
Write a ping implementation (or a periodic no-op/keepalive message); otherwise the AWS proxy in front of your server (the ELB in your case, or possibly an nginx proxy) will shut the connection down after a period of inactivity: 60 seconds in your case, though the exact value differs between systems.
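A minimal sketch of such a ping loop on the Node side, using the ws library's built-in ping/pong frames and the wss server from the code above (the 30-second interval is an assumption; anything comfortably below the load balancer's idle timeout works):

// Sketch: ping every connected client periodically; terminate clients
// that never answered the previous ping with a pong.
const PING_INTERVAL_MS = 30 * 1000;

wss.on('connection', ws => {
    ws.isAlive = true;
    ws.on('pong', () => {
        ws.isAlive = true;
    });
});

const pingTimer = setInterval(() => {
    wss.clients.forEach(ws => {
        if (ws.isAlive === false) {
            return ws.terminate(); // no pong since the last ping
        }
        ws.isAlive = false;
        ws.ping();
    });
}, PING_INTERVAL_MS);

wss.on('close', () => {
    clearInterval(pingTimer);
});

Because ping/pong frames are part of the WebSocket protocol itself, browsers answer them automatically, so this variant needs no client-side changes.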