I have a real-time application with clients that use websockets to connect to a Spring Framework server running on Spring Boot's embedded Tomcat. I want the server to detect quickly (within 5 seconds) when a client stops responding due to a network disconnect or other issue, and to close the websocket.
I have tried:
1. Setting the max session idle timeout as described under "Configuring the WebSocket Engine" in the Spring reference documentation (http://docs.spring.io/spring/docs/current/spring-framework-reference/html/websocket.html):
import org.springframework.context.annotation.Bean;
import org.springframework.web.socket.WebSocketHandler;
import org.springframework.web.socket.handler.PerConnectionWebSocketHandler;
import org.springframework.web.socket.server.standard.ServletServerContainerFactoryBean;

@Bean
public WebSocketHandler clientHandler() {
    return new PerConnectionWebSocketHandler(ClientHandler.class);
}

@Bean
public ServletServerContainerFactoryBean createWebSocketContainer() {
    ServletServerContainerFactoryBean container =
            new ServletServerContainerFactoryBean();
    // Both setters take Long, so the literals need the L suffix to compile.
    // Close sessions with no read/write activity for more than 5 seconds:
    container.setMaxSessionIdleTimeout(5000L);
    container.setAsyncSendTimeout(5000L);
    return container;
}
I am not sure this is implemented correctly, because I do not see the link between the ServletServerContainerFactoryBean and my generation of ClientHandlers (a sketch of my registration code appears after this list).
2. Sending ping messages from the server every 2.5 seconds. After I manually disconnect the client by breaking the network connection, the server happily keeps sending pings for another 30+ seconds until a transport error appears.
3. Doing 1 and 2 simultaneously.
4. Doing 1 and 2 while also setting server.session-timeout = 5 in application.properties.
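For completeness, a sketch of how the handler gets registered (the WebSocketConfig class name and the /client path are placeholders for my actual code). As far as I understand, the ServletServerContainerFactoryBean configures the underlying JSR-356 container globally rather than per handler, so perhaps there is no explicit link to see:

import org.springframework.context.annotation.Configuration;
import org.springframework.web.socket.config.annotation.EnableWebSocket;
import org.springframework.web.socket.config.annotation.WebSocketConfigurer;
import org.springframework.web.socket.config.annotation.WebSocketHandlerRegistry;

@Configuration
@EnableWebSocket
public class WebSocketConfig implements WebSocketConfigurer {

    @Override
    public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
        // Maps the per-connection handler to an endpoint; the container
        // settings above apply to all endpoints in this servlet container.
        registry.addHandler(clientHandler(), "/client");
    }

    // clientHandler() and createWebSocketContainer() beans as shown above
}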
My methodology for testing this is to connect a client, break its network connection, and then measure how long the server keeps the websocket open before a transport error appears or the session is closed.
How can a Spring Framework/Tomcat server quickly detect that a client has been disconnected or has stopped responding, so that it can close the websocket?
Note that the connection between a client and your WebSocket app also closes when no traffic is sent between them for 60 seconds.
When the user presses refresh, the browser closes down all resources associated with the original page, including the WebSocket connection.
A WebSocket times out if no read or write activity occurs and no Ping messages are received within the configured timeout period. The container enforces a 30-second timeout period as the default.
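If you control the endpoint directly, the underlying JSR-356 session also lets you override that container default per connection via setMaxIdleTimeout. A minimal sketch using a plain javax.websocket endpoint (the /raw path and class name are illustrative, not from the original post):

import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/raw")
public class RawEndpoint {

    @OnOpen
    public void onOpen(Session session) {
        // Replace the container-wide default (the 30 s mentioned above) for
        // this session only: close after 5 s with no read/write activity and
        // no Ping messages.
        session.setMaxIdleTimeout(5000);
    }
}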
The approach I eventually took was to implement an application-layer ping-pong protocol. The server sends a ping message every p seconds to the client. If the server sends n ping messages without receiving a pong response, it generates a timeout event, so a dead client is detected within roughly n*p time. There should be a much simpler way of implementing this using timeouts in the underlying TCP connection.
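A minimal sketch of that protocol on top of Spring's TextWebSocketHandler, using a ScheduledExecutorService; the 2-second period (p), the threshold of 2 missed pongs (n), and the PingPongHandler name are illustrative choices, not fixed by the approach:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

import org.springframework.web.socket.CloseStatus;
import org.springframework.web.socket.PingMessage;
import org.springframework.web.socket.PongMessage;
import org.springframework.web.socket.WebSocketSession;
import org.springframework.web.socket.handler.TextWebSocketHandler;

public class PingPongHandler extends TextWebSocketHandler {

    private static final long PING_PERIOD_SECONDS = 2; // p
    private static final int MAX_MISSED_PONGS = 2;     // n

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final Map<String, ScheduledFuture<?>> pingTasks = new ConcurrentHashMap<>();
    private final Map<String, AtomicInteger> missedPongs = new ConcurrentHashMap<>();

    @Override
    public void afterConnectionEstablished(WebSocketSession session) {
        missedPongs.put(session.getId(), new AtomicInteger());
        pingTasks.put(session.getId(), scheduler.scheduleAtFixedRate(() -> {
            try {
                AtomicInteger missed = missedPongs.get(session.getId());
                if (missed == null) {
                    return; // session already closed and cleaned up
                }
                if (missed.incrementAndGet() > MAX_MISSED_PONGS) {
                    // n pings went unanswered: treat the client as gone
                    session.close(CloseStatus.SESSION_NOT_RELIABLE);
                } else {
                    session.sendMessage(new PingMessage());
                }
            } catch (Exception e) {
                // The transport is already broken; the close callback will fire.
            }
        }, PING_PERIOD_SECONDS, PING_PERIOD_SECONDS, TimeUnit.SECONDS));
    }

    @Override
    protected void handlePongMessage(WebSocketSession session, PongMessage message) {
        AtomicInteger missed = missedPongs.get(session.getId());
        if (missed != null) {
            missed.set(0); // client answered: reset the counter
        }
    }

    @Override
    public void afterConnectionClosed(WebSocketSession session, CloseStatus status) {
        ScheduledFuture<?> task = pingTasks.remove(session.getId());
        if (task != null) {
            task.cancel(false);
        }
        missedPongs.remove(session.getId());
    }
}

Since the question wires handlers through PerConnectionWebSocketHandler, each connection gets its own handler instance, so the per-session maps could be plain fields; they are kept here so the sketch also works as a single shared handler.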