I'm currently developing a Java WebSocket client application, and I have to make sure that every message from the server is received by the client. Is it possible that I lose some messages (once they are sent from the server) due to a connection interruption? WebSocket is based on TCP, so this shouldn't happen, right?
This message is sent once after the user connects to the WebSocket. It indicates that any message or event on a channel that arrives after the user receives the SESSION_ESTABLISHED message is guaranteed to be delivered to the user as long as the WebSocket stays open.
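In practice that means the client should treat SESSION_ESTABLISHED as the starting point of the guarantee. Here is a minimal sketch using the JSR 356 (javax.websocket) client API, assuming the server sends the literal token SESSION_ESTABLISHED as a text message (the class and handler names are hypothetical):

```java
import javax.websocket.ClientEndpoint;
import javax.websocket.OnMessage;
import java.util.concurrent.CountDownLatch;

// Hypothetical sketch: delivery is only guaranteed for events that arrive
// *after* SESSION_ESTABLISHED, so the client gates its work on that signal.
@ClientEndpoint
public class SessionAwareClient {

    private final CountDownLatch established = new CountDownLatch(1);

    @OnMessage
    public void onMessage(String message) {
        // Assumed wire format: the server sends the literal token
        // "SESSION_ESTABLISHED" once after the connection opens.
        if ("SESSION_ESTABLISHED".equals(message)) {
            established.countDown(); // from here on, delivery is guaranteed
            return;
        }
        handleChannelEvent(message);
    }

    /** Blocks until the delivery guarantee is in effect. */
    public void awaitEstablished() throws InterruptedException {
        established.await();
    }

    private void handleChannelEvent(String message) {
        // application-specific handling of channel events
    }
}
```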
TCP has built-in means of detecting transmission errors and packet corruption and retransmitting in those cases, but delivery can still fail. What TCP does guarantee is that if the data is not delivered, the caller will eventually get an error, so the failure is at least detectable. Since WebSocket is built on top of TCP, it has the same limitation.
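In a Java client that error surfaces as an IOException on the send call (or via the endpoint's error/close callbacks). A minimal illustration, with a made-up helper name; note that even a successful return only means the bytes were handed to the local TCP stack, not that the peer received them:

```java
import javax.websocket.Session;
import java.io.IOException;

// Hypothetical helper: demonstrates how a broken connection is reported
// to the caller when sending over javax.websocket.
public final class SendHelper {

    static boolean trySend(Session session, String payload) {
        try {
            session.getBasicRemote().sendText(payload);
            return true;  // handed to the TCP stack; NOT proof the peer got it
        } catch (IOException e) {
            // The connection is broken. Messages sent shortly before this
            // point may also have been lost even though they raised no error.
            return false;
        }
    }
}
```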
A WebSocket connection can in theory last forever. Assuming both endpoints remain up, one common reason long-lived TCP connections eventually terminate is inactivity: idle connections are often dropped by intermediaries such as NAT devices, proxies, and load balancers.
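The usual countermeasure is a periodic WebSocket ping so the connection never looks idle. A small keepalive sketch using the JSR 356 (javax.websocket) API; the 30-second interval is an assumption you would tune to your infrastructure's idle timeout:

```java
import javax.websocket.Session;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Keepalive sketch: sends a WebSocket ping at a fixed interval so
// intermediaries don't drop the connection as idle.
public final class KeepAlive {

    public static ScheduledExecutorService start(Session session) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try {
                session.getBasicRemote().sendPing(ByteBuffer.wrap("ping".getBytes()));
            } catch (IOException e) {
                scheduler.shutdown(); // connection is gone; stop pinging
            }
        }, 30, 30, TimeUnit.SECONDS); // assumed interval; tune to your setup
        return scheduler;
    }
}
```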
It can happen. TCP guarantees the order of packets, but it does not guarantee that every packet sent from a server reaches the client when unrecoverable trouble occurs in the underlying network. Imagine someone pulls out your LAN cable or switches off your Wi-Fi access point at the worst possible moment while your application is communicating with your server. TCP cannot overcome that kind of failure.
To ensure that every WebSocket message sent from your server actually reaches your client, you have to implement some kind of acknowledgement mechanism (analogous to TCP's SYN/ACK) at the application layer.
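For example, the server can attach an id to every message and re-send anything the client has not acknowledged within some timeout. Below is a minimal client-side sketch, assuming a made-up "&lt;id&gt;|&lt;payload&gt;" wire format (the frame layout and class name are illustrative, not part of any standard):

```java
import javax.websocket.ClientEndpoint;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import java.io.IOException;

// Application-level acknowledgement sketch. Assumed wire format:
// incoming frames look like "<id>|<payload>"; the client replies "ACK|<id>".
@ClientEndpoint
public class AckingClient {

    @OnMessage
    public void onMessage(Session session, String frame) throws IOException {
        int sep = frame.indexOf('|');
        if (sep < 0) {
            return; // not a tracked message in this sketch's format
        }
        String id = frame.substring(0, sep);
        String payload = frame.substring(sep + 1);

        process(payload);

        // Acknowledge only after the message has been handled, so a crash
        // before this point makes the server re-deliver it.
        session.getBasicRemote().sendText("ACK|" + id);
    }

    private void process(String payload) {
        // application-specific handling
    }
}
```

Acknowledging after processing (rather than on receipt) means a client crash results in a re-delivery instead of a silent loss; the server side would keep a per-client queue of unacknowledged ids and re-send them on a timer or on reconnect.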