My node server app is using the ws WebSocket library:
socket.send(data, err => {
  if (err == null) { /* ...success... */ }
  else { /* ...failure... */ }
});
The client listens for the message also using ws:
ws.on('message', message => ...handler...);
I'm hunting down a bug where sometimes, if the client's network connection is lost (e.g. I turn Wi-Fi off), the server reports that a message has been sent successfully but the client's handler is never invoked.
QUESTION: Is this possible? Is this indeed one of the expected failure modes of the 'ws' WebSocket library? What does success from socket.send() actually prove?
(I know that all network protocols have windows of vulnerability in them somewhere. I'm trying to put my finger on where precisely that window is for the ws library so I can build the right semantics on top of it. I haven't yet found in the docs where exactly that window is for ws.)
MAYBE: socket.send() reports success once it has placed the message in its outgoing TCP buffer. But then if the network goes down before the message has left the TCP buffer, the message will be lost forever.
MAYBE: socket.send() reports success once the message has been delivered into the recipient's TCP receive queue and a TCP ACK has come back confirming delivery, but then if the client's network goes down before the handler has been invoked, the message will be lost forever.
Short answer: No. Long answer: WebSocket runs over TCP, so at that level @EJP's answer applies. WebSocket can be "intercepted" by intermediaries (like WS proxies): those are allowed to reorder WebSocket control frames (i.e. WS pings/pongs), but not message frames when no WebSocket extension is in place.
The only way to know for sure that the client received a webSocket message is to have the client send a custom message back to the server indicating receipt, and to wait for that message on the server.
That's the ONLY end-to-end test that is guaranteed. Anything else relies on something lower in the chain, which does not tell you that the client actually received and processed the message. Because you want to know that the application level has received and processed the message, you have to ACK back at the application level to legitimately know that the application level received it.
FYI, socket.io (built on top of webSocket) will do this for you. If you pass a callback to socket.emit() in socket.io, it will request an ack back from the client, and when that callback is called you will know the message was actually received. You can implement it yourself in webSocket, or use socket.io, which has the feature built in.
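For illustration, here is a minimal sketch of socket.io's acknowledgement callback; the event name 'chat', port 3000, and payloads are made up for the example, not anything from the question:

// Server
const io = require('socket.io')(3000);

io.on('connection', socket => {
  // The extra callback argument asks socket.io for an application-level ack;
  // it fires only after the client invokes its ack function.
  socket.emit('chat', { text: 'hello' }, ackPayload => {
    console.log('client confirmed receipt:', ackPayload);
  });
});

// Client
const socket = require('socket.io-client')('http://localhost:3000');

socket.on('chat', (data, ack) => {
  // ...handle data...
  ack({ ok: true }); // tells the server the message was received and processed
});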
I do not actually know at what level websocket.send() decides success, but the main point is that it is at a level lower than the application level (as you've already outlined in your question), so if you want application-level certainty you can't rely on it alone - you have to add an additional application-level ack, or use the one already built into socket.io.
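If you roll your own ack on top of ws, one possible shape is sketched below. Note that sendWithAck is a hypothetical helper, not part of the ws API, and the message format and timeout are assumptions; "socket" is the server-side connection from the question and "ws" is the client-side one:

const crypto = require('crypto');

const pending = new Map(); // id -> { resolve, reject, timer }

// Server side: wrap each outgoing message with an id and wait for the client to echo it back.
function sendWithAck(socket, payload, timeoutMs = 5000) {
  return new Promise((resolve, reject) => {
    const id = crypto.randomUUID();
    const timer = setTimeout(() => {
      pending.delete(id);
      reject(new Error('no application-level ack within ' + timeoutMs + ' ms'));
    }, timeoutMs);
    pending.set(id, { resolve, reject, timer });
    socket.send(JSON.stringify({ id, payload }), err => {
      if (err) { clearTimeout(timer); pending.delete(id); reject(err); }
    });
  });
}

// Server side: resolve the pending promise when the client's ack arrives.
socket.on('message', raw => {
  const msg = JSON.parse(raw.toString());
  if (msg.type === 'ack' && pending.has(msg.id)) {
    const { resolve, timer } = pending.get(msg.id);
    clearTimeout(timer);
    pending.delete(msg.id);
    resolve();
  }
});

// Client side: process the payload first, then echo the id back as the ack.
ws.on('message', raw => {
  const { id, payload } = JSON.parse(raw.toString());
  // ...handle payload...
  ws.send(JSON.stringify({ type: 'ack', id }));
});

With this in place, await sendWithAck(socket, data) resolves only after the client has handled the message, which is the application-level guarantee the question is after.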
According to the documentation, the callback is called without an error when the message has been sent successfully. I suspect (but I didn't look very closely) this means that it has been handed off successfully to the OS (placed in the send queue). I'm pretty sure it doesn't mean that the client has sent a TCP ACK.
The "problem" with TCP connections is that because of their resilience, it may take a long time before a server gets notified that a connection to a client has been lost (because it may not actually have been lost permanently), and without some sort of acknowledgement layer on top of the connection, you will never know if a message has actually been delivered to a client.
The ws documentation provides some example code on how to detect broken connections: https://github.com/websockets/ws#how-to-detect-and-close-broken-connections
TL;DR: periodically send a "ping" message to each client, and expect that the client returns a "pong" message within a certain timeframe.
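A sketch of that pattern, close to the README example (the 30-second interval and port 8080 are arbitrary choices):

const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', ws => {
  ws.isAlive = true;
  // Compliant clients (browsers, the ws client) answer pings with pongs automatically.
  ws.on('pong', () => { ws.isAlive = true; });
});

const interval = setInterval(() => {
  wss.clients.forEach(ws => {
    // No pong since the last round: assume the connection is broken and close it.
    if (ws.isAlive === false) return ws.terminate();
    ws.isAlive = false;
    ws.ping();
  });
}, 30000);

wss.on('close', () => clearInterval(interval));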