
At what point are WebSockets less efficient than Polling?

While I understand that the answer to the above question is somewhat determined by your application's architecture, I'm interested mostly in very simple scenarios.

Essentially, if my app is polling for changes every 5 seconds, or every minute, at roughly what point does the data sent just to maintain the open WebSocket connection exceed the amount you would waste by simple polling?

Basically, I'm interested in whether there's a way to quantify how much inefficiency you incur by using frameworks like Meteor when an application doesn't necessarily need real-time updates, only periodic checks.

Note that my focus here is on bandwidth utilization, not necessarily database access times, since frameworks like Meteor have highly optimized methods of requesting only updates to the database.

asked Jun 23 '17 by Ecksters




2 Answers

The whole point of a websocket connection is that you don't ever have to ping the app for changes. Instead, the client just connects once and then the server can just directly send the client changes whenever they are available. The client never has to ask. The server just sends data when it's available.

For any kind of server-initiated data, this is far more bandwidth-efficient than HTTP polling. It also gives you much more timely results: data is delivered immediately rather than discovered by the client only on the next polling interval.

For pure bandwidth usage, the details depend on the exact circumstances. An HTTP polling request has to set up a TCP connection and confirm that connection (even more data if it's an SSL/TLS connection), then it has to send the HTTP request, including any relevant cookies that belong to that host, the relevant headers, and the GET URL. Then the server has to send a response. And most of the time, all of this polling overhead is completely wasted bandwidth because there's nothing new to report.
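To make that per-request overhead concrete, here's a rough back-of-the-envelope sketch in Python. The header and cookie contents are illustrative assumptions for a typical small polling request, not measurements from any real app:

```python
# Rough, illustrative byte counts for one empty HTTP/1.1 polling round trip.
# All header contents below are assumptions; real values vary per app/browser.

request_line = len("GET /api/changes?since=12345 HTTP/1.1\r\n")
request_headers = len(
    "Host: example.com\r\n"
    "User-Agent: Mozilla/5.0 (X11; Linux x86_64) ...\r\n"
    "Accept: application/json\r\n"
    "Cookie: session=abcdef0123456789; csrftoken=...\r\n"
    "Connection: keep-alive\r\n\r\n"
)
empty_response = len(
    "HTTP/1.1 304 Not Modified\r\n"
    "Date: Fri, 23 Jun 2017 23:06:00 GMT\r\n"
    "Cache-Control: no-cache\r\n\r\n"
)

# Total bytes spent learning that nothing changed (TCP/TLS setup not counted).
per_poll = request_line + request_headers + empty_response
print(per_poll)  # several hundred bytes, even with nothing to report
```

Even this modest example lands in the hundreds of bytes per empty poll, and that's before counting TCP/TLS setup or a realistically bloated cookie header.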

A webSocket starts with a simple http request, then upgrades the protocol to the webSocket protocol. The webSocket connection itself need not send any data at all until the server has something to send to the client, in which case the server just sends the packet. Sending the data itself has far less overhead, too. There are no cookies, no headers, etc... just the data. Even if you use some keep-alives on the webSocket, that amount of data is incredibly tiny compared to the overhead of an HTTP request.

So exactly how much bandwidth you would save depends on the details of the circumstances. If it takes 50 polling requests before any useful data turns up, then every one of those http requests is entirely wasted compared to the webSocket scenario. The difference in bandwidth can be enormous.

You asked about an application that only needs periodic checks. As soon as you have a periodic check that results in no data being retrieved, that's wasted bandwidth. That's the whole idea of a webSocket. You consume no bandwidth (or close to no bandwidth) when there's no data to send.
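A quick sketch of an idle hour makes the comparison concrete. The per-message sizes here are rough assumptions (about 300 bytes per empty poll round trip, and standard webSocket ping/pong frame sizes for the keep-alives), not measurements:

```python
# Idle hour, no new data: HTTP polling vs. a webSocket with keep-alives.
# Assumed sizes: ~300 bytes per empty poll round trip; a client ping frame
# is ~6 bytes (2-byte header + 4-byte masking key), the server's pong ~2.

SECONDS = 3600
poll_interval = 5            # poll every 5 seconds
bytes_per_poll = 300         # assumed empty request + empty response

ping_interval = 30           # one keep-alive ping every 30 seconds
bytes_per_ping_pong = 6 + 2

polling_total = (SECONDS // poll_interval) * bytes_per_poll
websocket_total = (SECONDS // ping_interval) * bytes_per_ping_pong

print(polling_total, websocket_total)   # 216000 vs 960
print(polling_total / websocket_total)  # polling costs ~225x more here
```

Under these assumptions, an idle polling client burns roughly two orders of magnitude more bandwidth than an idle webSocket, and the gap only widens with shorter polling intervals or heavier cookies.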

answered Oct 08 '22 by jfriend00


I believe @jfriend00 answered the question very clearly. However, I do want to add a thought.

By considering a worst-case (and improbable) scenario for Websockets vs. HTTP, you can see clearly that a Websocket connection will always have the advantage with regard to bandwidth (and probably all-round performance).

This is the worst-case scenario for Websockets vs. HTTP:

  • your code uses Websocket connections the exact same way it uses HTTP requests, for polling.

    (which isn't something you would do, I know, but it is a worst case scenario).

  • Every polling event is answered positively - meaning that no HTTP requests were performed in vain.

This is the worst possible situation for Websockets, which are designed for pushing data rather than polling... even in this situation Websockets will save you both bandwidth and CPU cycles.

Seriously, even ignoring the DNS query (performed by the client, so you might not care about it) and the TCP/IP handshake (which is expensive for both the client and the server), a Websocket connection is still more performant and cost-effective.

I'll explain:

Each HTTP request includes a lot of data, such as cookies and other headers. In many cases, each HTTP request is also subject to client authentication... rarely is data given away to anybody.

This means that HTTP connections pass all this data (and possibly perform client authentication) once per request. [Stateless]

However, Websocket connections are stateful. The data is sent only once (instead of each time a request is made). Client authentication occurs only during the Websocket connection negotiation.

This means that Websocket connections pass the same data (and possibly perform client authentication) once per connection (once for all polls).

So even in this worst case scenario, where polling is always positive and Websockets are used for polling instead of pushing data, Websockets will still save your server both bandwidth and other resources (i.e. CPU time).
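The per-message overhead in that worst case is small enough to show concretely. Below is a minimal sketch of building a client-to-server Websocket text frame as RFC 6455 defines it for payloads under 126 bytes; the payload content is just an illustrative example:

```python
import os

def ws_client_frame(payload: bytes) -> bytes:
    """Build a minimal client-to-server Websocket text frame (payload < 126 bytes):
    one FIN+opcode byte, one MASK-bit+length byte, a 4-byte masking key,
    then the masked payload (RFC 6455 requires clients to mask)."""
    assert len(payload) < 126
    mask = os.urandom(4)
    masked = bytes(b ^ mask[i % 4] for i, b in enumerate(payload))
    return bytes([0x81, 0x80 | len(payload)]) + mask + masked

payload = b'{"changed":true}'
frame = ws_client_frame(payload)
overhead = len(frame) - len(payload)
print(overhead)  # prints 6: 2-byte header + 4-byte masking key
```

So even a client poll sent over a Websocket carries a fixed 6 bytes of framing (and server-to-client frames carry only 2, since they are unmasked), versus the hundreds of bytes of headers and cookies repeated on every HTTP request.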

I think the answer to your question, simply put, is "never". Websockets are never less efficient than polling.

answered Oct 08 '22 by Myst