WebSockets are full-duplex, meaning both the client and the server can send and receive messages across the channel. Long Polling is half-duplex, meaning a new request-response cycle is required each time the client wants to communicate something to the server.
HTTP Long Polling is a technique for pushing information to a client as soon as it becomes available on the server. In Long Polling, the server does not close the connection once it receives a request from the client; it holds the request open and responds only when it has an update to send, so it does not have to wait for the client's next request in order to deliver that update.
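To make that concrete, here's a minimal sketch of a long-polling loop on the client side (the `/updates` endpoint name is just a placeholder, and the server is assumed to hold each request open until it has something to send):

```typescript
// Minimal long-polling loop (sketch). The "/updates" endpoint is hypothetical;
// the server is assumed to hold each request open until it has data, then respond.
async function longPoll(url: string, onUpdate: (data: unknown) => void): Promise<void> {
  while (true) {
    try {
      const response = await fetch(url);               // stays pending until the server answers
      if (response.ok) {
        onUpdate(await response.json());               // deliver the update
      }
    } catch {
      await new Promise((resolve) => setTimeout(resolve, 1000)); // back off briefly on errors
    }
    // loop around and immediately open the next request
  }
}

longPoll("/updates", (data) => console.log("update:", data));
```

Notice that every update still costs a full request-response cycle, which is exactly the half-duplex limitation described above.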
WebSocket keeps a single connection open, eliminating the latency problems that arise with Long Polling. Full-duplex asynchronous messaging is supported, so both the client and the server can stream messages to each other independently.
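As a rough illustration, here's what that looks like from the browser (the URL is a placeholder, not a real endpoint):

```typescript
// Full-duplex messaging over a single WebSocket connection (sketch).
// "wss://example.com/socket" is a placeholder URL.
const socket = new WebSocket("wss://example.com/socket");

socket.addEventListener("open", () => {
  // the client can send whenever it likes...
  socket.send(JSON.stringify({ type: "subscribe", channel: "prices" }));
});

socket.addEventListener("message", (event) => {
  // ...and the server can push messages at any time, independently of the client
  console.log("server pushed:", event.data);
});
```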
Long polling opens an HTTP request that remains open until an update is received. Upon receiving an update, a new request is immediately opened to await the next one. Server-Sent Events (SSE) rely on a long-lived HTTP connection over which updates are continuously sent to the client.
There are more differences than the ones you have identified.
Duplex/directional:
In order of increasing latency (approximate):
CORS (cross-origin support):
Native binary data (typed arrays, blobs):
Bandwidth in decreasing efficiency:
Mobile device support:
JavaScript usage complexity (from simplest to most complicated). Admittedly, complexity measures are somewhat subjective.
Also note that there is a W3C proposal for standardizing HTTP streaming called Server-Sent Events. It is currently fairly early in its evolution and is designed to provide a standard JavaScript API with comparable simplicity to WebSockets.
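For reference, the JavaScript API it defines is EventSource; a minimal sketch looks like this (the `/events` path is a placeholder, and the server must respond with `Content-Type: text/event-stream`):

```typescript
// Server-Sent Events via the EventSource API (sketch). "/events" is a placeholder.
const source = new EventSource("/events");

source.onmessage = (event) => {
  console.log("server sent:", event.data);  // one-way: server -> client only
};

source.onerror = () => {
  // EventSource reconnects automatically after transient errors
  console.log("connection lost, retrying...");
};
```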
Some great answers from others that cover a lot of ground. Here's a little bit extra.
The only advantage of WebSockets over plugins like Java Applets, Flash or Silverlight is that WebSockets are natively built into browsers and do not rely on plugins.
If by this you mean that you can use Java Applets, Flash, or Silverlight to establish a socket connection, then yes, that is possible. However you don't see that deployed in the real world too often because of the restrictions.
For example, intermediaries can and do shutdown that traffic. The WebSocket standard was designed to be compatible with existing HTTP infrastructure and so is far less prone to being interfered with by intermediaries like firewalls and proxies.
Moreover, WebSocket can use port 80 and 443 without requiring dedicated ports, again thanks to the protocol design to be as compatible as possible with existing HTTP infrastructure.
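That compatibility is visible even at the API level: a WebSocket connection starts life as an ordinary HTTP request asking to upgrade, and the `ws://` and `wss://` schemes default to ports 80 and 443 (the host names below are placeholders):

```typescript
// A WebSocket connection begins as a normal HTTP request that asks to upgrade:
//   GET /chat HTTP/1.1
//   Upgrade: websocket
//   Connection: Upgrade
// which is why it can share ports 80 and 443 with existing HTTP(S) traffic.
const plain  = new WebSocket("ws://example.com/chat");   // defaults to port 80
const secure = new WebSocket("wss://example.com/chat");  // defaults to port 443
```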
Those socket alternatives (Java, Flash, and Silverlight) are difficult to use securely in a cross-origin architecture. Thus, people attempting to use them cross-origin will often tolerate the insecurities rather than go to the effort of doing it securely.
They can also require additional "non-standard" ports to be opened (something administrators are loath to do) or policy files that need to be managed.
In short, using Java, Flash, or Silverlight for socket connectivity is problematic enough that you don't see it deployed in serious architectures too often. Flash and Java have had this capability for probably at least 10 years, and yet it's not prevalent.
The WebSocket standard was able to start with a fresh approach, bearing those restrictions in mind, and hopefully having learned some lessons from them.
Some WebSocket implementations use Flash (or possibly Silverlight and/or Java) as their fallback when WebSocket connectivity cannot be established (such as when running in an old browser or when an intermediary interferes).
While some kind of fallback strategy for those situations is smart, even necessary, most of those that use Flash et al will suffer from the drawbacks described above. It doesn't have to be that way -- there are workarounds to achieve secure cross-origin capable connections using Flash, Silverlight, etc -- but most implementations won't do that because it's not easy.
For example, if you rely on WebSocket for a cross-origin connection, that will work fine. But if you then run in an old browser or a firewall/proxy interfered and rely on Flash, say, as your fallback, you will find it difficult to do that same cross-origin connection. Unless you don't care about security, of course.
That means it's difficult to have a single unified architecture that works for native and non-native connections, unless you're prepared to put in quite a bit of work or go with a framework that has done it well. In an ideal architecture, you wouldn't notice whether the connections were native or not; your security settings would work in both cases; your clustering settings would still work; your capacity planning would still hold; and so on.
The only advantage of WebSockets over HTTP streaming is that you don't have to make an effort to "understand" and parse the data received.
It's not as simple as opening up an HTTP stream and sitting back as your data flows for minutes, hours, or longer. Different clients behave differently and you have to manage that. For example some clients will buffer up the data and not release it to the application until some threshold is met. Even worse, some won't pass the data to the application until the connection is closed.
So if you're sending multiple messages down to the client, it's possible that the client application won't receive the data until 50 messages worth of data has been received, for example. That's not too real-time.
While HTTP streaming can be a viable alternative when WebSocket is not available, it is not a panacea. It needs a good understanding to work in a robust way out in the badlands of the Web in real-world conditions.
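To give a feel for what consuming an HTTP stream involves, here's a sketch using the modern fetch streaming API (the `/stream` endpoint is a placeholder; note that intermediaries or older clients may still buffer chunks before they ever reach this code, which is exactly the problem described above):

```typescript
// Reading an HTTP streaming response incrementally (sketch). "/stream" is a placeholder.
async function consumeStream(url: string, onChunk: (text: string) => void): Promise<void> {
  const response = await fetch(url);
  if (!response.body) throw new Error("streaming not supported");

  const reader = response.body.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;                                      // the server closed the stream
    onChunk(decoder.decode(value, { stream: true }));     // deliver each chunk as it arrives
  }
}

consumeStream("/stream", (text) => console.log("chunk:", text));
```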
Are there any other significant differences that I am missing?
There is one other thing that no one has mentioned yet, so I'll bring it up.
The WebSocket protocol was designed to be a transport layer for higher-level protocols. While you can send JSON messages or what-not directly over a WebSocket connection, it can also carry standard or custom protocols.
For example, you could do AMQP or XMPP over WebSocket, as people have already done. So a client could receive messages from an AMQP broker as if it were connected directly to the broker itself (and in some cases it is).
Or if you have an existing server with some custom protocol, you can transport that over WebSocket, thus extending that back-end server to the Web. Often an existing application that has been locked in the enterprise can broaden its reach using WebSocket, without having to change any of the back-end infrastructure.
(Naturally, you'd want to be able to do all that securely so check with the vendor or WebSocket provider.)
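The browser API even has subprotocol negotiation built in: the second argument to the WebSocket constructor asks the server to speak a particular protocol (the URL and the "amqp" token below are illustrative, not a specific broker):

```typescript
// Requesting a higher-level protocol over WebSocket (sketch).
// The URL and the "amqp" subprotocol token are illustrative placeholders.
const ws = new WebSocket("wss://broker.example.com/", ["amqp"]);

ws.addEventListener("open", () => {
  // ws.protocol reports which subprotocol the server actually accepted
  console.log("negotiated subprotocol:", ws.protocol);
});
```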
Some people have referred to WebSocket as TCP for the Web, because just like TCP transports higher-level protocols, so does WebSocket, but in a way that's compatible with Web infrastructure.
So while sending JSON (or whatever) messages directly over WebSocket is always possible, one should also consider existing protocols. Because for many things you want to do, there's probably a protocol that's already been thought of to do it.
I'm sorry if I am re-asking or combining many of the questions already on SO into a single question, but I just want to make perfect sense out of all the info that is out there on SO and the web regarding these concepts.
This was a great question, and the answers have all been very informative!
If I may ask one additional thing: I came across an article somewhere that says HTTP streaming may also be cached by proxies while WebSockets cannot be. What does that mean?
(StackOverflow limits the size of comment responses, so I've had to answer here rather than inline.)
That's a good point. To understand this, think about a traditional HTTP scenario... Imagine a browser opening a web page: it requests http://example.com, say. The server responds with HTTP containing the HTML for the page. Then the browser sees that the page references other resources, so it starts requesting the CSS files, JavaScript files, and images. These are all static files that will be the same for every client requesting them.
Some proxies will cache static resources so that subsequent requests from other clients can get those static resources from the proxy, rather than having to go all the way back to the central web server to get them. This is caching, and it's a great strategy to offload requests and processing from your central services.
So client #1 requests http://example.com/images/logo.gif, say. That request goes through the proxy all the way to the central web server, which serves up logo.gif. As logo.gif passes through the proxy, the proxy will save that image and associate it with the address http://example.com/images/logo.gif.
When client #2 comes along and also requests http://example.com/images/logo.gif, the proxy can return the image and no communication is required back to the web server in the center. This gives a faster response to the end user, which is always great, but it also means that there is less load on the center. That can translate to reduced hardware costs, reduced networking costs, etc. So it's a good thing.
The problem arises when logo.gif is updated on the web server: the proxy will continue to serve the old image, unaware that there is a new one. This leads to a whole mechanism around expiry, whereby the proxy only caches the image for a short time before it "expires" and the next request goes through the proxy to the web server, which then refreshes the proxy's cache. There are also more advanced solutions where a central server can push out to known caches, and so on, and things can get pretty sophisticated.
How does this tie in to your question?
You asked about HTTP streaming where the server is streaming HTTP to a client. But streaming HTTP is just like regular HTTP except you don't stop sending data. If a web server serves an image, it sends HTTP to the client that eventually ends: you've sent the whole image. And if you want to send data, it's exactly the same, but the server just sends for a really long time (like it's a massively gigantic image, say) or even never finishes.
From the proxy's point of view, it cannot distinguish between HTTP for a static resource like an image, or data from HTTP streaming. In both of those cases, the client made a request of the server. The proxy remembered that request and also the response. The next time that request comes in, the proxy serves up the same response.
So if your client made a request for stock prices, say, and got a response, then the next client may make the same request and get the cached data. Probably not what you want! If you request stock prices you want the latest data, right?
So it's a problem.
There are tricks and workarounds to handle problems like that, it is true. Obviously you can get HTTP streaming to work, since it's in use today. It's all transparent to the end user, but the people who develop and maintain those architectures have to jump through hoops and pay a price. It results in over-complicated architectures, which means more maintenance, more hardware, more complexity, more cost. It also means developers often have to care about something they shouldn't have to, when they should just be focusing on the application, GUI, and business logic -- they shouldn't have to be concerned about the underlying communication.
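Just to illustrate one of those workarounds: a server can at least mark the streamed response as uncacheable using standard HTTP headers. Here's a minimal Node.js-style sketch (the endpoint and payload are made up for illustration), though, as noted, headers alone don't solve every intermediary problem:

```typescript
// Marking a streamed HTTP response as uncacheable (sketch, Node.js http module).
import { createServer } from "http";

createServer((req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/plain",
    "Cache-Control": "no-cache, no-store, must-revalidate", // tell proxies not to reuse this
  });

  // keep the response open and push a chunk every second
  const timer = setInterval(() => res.write(`price update ${Date.now()}\n`), 1000);
  req.on("close", () => clearInterval(timer));               // stop when the client disconnects
}).listen(8080);
```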
HTTP limits the number of connections a client can have with a server to 2 (although this can be mitigated by using subdomains), and IE has been known to enforce this eagerly. Firefox and Chrome allow more (although I can't remember off the top of my head exactly how many). This might not seem like a huge issue, but if you are constantly using one connection for real-time updates, all other requests have to bottleneck through the other HTTP connection. And there is the fact that having more open connections from clients puts more load on the server.
WebSockets are a TCP-based protocol and as such don't suffer from this HTTP-level connection limit (but, of course, browser support is not uniform).