 

Implementation of server-side responses to long polling via REST API

Tags:

rest

http

Say you are designing a REST API over HTTP for a server "room" where subscribing clients want to monitor public events happening to the room (e.g. a new participant joins the room, another one leaves the room, and so on...) by making long poll requests.

What is the best way to implement this from a server side point of view so that the client will not miss any events between consecutive polls? For example, should the server implement a queue of events which need to exist in the queue until all the subscribers have got them?

Are there any tutorials, examples, some theory on internet about designing such an API and all the things that should be taken into account from the server perspective?

Asked by Martin, Jul 10 '16

People also ask

How is long polling implemented?

Rather than having to repeat this process multiple times for every client until new data for a given client becomes available, long polling is a technique where the server elects to hold a client's connection open for as long as possible, delivering a response only after data becomes available or a timeout threshold is reached.

What is long polling in rest?

Long polling is a method that server applications use to hold a client connection until information becomes available. This is often used when a server must call a downstream service to get information and await a result.

How polling is implemented?

This is how polling works in a nutshell: The client makes a request to the server just like a simple HTTP request. The server responds to the client but has not finished processing the request. The client polls the server after some interval to see if the request has been processed.

What is the polling technique for making HTTP requests to a server?

To overcome this deficiency, Web app developers can implement a technique called HTTP long polling, where the client polls the server requesting new information. The server holds the request open until new data is available. Once available, the server responds and sends the new information.


3 Answers

Very short answer - why not just use Event Store?

Short answer - why not just use Event Store as a reference implementation, and adapt their solution to match your implementation constraints?

What is the best way to implement this from a server side point of view so that the client will not miss any events between consecutive polls? For example, should the server implement a queue of events which need to exist in the queue until all the subscribers have got them?

REST by itself offers a few guidelines. There should be no application state stored on the server; the message sent by the client should include any client-side state (like the current position in the event stream) that the server will need to fulfill the request. The resource identified in the request is an abstraction - so the client can send messages to, for example, "the event that comes after event 7", which makes sense even if that next event doesn't exist yet. The uniform interface should be respected, to allow for scaling via caches and the like that are outside of the control of the server. The representation of the state of the resource should be hypermedia, with controls that allow the client to advance after it has consumed the currently available messages.

HTTP throws in a few more specifics. Since there is no tracking of client state on the server, reading from the queue is a safe operation. Therefore, one of the safe HTTP methods (GET, to be precise) should be used for the read. Since GET doesn't support a content body in the request, the information that the server will need should all be packed into the URI and headers of the request.

In other words, the URI is used to specify the current position of the client in the event stream.
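For example, a read of the stream might look like this on the wire (the path and query parameter here are illustrative, not prescribed by REST):

GET /rooms/5/events?after=7 HTTP/1.1
Host: chat.example.org
Accept: application/atom+xml

Because the client's position is part of the request target, pages that lie entirely in the past never change, so intermediaries can cache them - which is exactly the kind of scaling the uniform interface is meant to enable.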

Atom Syndication provides a good hypermedia format for event processing - the event stream maps to a feed, events map to entries.
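For illustration, one page of such a feed might look roughly like this (the URIs, IDs, and timestamps are made up; the prev-archive link relation is the paging mechanism described in RFC 5005):

<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Room 5 events</title>
  <id>urn:example:room-5:events</id>
  <updated>2016-07-10T10:07:00Z</updated>
  <link rel="self" href="/rooms/5/events?page=3"/>
  <link rel="prev-archive" href="/rooms/5/events?page=2"/>
  <entry>
    <id>urn:example:room-5:event:8</id>
    <title>participant joined</title>
    <updated>2016-07-10T10:07:00Z</updated>
  </entry>
</feed>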

By themselves, those pieces give you a big head start on an event processor that conforms to the REST architectural constraints. You just need to bolt long polling onto it.

To get a rough idea of how you might implement long polling on your own, you can take a look at the ticketing demo written by Michael Barker (maintainer of the LMAX Disruptor).

The basic plot in Michael's demo is that a single writer thread is tracking (a) all of the clients currently waiting for an update and (b) the local cache of events. That thread reads a batch of events, identifies which requests need to be notified, responds to each of those requests in turn, and then advances to process the next batch of events.

I tend to think of the local cache of events as a ring buffer (like the disruptor itself, but private to the writer thread). The writer thread knows (from the information in the HTTP request) the position of each client in the event stream. Comparing that position to the current pointer in the ring buffer, each pending request can be classified as one of the four cases below (a small sketch of the classification follows them).

Far past: The position that the client is seeking has already been evicted from the cache. Redirect the client to a "cold" persistent copy of that location in the stream, where it can follow the hypermedia controls to catch up to the present.

Recent past: The position that the client is seeking is currently available in the cache, so immediately generate a response to the client with the events that are available, and dispatch that response.

Near future: The position that the client is seeking is not available in the cache, but the writer anticipates being able to satisfy that request before the SLA expires. So we park the client until more events arrive.

Far future: The position that the client is seeking is not available in the cache, and we don't anticipate that we will be able to satisfy the request in the allotted time. So we just respond now, and let the client decide what to do.
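Here is a minimal sketch of that classification, assuming a ring buffer that retains the last CACHE_SIZE events and a head counter holding the sequence number of the newest cached event (all names and numbers are invented for illustration):

# Hypothetical sketch of the four-way classification above.
CACHE_SIZE = 4096   # events retained in the writer thread's ring buffer
LOOKAHEAD = 64      # how far ahead we expect to fill before the SLA expires

def classify(requested, head):
    # requested: next event sequence number the client is seeking
    # head: sequence number of the newest event currently in the cache
    oldest = head - CACHE_SIZE + 1
    if requested < oldest:
        return "far past"     # redirect to the cold, persistent archive
    if requested <= head:
        return "recent past"  # respond immediately with the cached events
    if requested <= head + LOOKAHEAD:
        return "near future"  # park the request until more events arrive
    return "far future"       # respond now, empty-handed; the client decides

The LOOKAHEAD threshold encodes the SLA judgment: how many events past the head the writer believes it can fill before the long poll has to be answered.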

(If you get enough polling clients that you need to start scaling out the long polling server, you need to consider the case where those servers get out of sync, and a client gets directed from a fast server to one that has fallen behind. So you'll want to have instrumentation in place that lets you track how often this is happening, so that you can apply the appropriate remedy).

There are also some edge cases to consider: if a very large batch comes in, then you may need to evict the events your clients are waiting on before you get a chance to send them.

Answered by VoiceOfUnreason, Oct 23 '22


Simple: have the client pass in the timestamp (or id, or index) of the last message they received.

Requesting GET /rooms/5/messages returns all the messages the server knows about, like

[
  {
    "message": "hello",
    "timestamp": "2016-07-18T18:44:34Z"
  },
  {
    "message": "world",
    "timestamp": "2016-07-18T18:47:16Z"
  }
]

The client then long polls the server with GET /rooms/5/messages?since=2016-07-18T18:47:16Z which returns either all the messages since that time (if there are any) or blocks until the room has a new message.
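A minimal server-side sketch of that blocking behaviour, using only Python's standard library (the in-memory store, the 30-second timeout, and the function names are invented for illustration; ISO-8601 UTC timestamps compare correctly as plain strings):

import threading

messages = []                     # in-memory store of {"message": ..., "timestamp": ...}
new_message = threading.Condition()

def post_message(msg):
    # Called whenever a new message is produced in the room.
    with new_message:
        messages.append(msg)
        new_message.notify_all()  # wake every parked long-poll request

def get_messages(since, timeout=30.0):
    # Handler for GET /rooms/5/messages?since=...: return messages newer
    # than `since`, blocking until one arrives or the timeout expires.
    with new_message:
        new_message.wait_for(
            lambda: any(m["timestamp"] > since for m in messages),
            timeout=timeout)
        return [m for m in messages if m["timestamp"] > since]

On timeout the handler simply returns an empty list, which the client can treat as a signal to poll again.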

Answered by kag0, Oct 23 '22


Send a reference number with every event. The client calls with the reference number of the latest event it has received. If no newer event is available, block the long-poll request and respond once an event arrives, with the new reference number. If newer events are already available, return all events generated after the one identified by the request's reference number.
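On the client side this becomes a simple loop. A sketch using the Python requests library (the URL, the after parameter, and the ref field are illustrative, not prescribed):

import requests

last_ref = 0  # reference number of the latest event received so far
while True:
    try:
        # The server parks this request until events newer than `after` exist.
        resp = requests.get("https://chat.example.org/rooms/5/events",
                            params={"after": last_ref}, timeout=60)
    except requests.exceptions.Timeout:
        continue                  # nothing arrived in time; just poll again
    for event in resp.json():
        print(event)              # application-specific handling goes here
        last_ref = event["ref"]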

Answered by Asad Durrani, Oct 23 '22