I have a Java 8 / Spring 4-based web application that is reporting the progress of a long-running process using Server-Sent Events (SSEs) to a browser-based client running some JavaScript and updating a progress bar. In my development environment and on our development server, the SSEs arrive in near-real-time at the client. I can see them arriving (along with their timestamps) using Chrome dev tools, and the progress bar updates smoothly.
However, when I deploy to our production environment, I observe different behaviour. The events do not arrive at the browser until the long-running process completes. Then they all arrive in a burst (the events all have timestamps within a few hundred milliseconds of each other, according to dev tools). The progress bar is stuck at 0% for the duration and then jumps to 100% almost instantly. Meanwhile, my server logs tell me the events were generated and sent at regular intervals.
Here's the relevant server side code:
import java.io.IOException;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter.SseEventBuilder;

public class LongRunningProcess extends Thread {

    private final SseEmitter emitter;

    public LongRunningProcess(SseEmitter emitter) {
        this.emitter = emitter;
    }

    @Override
    public void run() {
        ...
        // Sample event, representing 10% progress
        try {
            SseEventBuilder event = SseEmitter.event();
            event.name("progress");
            event.data("{ \"progress\": 10 }"); // Hand-coded JSON
            emitter.send(event);
        } catch (IOException e) {
            // send() declares a checked IOException (e.g. if the client has disconnected)
            emitter.completeWithError(e);
        }
        ...
    }
}
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

@RestController
public class UploadController {

    @GetMapping("/start")
    public SseEmitter start() {
        // The no-arg constructor uses the container's default async timeout;
        // a timeout (in ms) can be passed to the constructor for very long processes.
        SseEmitter emitter = new SseEmitter();
        LongRunningProcess process = new LongRunningProcess(emitter);
        process.start();
        return emitter;
    }
}
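(As an aside, I know that subclassing Thread directly isn't the most idiomatic approach; a variant that hands the same Runnable to an ExecutorService would look roughly like the sketch below, and behaves the same way for the purposes of this question.)

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

@RestController
public class UploadController {

    // Thread implements Runnable, so LongRunningProcess can be submitted as a task.
    private final ExecutorService executor = Executors.newCachedThreadPool();

    @GetMapping("/start")
    public SseEmitter start() {
        SseEmitter emitter = new SseEmitter();
        executor.execute(new LongRunningProcess(emitter));
        return emitter;
    }
}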
Here's the relevant client-side Javascript:
const src = new EventSource("https://www.example.com/app/start");
src.addEventListener('progress', function(event) {
    // Parse event.data (JSON) and update the progress bar accordingly
});
I believe my code is fairly typical and it works just fine in DEV. However, if anyone can see an issue, please let me know.
The issue could be related to the configuration of our production servers. DEV and PROD are running the same version of Tomcat. However, some of the servers are accessed via a load balancer (an F5 in our case), and almost all of them sit behind a CDN (Akamai in our case). Could some part of this setup be buffering (or queuing or caching) the SSEs in a way that would produce what I'm seeing?
Following up on the infrastructure configuration idea, I've observed the following in the response headers. In the development environment, my browser receives:
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Connection: Keep-Alive
Content-Type: text/event-stream;charset=UTF-8
Keep-Alive: timeout=15, max=99
Pragma: no-cache
Server: Apache
Transfer-Encoding: chunked
Via: 1.1 example.com
This is what I'd expect for an event stream: a chunked response of unknown content length. In the production environment, my browser receives something different:
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Connection: keep-alive
Content-Type: text/event-stream;charset=UTF-8
Content-Encoding: gzip
Content-Length: 318
Pragma: no-cache
Vary: Accept-Encoding
Here the returned content has a known length and is compressed. I don't think this should happen for an event stream. It appears that something is converting my event stream into a single response. Any thoughts on how I can figure out what's doing this?
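For what it's worth, a quick way to compare the headers returned from different points in the path might be something like the probe below (the origin.example.com hostname is made up; substitute whatever lets you hit the Tomcat origin directly, bypassing the CDN and load balancer):

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.List;
import java.util.Map;

public class HeaderProbe {
    public static void main(String[] args) throws Exception {
        String[] urls = {
            "https://www.example.com/app/start",    // via Akamai + F5
            "https://origin.example.com/app/start"  // hypothetical direct-to-origin URL
        };
        for (String u : urls) {
            HttpURLConnection conn = (HttpURLConnection) new URL(u).openConnection();
            conn.setRequestProperty("Accept", "text/event-stream");
            System.out.println("== " + u);
            // Dump every response header so differences in Content-Length,
            // Content-Encoding and Transfer-Encoding between hops are easy to spot.
            for (Map.Entry<String, List<String>> h : conn.getHeaderFields().entrySet()) {
                System.out.println(h.getKey() + ": " + h.getValue());
            }
            conn.disconnect();
        }
    }
}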
So what are Server-Sent Events? SSE is an HTTP standard in which a client subscribes to a "stream" and the server sends messages (an "event-stream") to the client until either side closes the connection. It is up to the server to decide when and what to send, for instance as soon as data changes. The major difference from WebSockets is that WebSockets are bidirectional (client and server can both send), while SSE is unidirectional (the client only receives data from the server).
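On the wire, an event stream is just plain text in the text/event-stream format; the sample event emitted by the code above would look like this (the blank line terminates the event, and the event name is what addEventListener('progress', ...) matches on):

event: progress
data: { "progress": 10 }
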
It took a significant amount of investigation to determine that the cause of the issue was the elements in our network path. So the code above is correct and safe to use. If you find SSEs being buffered, you will most likely want to check the configuration of the networking elements in that path.
In my case, it was Akamai as our CDN and the use of an F5 device as a load balancer. Indeed it was the fact that both can introduce buffering that made it quite difficult to diagnose the issue.
Akamai Edge servers buffer event streams by default. This can be disabled through the use of Akamai's advanced metadata and controlled via custom behaviours. At the time of writing, this cannot be controlled directly through Akamai's portal, so you will need to get their engineers to do some of the work for you.
F5 devices appear to default to buffering response data as well. Fortunately, this is quite simple to change and can be done yourself via the device's configuration portal. For the virtual device in question, go to Profile : Services : HTTP and change the configuration of Response Chunking to Preserve (in our case it had defaulted to Selective).
Once I made these changes, I began to receive SSEs in near real-time from our PROD servers (and not just our DEV servers).
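One further thing that may help (this is an assumption on my part rather than something I needed in the end): the server can emit headers that ask intermediaries not to transform or buffer the response. Cache-Control: no-transform is standard HTTP and should discourage the gzip transform seen in the PROD headers above; X-Accel-Buffering: no is an nginx-specific hint and only matters if an nginx proxy happens to sit in the path. A rough sketch of setting these from the controller:

import javax.servlet.http.HttpServletResponse;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

@RestController
public class UploadController {

    @GetMapping("/start")
    public SseEmitter start(HttpServletResponse response) {
        // Hint to intermediaries not to transform (e.g. gzip) or buffer the stream.
        response.setHeader("Cache-Control", "no-cache, no-transform");
        response.setHeader("X-Accel-Buffering", "no"); // honoured by nginx only
        SseEmitter emitter = new SseEmitter();
        new LongRunningProcess(emitter).start();
        return emitter;
    }
}

Whether your particular CDN or load balancer respects these hints is something you would need to verify; in our case the Akamai and F5 configuration changes above were what actually fixed the problem.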